Features of Streptococcus agalactiae strains recovered from pregnant women and newborns attending different hospitals in Ethiopia

Background
Streptococcus agalactiae (Group B Streptococcus, GBS) serotypes, sequence types, and antimicrobial resistance profiles vary across geographic locations, affecting disease patterns in newborns. These differences are important considerations for vaccine development efforts, and data from large countries in Africa are limited. The aim of this study was to determine serotypes and genotypes of GBS isolates from pregnant women and their newborns in Ethiopia.

Methods
A hospital-based cross-sectional study was conducted at three hospitals in Ethiopia from June 2014 to September 2015. Of 225 GBS isolates collected, 121 were recovered, confirmed and characterized at CDC's Streptococcus Laboratory using conventional microbiology methods and whole genome sequencing.

Results
Of the 121 isolates, 87 were from rectovaginal samples of pregnant women, 32 from different body parts of their newborns and 2 from blood of newborns with suspected sepsis. There were 25 mother-infant pairs, and 24 pairs had concordant strains. The most prevalent serotypes among mothers and their babies were II, Ia and V (mothers: 41.5, 20.6 and 19.5%; newborns: 40.6, 25 and 15.6%, respectively). Multilocus sequence typing (MLST) on 83 isolates showed ST10 (24; 28.9%) and ST2 (12; 14.5%) as the most predominant sequence types. All GBS strains were susceptible to penicillin, cefotaxime and vancomycin, which correlated with the presence of wild-type PBP2x types and the lack of known vancomycin-resistance genes. Tetracycline resistance was high (73; 88%), associated primarily with tetM but also with tetO and tetL. Five isolates (6%) were resistant to erythromycin and clindamycin, and 3 isolates were fluoroquinolone-resistant, containing associated mutations in the gyrA and parC genes. All isolates were positive for one of four homologous Alpha/Rib family determinants and 1–2 of the three main pilus types.

Conclusions
Predominant serotypes were II, Ia, and V. A limited number of clonal types were identified, with two STs accounting for about half of the isolates. All strains collected in this study were susceptible to beta-lactam antibiotics and vancomycin. Typical of most GBS, these isolates were positive for a single alpha-like family protein and a serine-rich repeat gene, as well as 1–2 pilus determinants.

Supplementary Information
The online version contains supplementary material available at 10.1186/s12879-020-05581-8.

Background
Group B Streptococcus (GBS) is a recognized cause of infant sepsis and meningitis globally and is a leading cause of morbidity and mortality in Africa [1]. GBS is also a common commensal colonizer of the gastrointestinal and urogenital tracts of women, and maternal colonization is a major factor in mother-to-child GBS transmission. Early-onset (< 7 days) GBS disease has been well characterized, whereas the epidemiology of late-onset disease (LOD, 7-89 days) is less well understood [2]. Penicillin G is the drug of choice for intrapartum prophylaxis [3], and GBS isolates remain mostly susceptible to penicillin. Its prophylactic use has been instrumental in significantly reducing the incidence of early-onset disease in neonates [4]. However, GBS isolates with reduced susceptibility to penicillin have been reported for more than a decade [5].
Erythromycin and clindamycin have been used as alternatives to prevent vertical transmission of GBS among pregnant women who are allergic to penicillin, but resistance to these antibiotics has emerged in several countries, including reports from Africa [6,7]. GBS strains are subdivided according to type-specific capsular polysaccharides into 10 unique serotypes which are also a major focus of vaccine development [8]. Serotypes I to V account for about 98% of colonizing GBS isolates worldwide with serotype III usually associated with invasive disease and less common among colonization isolates. GBS serotype distribution is not uniform across different geographic regions and temporal variations have also been described [9]. As information on GBS from their genomes has accumulated, molecular methods have also proved very useful for investigating the population structure of GBS and discriminating differences between strains isolated from different sources [5,[10][11][12]. For example, the ST17 lineage has been associated with neonatal infections, particularly with late-onset disease [11]. There are previous reports showing diversification and shifts in serotypes [5,12,13] and abundant evidence of past capsular switching within several MLST-based lineages [12,14] which also pose potential challenges for the development of a vaccine. Global data on circulating GBS strains is important for disease control and for informing the development of effective vaccines. However relatively few studies are available from Sub-Saharan Africa [1] and there are limited data on serotypes and strain characterization [7]. Ethiopia is an important country with a substantial birth cohort in Africa, and while there are previous data published from Ethiopia on antibiotic susceptibility patterns for GBS, there are limited available detailed descriptive data on circulating strains [7,15]. Here we used whole genome sequencing to investigate serotype distribution, clonal relationships, lineage distributions, virulence factor determinants, and antimicrobial susceptibility patterns of S. agalactiae strains recovered from pregnant mothers and their newborns attending three hospitals in Ethiopia. Study population This prospective, cross-sectional study was conducted at 3 hospitals in Ethiopia: Adama Hospital Medical College (AHMC), Hawassa University Comprehensive Specialized Hospital (HUCSH) and Tikur Anbessa Specialized Hospital (TASH) between June 2014 and September 2015. The three hospitals were selected based on their convenience. AHMC is a rural hospital located in Adama City in the Oromia regional state. It is located 100 km due east of Addis Ababa and has a total population of 220,212. HUCSH, a rural hospital, is located in Hawassa which is the capital city of Southern Nations Nationalities and Peoples Region (SNNRP) and is located 275 km south of Addis Ababa. The total population of Hawassa town is 235,000. TASH is an urban hospital located in Addis Ababa, the capital city of Ethiopia, with a large population size of 3,384,569. Eligibility criteria Pregnant women who were admitted for delivery along with their newborn were included. Pregnant women with cesarean section delivery and those who were on antibiotic treatment for the last 2 weeks prior to data collection were excluded. 
Newborns who were suspected of neonatal disease, those with signs and symptoms of neonatal disease (breathing problem, reduced movement, reduced suckling, seizure, slow or increased heart rate, vomiting, increased or reduced body temperature). At each study site, to isolate GBS from pregnant women and newborns, recto-vaginal swabs from mothers and samples from the nasal area, external ear, umbilical cord or throat area of newborns were placed into Lim broth (BD Diagnostics, USA) and incubated for 18-24 h at 37°C in CO 2 enriched atmosphere. Then sub-cultured onto sheep blood agar (BD Diagnostics, USA) and incubated in CO 2 enriched atmosphere at 37°C for 18-24 h. If there was no growth, blood agar plate was re-incubated and examined after 48 h. Isolation of Bacteria To isolate GBS from newborns suspected of early onset disease, about 1 ml blood was inoculated into Tryptone Soy Broth (BD Diagnostics, USA). All blood cultures were incubated aerobically at 37°C and inspected daily for 7 days for the presence of visible microbial growth by observing any of one of the following changes: turbidity, hemolysis, gas production and coagulation of broth. Blood cultures with sign of microbial growth were sub-cultured onto blood agar (BD Diagnostics, USA). The blood agar plate was incubated aerobically in CO 2 enriched atmosphere at 37°C for 24-48 h. To identify GBS, hemolytic reaction on BAP (betahemolytic or non-hemolytic), Gram reaction, catalase test, CAMP (Christie, Atkins, and Munch-Petersen) test and Strep B Grouping Latex (Remel, USA) were used. Isolates were stored at − 70°C in medium containing skim milk, tryptone, glucose, and glycerol [16] and transported to the Streptococcus Laboratory at the Centers for Disease Control and Prevention for confirmation and characterization. For several analyses, where more than one isolate was available for mothers and/or babies and they had the same serotype and MLST type, only a single isolate was selected so as not to duplicate results. DNA extraction and whole genome sequencing At CDC, isolates were cultured on Trypticase soy agar supplemented with 5% sheep blood (BAP). A positive CAMP test and Strep. B Grouping Latex (Remel, USA) were used to confirm isolates as S. agalactiae. For whole genome sequencing (WGS), GBS isolates were cultured on BAPs and incubated overnight at 37°C. Genomic DNA was extracted manually using a modified QIAamp DNA mini kit protocol (Qiagen, Inc., Valenica, CA, USA) (https://www.cdc.gov/streplab/downloads/pcrbody-fluid-dna-extract-strep.pdf). Nucleic acid concentration was quantified by Qubit assay (Thermo Fisher Scientific Inc., Waltham, MA, USA) and samples were sheared using a Covaris M220 ultrasonicator (Covaris, Inc., Woburn, MA, USA) programmed to generate 500-bp fragments. Libraries were constructed on the SciCloneG3 (PerkinElmer Inc., Waltham, MA, USA) using an Ovation Rapid multiplex library preparation kit with 96 dual indices (NuGEN, San Carlos, CA, USA) and quantified by KAPA qPCR library quantification method (Kapa Biosystem Inc., Wilmington, MA, USA). Short read sequences were generated with MiSeq v2 500 cycle kit (Illumina Inc). Isolate identifiers, pipeline features, and assembly metrics are listed in Supplementary Table 1. For the 119 isolates that yielded high quality sequencing metrics with contig size below 500, sequences are available in the NCBI repository and accession number provided in Supplemental Table 1. 
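A minimal sketch of the kind of assembly screen described above, reading the phrase "contig size below 500" as a cap on the number of contigs per draft assembly. The metrics file layout and the column names (isolate_id, num_contigs) are assumptions made for illustration; they are not the pipeline's actual fields.

```python
import csv

def passes_qc(num_contigs: int, max_contigs: int = 500) -> bool:
    """Keep an assembly only if its contig count stays below the cutoff."""
    return num_contigs < max_contigs

def filter_assemblies(metrics_csv: str) -> list:
    """Return the isolate IDs whose assemblies pass the simple QC rule."""
    kept = []
    with open(metrics_csv, newline="") as handle:
        for row in csv.DictReader(handle):
            if passes_qc(int(row["num_contigs"])):
                kept.append(row["isolate_id"])
    return kept
```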
Multilocus sequence typing Seven locus MLST to assign sequence type (ST) was facilitated from whole genome sequence data with the CDC pipeline using SRST2 and database at http:// pubmlst.org/sagalactiae/ [5]. eBURST was used to group isolates into lineages or clonal complexes (CCs), based upon sharing at least six of 7 MLST alleles with one or more other members [18]. The relations between STs and serotypes of GBS isolates were illustrated by the minimum spanning tree (PHYLOVIZ software version 2.0; PHYLOViZ team, Lisbon, Portugal). Antimicrobial susceptibility testing Antimicrobial susceptibility patterns of GBS were tested by broth microdilution. The antimicrobials tested included penicillin, cefotaxime, erythromycin, clindamycin, levofloxacin, vancomycin, daptomycin, tetracycline, and linezolid. Isolates were classified as sensitive, intermediate, or resistant according to Clinical Laboratory Standards Institute (CLSI) guidelines [19]. Strains were determined to be multidrug resistant if resistant to ≥3 different antibiotic classes. Phenotypic MICs were compared with WGS-predictions for non-β-lactam antibiotics (except daptomycin) using sequence queries and a bioinformatics pipeline (https://github.com/BenJames-Metcalf/Spn_Scripts_Reference) for detection of resistance determinants provided in a previous study [5]. The PBP2x typing scheme used serves to flag missense mutations within the pbp2x gene for subsequent isolate MIC testing for beta-lactam antibiotics. Source, serotype and MLST profiles of S. agalactaie Among 225 S. agalactiae isolates collected at the 3 study sites, only 121 isolates were recovered and confirmed as GBS at CDC's Streptococcus Laboratory for further characterization. Of these 121 GBS isolates, 87 were from rectovaginal samples from healthy pregnant women, 32 from different body parts of their healthy newborns (ear, throat, and nasal), and 2 from newborns with suspected early-onset disease. Twenty-eight isolates (22.8%) were collected at AHMC, Oromia Regional state (June 2014 to October 2014); 60 (48.8%) at HUCSH, Sidama Regional State (November 2014 to March 2014) and 33 (27.3%) were from TASH, Addis Ababa, the capital city of Ethiopia (March 2015 to August 2015). Of the 121 isolates, there were 25 mother-infant pairs and nine cases with > 1 isolate for mother or baby. A total of 83 non-duplicated isolates were identified for analysis of antimicrobial susceptibility and genotypes. Most isolates were positive for pilus subunit queries PI-1 and PI-2a (24 serotype II, 17 serotype V, 11 serotype Ib, and 5 serotype III), while seven isolates (all serotype Ia) were positive for PI-1 alone. Isolates with serotypes Ia (n = 7), II (n = 3) and V (n = 1) were PI-2a positive. The pilus subunit query for PI-2b was identified in serotype Ia (4 isolates), II (3 isolates) and in a single serotype IV isolate in the study (Table 1). Antimicrobial susceptibility profile, resistance determinants, virulence factor determinants All 83 non-duplicated GBS strains collected in the study were susceptible to penicillin, linezolid, cefotaxime, and vancomycin ( Table 2). PBP2X types were restricted to types, 1, 4 and 5. The majority of isolates were resistant to tetracycline (n = 73; 88%) conferred by tetM (48; 57.8%), tetL + M (24; 28.9%) or tetO (1; 1.2%) ( Table 1). All GBS strains of serotypes II, III, IV and V were resistant to tetracycline ( Table 2). Five isolates within serotypes III and V were nonsusceptible to erythromycin. 
These included 4 inducibly clindamycin-resistant isolates carrying ermTR and one constitutively clindamycin-resistant ermB-positive strain. Two isolates with ermTR were also positive for lnu genes. Three isolates of serotype V/ST19 had levofloxacin MICs > 4 μg/ml and had mutations within both the gyrA and parC genes (Table 1); these same isolates also carried the aac6-aph2 determinant that confers gentamicin resistance. Five isolates were considered MDR, including the 3 resistant to levofloxacin. (Table 1 footnote: ST933 is a new ST identified in this study.)

Discussion
Vaccination of pregnant women against GBS is a promising strategy to prevent invasive GBS disease in their infants [8]. Vaccine candidates include protein-based formulations and serotype-specific polysaccharide-protein conjugates [20], and thus an understanding of serotype and surface-protein antigen distribution in maternal colonization and infant disease worldwide is important. While there have been several reports published from Ethiopia on maternal and infant GBS colonization and disease over the past decade, little information on GBS strain characteristics has been described. In this study, serotype II was the most common predicted serotype, with five types (Ia, Ib, II, III and V) accounting for the vast majority of isolates. This is consistent with maternal colonization serotypes described in other studies from Africa and globally [7,9]; however, serotype II is not usually the predominant serotype, serotypes III and V being more common [9]. Globally, the vast majority of invasive and colonizing GBS isolates are grouped into five CCs (CC1, CC10, CC17, CC19, and CC23) [21,22]. In this study, 88% of isolates were grouped into one of these CCs, although no isolates belonging to CC1 or CC17 were identified, emphasizing the diversity of S. agalactiae in human isolates and highlighting the potential for local geographic differences. A recent study from Northwest Ethiopia grouped GBS into four CCs (CC1, CC10, CC19 and CC23) [23]. Similar to findings by others [21,22], we also saw evidence of past capsular locus replacement events within 2 genetic lineages (ST19 and ST196), which may have implications for vaccine development strategies. Surface protein antigens play an important role in the pathogenesis of GBS infection, and several of these antigens have been documented as promising vaccine targets [24]. Our data are consistent with past work indicating that a vaccine containing the 3 pilus protein components could be effective in preventing disease caused by GBS, as all strains carried at least one of the pilus proteins [25]. Similarly, in all of the isolates one of four highly related Alpha family proteins (Alpha, Rib, Alp2/3, Alp1) was detected, suggesting possible broad coverage of the fused N-terminal domain Rib-Alpha (GBS-NN) vaccine tested in phase I clinical trials in pregnant women [26]. The HvgA surface-anchored protein has been found to be critical for GBS intestinal colonization and translocation across the blood-brain barrier during the onset of meningitis [27]. There were no isolates from meningitis cases in this study, but a small proportion of GBS colonization isolates (2.4%) contained the hvgA gene. The hvgA gene has been detected in previous studies, primarily among highly virulent S. agalactiae
belonging to serotype III/ST17 [27] and rarely found among non-serotype III isolates [14,28]. Two hvgA-positive isolates in our sampling were of serotype Ia/ST934. S. agalactiae has generally been considered universally susceptible to penicillin, although there have been several reports of isolates with mutations associated with decreased susceptibility over the last decade [12,29,30]. Here, PBP2x types were restricted to types 1, 4 and 5, commonly seen in penicillin-susceptible US isolates [5], and all isolates were sensitive to penicillin and cefotaxime. This supports data published from Jimma, Ethiopia [31]; however, several previous colonization studies in Ethiopia have documented large numbers of GBS strains resistant to penicillin [15,32]. The differences in these reported rates warrant further investigation to determine whether they are real or due to challenges with appropriate and accurate laboratory testing for GBS species identification and antibiotic resistance. The proportion of isolates with in vitro resistance to both erythromycin and clindamycin was similar to rates described by Mengist et al. [31] but lower than those reported from other regions of Ethiopia [7,32] and other countries [7,12,33]. Combined resistance to erythromycin and clindamycin in GBS is most commonly due to 23S rRNA methylases encoded by different erm genes, which is in line with our findings of ermTR and ermB determinants predominantly associated with macrolide and lincosamide resistance [5]. The high level of tetracycline resistance, associated with the tetM resistance determinant, may strengthen the hypothesis that the S. agalactiae strains currently circulating globally in humans were selected by tetracycline usage in the 1940s [34]. The proportion of isolates resistant to levofloxacin was similar to that reported from the United States [5,12], Taiwan [35], Italy [36] and Brazil [37], and lower than that reported from China [38] and Canada [39]. (Table 2 footnotes: for daptomycin, CLSI provides only a susceptible (S) interpretation, and MICs > 1 were considered non-susceptible; one serotype III isolate and three serotype V isolates that were clindamycin susceptible were D-zone test positive and were considered inducibly resistant.) A key strength of our study was the molecular characterization of GBS isolates from a region of the world with limited data on this subject. A major challenge and limitation was the reduced recovery rate of GBS strains at CDC, which left only about half of the isolates available for additional testing. The inability to recover GBS from the initial frozen stocks could partly be due to loss of viability during storage and transport, and a number of samples were also heavily contaminated, which may also have contributed to the reduced recovery.

Conclusions
In summary, the most prevalent GBS serotype was serotype II, followed by Ia, V, Ib, III and IV. This study suggests that circulating GBS in mothers and infants is primarily restricted to five major genetic lineages. All isolates were susceptible to penicillin, and resistance to macrolides was relatively low. This study highlights the importance of additional studies to assess GBS epidemiology and to develop accurate GBS prevention strategies in Ethiopia. Further epidemiological studies are needed for a more detailed understanding of GBS strain distributions and for the subsequent development of vaccination strategies.
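The clonal-complex definition used in the Methods (isolates grouped together when their seven-locus MLST profiles share at least six alleles with at least one other member) lends itself to a small grouping routine. The sketch below is an illustrative, eBURST-style implementation based only on that rule; it is not the software used in the study.

```python
from itertools import combinations

def shared_alleles(profile_a, profile_b):
    """Count loci at which two seven-locus allelic profiles agree."""
    return sum(a == b for a, b in zip(profile_a, profile_b))

def clonal_complexes(profiles, min_shared=6):
    """profiles: dict mapping ST -> tuple of 7 allele numbers.
    Returns lists of STs linked (directly or transitively) as single-locus variants."""
    parent = {st: st for st in profiles}              # union-find forest
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for st_a, st_b in combinations(profiles, 2):
        if shared_alleles(profiles[st_a], profiles[st_b]) >= min_shared:
            parent[find(st_a)] = find(st_b)           # merge the two groups
    groups = {}
    for st in profiles:
        groups.setdefault(find(st), []).append(st)
    return list(groups.values())
```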
Research on FeO Content of Sinter

The FeO content of sinter plays a key role in sinter quality, carbon consumption and production cost. It also has a great effect on the production status, fuel ratio and hot metal output of the blast furnace. According to the relevant national and industry standards, the particle size, reducibility, drum strength and fuel ratio of sinter and the BF production status at the Tangsteel iron-making plant were analyzed in this paper. It is found that the optimal range of FeO content in Tangsteel is from 8.0 to 8.5 wt.%.

Introduction
The FeO content of sinter is very important for efficient production. It directly affects the particle size and cold strength of BF raw materials, thus affecting the reduction performance of FeO. In the iron-making process, sinter with a high FeO content has poor reducibility, which affects the stable operation of the blast furnace and does not meet the needs of the smelting process. Therefore, the study of sinter with a suitable FeO content has attracted much attention. In advanced foreign iron and steel enterprises (such as those in Japan and France), the reasonable range of FeO content of sinter is 4.0-5.0 wt.% [1]. In domestic enterprises (such as Baosteel, Wusteel, Jisteel and Ansteel), the FeO content of sinter is controlled at about 7.0 wt.% [2-5], while for many small and medium-sized steel enterprises in China the FeO content of sinter is relatively high, even more than 10.0 wt.% [6-8]. The control of the FeO content of sinter has been a controversial issue at the Tangsteel iron-making plant. For example, the FeO content of sinter at the Tangsteel iron-making plant had fluctuated within the range of 9.16-12.0 wt.% since the end of 2007, which resulted in many production problems, such as large fluctuations of sinter particle size, unstable blast conditions, wind pressure fluctuations and especially great fluctuations of static pressure, an unstable wall body, difficult control of furnace temperature and heat, and daily coke ratio fluctuations of around 10 kg/t Fe. The FeO content of sinter was controlled between 8.0-9.0 wt.% after 2009, which brought stable furnace conditions, reduced operating difficulty, and easier control of BF production [9-11]. Therefore, the FeO content of sinter must be studied for blast furnace production. The effects of the FeO content of sinter on raw material granularity, reduction performance, drum strength and blast furnace fuel ratio are investigated in this paper.

Sample Preparation
According to the literature review, if the FeO content of sinter is in the range of 7.0-9.0 wt.%, blast furnace production will be stable, the operation will be easy and BF production will be easy to control. Therefore, the experimental test points are randomly distributed within the range of 7.0-9.0 wt.% FeO content so as to find the stable region.

Particle Size Test with Different FeO Content
The raw material of the experiment is the sinter used in the blast furnace at the Tangsteel iron-making plant. After screening, the size fractions of the sinter were <5 mm, 5-10 mm, 10-16 mm, 16-25 mm, 25-40 mm and >40 mm.

Reducibility Test with Different FeO Content
Based on "Iron ores - determination of reducibility" (GB/T 13241-91), the reduction test of the sinter used at the Tangsteel iron-making plant was studied in this paper.
The conditions were as follows: the reaction tube with inner diameter of 75 mm; the sample mass was 500g; the particle size was 10.0-12.5mm; the ratio of reduced gas composition CO to N 2 was 3/7, the reduced gas flow rate was 15L/min; the reduction temperature was 900 C and the reaction time was 180 min. The reducibility of sinter was measured as medium temperature reduction index (RI). Reducibility Test with Different FeO Content According to iron ores-low temperature disintegration testmethod using cold tumbling after static reduction (GB/ T13242-91), the low temperature disintegration of sinter was researched in this paper. The low temperature degradation test was composed of constant temperature reduction test and drum test. The conditions of former test were that the inner diameter of reaction tube was 75mm, the sample mass was 500g, the particle size was 10.0-12.5 mm, the reduction gas was composed of CO 2 -30CO-60N 2 , the reduction temperature was 500 C, the gas flow rate was 15l/min, and the reduction time was 60min. After the reduction reaction, it was cooled to room temperature in N 2 atmosphere, and then weighted and loaded into a drum with φ130 × 200 mm to do the drum test. The rotational speed was 30 rpm and total round number was 300. Then it was respectively sieved by 6.3 mm, 3.15 mm and 0.5 mm square hole screen to get corresponding particle size. And the mass percent of samples larger than 6.3 mm, less than 3.15 mm and less than 0.5 mm were respectively presented as RDI+ 6.3 , RDI -3.15 and RDI -0.5 , which were taken as the indicators of low temperature reduction performance of sinter. Reducibility Test with Different FeO Content The experiment adopted the 1/2ISO standard drum with diameter of 1000×250mm. The sinter with particle size greater than 10 mm was 7.5 kg, which was rotated at 25 rpm for 8 min. The mass percent of particle size greater than 6.3 mm was taken as the drum index, and the mass percent of particle size less than 0.5 mm as abrasion index. The particle size test result of sinter adopted in Tangsteel iron-making plant is shown in Tablele1 and Figure 1. Effect on Sinter Granularity According to the experimental data and curve trend of Tablele1 and Figure 1, it can be seen that the particle size of sinter in the range of less than 5 mm decreases slightly with the increasing of FeO content from 7.5 wt.% to 8.5 wt.%, and then increases slightly with the increasing of FeO content from 8.5 wt.% to 9.0 wt.%, but the increasing quantity is not obvious. In the range of 5-10 mm, the particle size of sinter has a waving trend with the increasing of FeO content, but the variation is not obvious. In the range of 10-16 mm, the particle size of sinter is increased firstly and then decreases with the increasing of FeO content, and the maximum turning point is around 8.0 wt.%, and the variation range is slightly larger than the previous two intervals. With the increasing of FeO content in the range of 16-25 mm, the particle size variation is similar to that in the range of 5-10 mm, but the fluctuation amplitude is relatively large, and the maximum value is 8.5 wt.%. With the increasing of FeO content in the range of 25-40 mm, the particle size decreases firstly and then increases, and the lowest turning point is around 8.0 wt.%. The variation range is larger than that in the range of 10-16 mm. With the increasing of FeO content in the range of over 40 mm, the particle size increases first and then decreases. 
The maximum turning point is around 8.0 wt.% with a slight fluctuation. Based on the above experimental data and analysis results, it can be seen that the best comprehensive distribution area of sinter particle size is within the FeO content of 8.0-8.5wt.%. Effect on Sinter Reducibility As shown in Figure 2, the experimental results show that the reduction of sinter decreased with the increasing of FeO content, and there is a maximum value of 83.76 in the range of FeO content 7.0-7.5 wt.%, while there is also a lowest value 81.61 in the range of FeO content 8.5-9.0 wt.%. The difference between the highest point and the lowest point is about 2.15, thus, the difference is not big. According to the above experimental results and analysis, FeO content has no significant influence on the reduction of sinter within the range of 7.0-9.0 wt.%, but the overall trend is decreasing. Based on the RDI +6.3 curve of Figure 3, it can be seen that the overall trend of low temperature disintegration index is gradually decreased with the increasing of FeO content of sinter from 7.0 wt.% to 9.0 wt.%, and when the FeO content is in the range of 8.0-9.0 wt.%, the fluctuation extent of index is such small that it could be considered as no change; the overall trend of RDI -3.15 is gradually increased with the increasing of FeO content, and when the FeO content is in the range of 8.0-8.5 wt.%, the increasing extent of index is very small, which can be considered as no change; the overall trend of RDI -0.5 is gradually decreased with the increasing of FeO content of sinter, but the decreasing extent is smaller than that of RDI +6.3 , and when the FeO content is in the range of 8.0-9.0 wt.%, the increasing extent of index is very small, which can be considered as no change. From the experimental results and analysis, it can be seen that the excellent area of low temperature disintegration index of sinter is within the range of 8.0~8.5 wt.% FeO content. Effect on Sinter Tumbler Strength The influence of different FeO content on drum strength of sinter is shown in Figure.4. The drum strength of sinter increases with the increasing of FeO content in the range of 7.5-9.0 wt.%, but the increasing speed in the range of 7.5-8.0 wt.% is slower than that in the range of 8.0-9.0 wt.%, which could be seen from the slope change of curve segmentation. Therefore, according to the curve and analysis results of Figure 4, the FeO content of sinter has great influence on the drum strength. The overall trend of drum strength of sinter is generally increased within the range of 7.5-9.0 wt.%. Effect on Sinter Fuel Rate The variation of fuel ratio of sinter with different FeO content is shown in Figure 5. The FeO content has little influence on fuel ratio of sinter and the overall trend is gradually increased. In the range of 7.0-9.0wt.% FeO content, the increasing extent is not significant. It is remarkable that the abnormal point (fuel ratio 629kg/t) with about 8.5wt.% FeO content is a test point caused by the abnormal furnace body during the experimental process, which is regarded as an invalid point and can be ignored. Therefore, according to the above experimental results and analysis, it is considered that the fuel ratio of sinter is increased slightly with the increasing of FeO content in the range of 7.0-9.0 wt.%, and its value fluctuation is within the normal index range of BF production. The influence of different FeO content on the utilization coefficient and gas utilization rate of blast furnace is shown in Figure 6. 
The FeO content of sinter has no significant influence on utilization coefficient and gas utilization rate of blast furnace. BF utilization coefficient is increased with the increasing of FeO content within the range of 7.0-9.0 wt.%, while the increasing extent is slight and can be ignored. Therefore, according to the above experimental results and analysis, it is considered that FeO content within the range of 7.0-9.0 wt.% has little influence on the utilization coefficient and gas utilization rate of blast furnace. To sum up, according to the data analysis and research about the influence of different FeO content sinter on the particle size, reducibility, drum strength and fuel ratio, when the FeO content is within the range of 8.0-8.5 wt.%, the sinter metallurgical properties are sTablele and conducive to the smooth production of blast furnace. Effect on Blast Furnace Production According to the previous study on the influence of different FeO content (FeO content within the range of 7.0-9.0 wt.%) on particle size, reduction performance, drum strength and fuel ratio of sinter, it is found that the comprehensive performance index of sinter is better when the FeO content is within the range of 8.0-8.5 wt.%. However, in order to verify whether the effect of different FeO content of sinter on blast furnace production is consistent with the experimental results, the actual production data of No.3 blast furnace in Tangsteel is tracked and collected, and the results are shown in Figure 7. The curve about influence of different FeO content on blast volume is shown in Figure 7 (b). From the curve trend, it can be seen that the fluctuation of blast volume gradually becomes sTablele with the increasing of FeO content of sinter, and the minimum fluctuation area is within the range of 7.9-8.6 wt. % FeO content. The curve about influence of different FeO content on BF utilization coefficient is shown in Figure 7(c). The fluctuation frequency of utilization coefficient of blast furnace decreases with the increasing of FeO content of sinter. And the fluctuation of BF utilization coefficient is relatively slow within the range of 7.8-8.6wt.%. The curves about influence of different FeO content on drum index, permeability and gas utilization rate are shown in Figure 7(d). From the curves trend, it can be seen that the drum index has no obvious change with the increasing of FeO content, thus, the FeO content has little effect on drum index. However, the effect on sinter permeability and gas utilization rate is gradually reduced, and the most influential region is within the range of 7.8-8.6 wt.%. According to the collected data and analysis results, it is concluded that the FeO content has good influence on BF production within the range of 7.9-8.6 wt.%, which is basically consistent with the previous experimental results (thus, the most suiTablele FeO content of sinter is 8.0-8.5 wt.%). Therefore, the experimental result in this paper has guiding significance for BF actual production in Tangsteel iron-making plant. Conclusions Based on the sampling research of metallurgical properties of sinter in Tangsteel iron-making plant and systematical study on the effect of sinter FeO content on particle size, reduction performance, drum strength and BF fuel ratio, the analysis results are as follows: The increasing of FeO content of sinter has little impact on particle size of raw material. 
The reduction performance and reduction degradation index of sinter slowly decrease, while the drum strength and blast furnace fuel ratio slowly increase with the increasing of FeO content. The best metallurgical property is within range of 8.0-8.5 wt.% FeO content. According to the analysis results of actual production data of No.3 blast furnace in Tangsteel iron-making plant, the comprehensive metallurgical property of sinter is the best within the range of 7.9-8.6 wt.%FeO content, which is basically consistent with the test result.
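As a point of reference for the sieving-based indices used in the low-temperature disintegration and tumbler tests described earlier, each index is simply a mass percentage of the corresponding sieved fraction. The sketch below illustrates the arithmetic; the variable names are ours and no plant data are reproduced.

```python
def rdi_indices(mass_over_6p3, mass_3p15_to_6p3, mass_0p5_to_3p15, mass_under_0p5):
    """Low-temperature reduction degradation indices from the sieved masses (same units)."""
    total = mass_over_6p3 + mass_3p15_to_6p3 + mass_0p5_to_3p15 + mass_under_0p5
    rdi_plus_6p3 = 100.0 * mass_over_6p3 / total                           # RDI +6.3
    rdi_minus_3p15 = 100.0 * (mass_0p5_to_3p15 + mass_under_0p5) / total   # RDI -3.15
    rdi_minus_0p5 = 100.0 * mass_under_0p5 / total                         # RDI -0.5
    return rdi_plus_6p3, rdi_minus_3p15, rdi_minus_0p5

def tumbler_indices(mass_over_6p3, mass_under_0p5, total_mass):
    """ISO-style drum (>6.3 mm) and abrasion (<0.5 mm) indices as mass percentages."""
    return 100.0 * mass_over_6p3 / total_mass, 100.0 * mass_under_0p5 / total_mass
```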
Testing Bell inequalities at the LHC with top-quark pairs Entanglement between the spins of top-quark pairs produced at a collider can be used to test a (generalized) Bell inequality at energies never explored so far. We show how the measurement of a single observable can provide a test of the violation of the Bell inequality at the 98% CL with the data already collected at the Large Hadron Collider and at the 99.99% CL with the higher luminosity of the next run. Introduction.-A characteristic property of a quantum system is the presence of quantum correlations (entanglement) among its constituents not accounted for by classical physics (for a review, see [1]), leading to the violation of specific constraints, the so-called Bell inequalities [2,3].The violation of Bell inequalities requires the presence of the strongest version of quantum non-locality; although weaker forms of non-classical correlations have been identified, they play no role in our considerations. Quantum correlations can readily be studied in a bipartite system made of two spin-1/2 particles [4]. This physical system is routinely produced at colliders and the spin correlations among quark pairs have been shown [5,6] to be a powerful tool in the physical analysis-limited aspects of which have been already studied by the experimental collaborations at the LHC on data at 7 [7], 8 [8] and 13 TeV [9] of center-of-mass (CM) energy. In this Letter, we focus on top-antitop pairs produced at the Large Hadron Collider (LHC) and identify a single observable probing the presence of quantum correlations among their spins. The measurement of such an observable provides a test of a (generalized) Bell inequality. Many experiments have been performed to test analogous inequalities in various quantum systems involving photons and atoms [4,[10][11][12][13]. Similar tests in the high-energy regime of particle physics have been suggested by means of e + e − collisions [14], neutral meson physics [15,16], Positronium [17], Charmonium decays [18] and neutrino oscillations [19]. No test has so far been performed at the high energies made available by the LHC-even though some preliminary work has been done in [5,6] and more recently in [20]. In particular, we build on the results of [20] in which the entanglement of the top-quark pairs and the kinematical regions where it could be maximal were identified and explicitly discussed. Let us stress that all these tests involve the direct measurement of the joint probabilities entering the various inequalities and therefore might be affected by the so-called loopholes, depending on the specific characteristics of the used setups. Our approach is quite different and unexplored: the focus is not on the probabilities of joint events, specifically top-quark pair spin projection measurements, but rather on their mutual spin correlations. Such a measurement of correlations will provide evidence against a whole class of local completions of quantum mechanics by explicitly exposing their internal inconsistency. In order to be validated, these classical theories will need to reproduce both the probabilities entering the Bell inequality and the averages of the spin correlation matrix through the presence of auxiliary stochastic variables and do that both at atomic energies and in the extreme relativistic setting of proton collisions at the LHC. 
Reformulating the actual determination of the selected spin observable into a statistical test, we show how the value of this observable can be extracted from the events and the violation quantified at the confidence level (CL) of 98% with the data already collected by the experimental collaborations at the LHC, and at the 99.99% CL (4σ significance) with the higher luminosity of the next run. Detector acceptance, efficiency and migration effects have been taken into account.

Methods.-The quantum state of a two spin-1/2 pair, such as the one formed by a top-quark pair system, can be expressed by the density matrix

$\rho = \frac{1}{4}\Big[\mathbf{1}\otimes\mathbf{1} + \sum_i A_i\,(\sigma_i\otimes\mathbf{1}) + \sum_j B_j\,(\mathbf{1}\otimes\sigma_j) + \sum_{i,j} C_{ij}\,(\sigma_i\otimes\sigma_j)\Big]\,, \qquad (1)$

where $\sigma_i$ are Pauli matrices, $\mathbf{1}$ is the unit 2 × 2 matrix, and the sums over the indices i, j run over the labels of any orthonormal reference frame in three dimensions. The real coefficients $A_i = \mathrm{Tr}[\rho\,(\sigma_i \otimes \mathbf{1})]$ and $B_j = \mathrm{Tr}[\rho\,(\mathbf{1} \otimes \sigma_j)]$ represent the polarization of the two spins, while the real matrix $C_{ij} = \mathrm{Tr}[\rho\,(\sigma_i \otimes \sigma_j)]$ gives their correlations. In the case of the top-quark pair system, $A_i$, $B_j$ and $C_{ij}$ are functions of the parameters describing the kinematics of the quark pair production. In the CM reference frame of the top-quark pair system as produced at a pp collider, the two spin-1/2 quarks fly apart in opposite directions. One can then extract the probability $P(\uparrow\hat{n};\,-)$ of finding the spin of one quark in the state $\uparrow\hat{n}$, with the projection of the spin along the axis determined by the unit vector $\hat{n}$ pointing in the up direction. Similarly, one can consider double probabilities, like $P(\uparrow\hat{n};\,\downarrow\hat{m})$, of finding the projection of the spin of the quark along the unit vector $\hat{n}$ in the up state, while the companion antiquark has the projection of its spin along the direction of a different unit vector $\hat{m}$ in the down state. In classical physics, these probabilities involve averages over suitable distributions of variables and obey a (generalized) Bell inequality, Eq. (2) [11], built from such probabilities for four different three-dimensional unit vectors $\hat{n}_1$, $\hat{n}_2$, $\hat{n}_3$ and $\hat{n}_4$ determining four spatial directions along which the spins of the quark and antiquark can be measured. In quantum mechanics the same probabilities are computed as expectations of suitable spin-observable operators in the state (1), so that the previous inequality reduces to a constraint involving only the spin correlation matrix $C_{ij}$ and not the polarization coefficients $A_i$ and $B_j$:

$\big|\hat{n}_1\cdot C\,(\hat{n}_2 - \hat{n}_4) + \hat{n}_3\cdot C\,(\hat{n}_2 + \hat{n}_4)\big| \le 2\,. \qquad (3)$

In order to test the Bell inequality in Eq. (3), one needs to experimentally determine the matrix C and then suitably choose four spatial directions $\hat{n}_1$, $\hat{n}_2$, $\hat{n}_3$ and $\hat{n}_4$ that maximize the left-hand side of (3). In practice, there is no need to optimize the choice of the $\hat{n}_i$: this maximization has already been performed in full generality in Ref. [12], for a generic spin correlation matrix. Indeed, consider the matrix C and its transpose $C^T$ and form the symmetric, positive, 3 × 3 matrix $M = C^T C$, whose three eigenvalues $m_1$, $m_2$, $m_3$ can be ordered by decreasing magnitude: $m_1 \ge m_2 \ge m_3$. The two-spin state density matrix $\rho$ in (1) violates the inequality (3), or equivalently (2), if and only if the sum of the two greatest eigenvalues of M is strictly larger than 1, that is

$m_1 + m_2 > 1\,. \qquad (4)$

In other words, given a spin correlation matrix C of the state $\rho$ that satisfies (4), there are certainly choices of the vectors $\hat{n}_1$, $\hat{n}_2$, $\hat{n}_3$, $\hat{n}_4$ for which the left-hand side of (3) is larger than 2.
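As a concrete illustration of condition (4), the observable m1 + m2 can be computed directly from any measured 3 × 3 spin-correlation matrix. The snippet below is a minimal sketch; the example matrix is an arbitrary singlet-like illustration, not a measured result.

```python
import numpy as np

def bell_violation_observable(C):
    """Return m1 + m2, the sum of the two largest eigenvalues of M = C^T C."""
    M = C.T @ C
    eigenvalues = np.sort(np.linalg.eigvalsh(M))[::-1]   # descending order
    return eigenvalues[0] + eigenvalues[1]

C_example = -np.eye(3)                                    # maximally correlated example
print(bell_violation_observable(C_example) > 1.0)         # True: inequality (3) can be violated
```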
It should be stressed that the above formulation, based on the relation (2), departs from the more standard approaches adopted in testing Bell inequalities, in particular in quantum optics. While in the standard, direct tests one needs to experimentally determine the expectation values of spin observables entering the Bell inequalities, in the above, indirect approach the actual measure of probabilities is avoided, in favor of the determination of the spin correlation matrix C-the entries of which can be measured by studying the kinematics of the quarkantiquark decay products [6]. In the recent analysis [9], the spin correlations of the top-quark pairs produced at the LHC are analyzed but only after being averaged over a large portion of phase space; the values obtained for the entries of C are in agreement with the inequality (3), for any choice of the four vectorsn i . This agreement is the consequence of the averaging procedure (mixing) that unavoidably reduces the entanglement content of the density matrix ρ. On the other hand, the study in [20] suggests that by focusing on specific, small regions of the phase space, the entanglement of the top-quark pair state could be close to maximal (see also [21]) and the Bell inequality in Eq. (2) could be violated at the maximal level. Results.-The sum of the eigenvalues m 1 +m 2 provides an observable whose value, as extracted from the data, tests whether the Bell inequality in Eq. (3) is violated or not. To compute this observable we collect all the entries of the correlation matrix C as given in the process where = e, µ are taken only in different-flavor combinations, in order to better connect with experimental measurements in this channel. E miss T stands for the transverse missing energy. We simulate full matrix elements for the top quark production and decays through the decay chain formalism built into MadGraph5 [22], which embeds full spin correlations and Breit-Wigner effects, thus excluding only non-resonant diagrams. Within the Standard Model we consider gluon (gg) and quark (qq) initiated top-quark pair production at leading order in the strong and electroweak couplings, using the NNPDF23 [23] leading order parton distributions set and within the four flavor number scheme, thus fully taking into account for bottom quark mass effects. Next-to-leading order corrections in the strong coupling are known to be small on the largest entries of C: at the LHC energies their impact is less than 2% [5], and will be neglected in the following. We assume a CM energy of 13 TeV, setting both the renormalization and factorization scales to the sum of the transverse energies of the final state particles. We follow [6] for the choice of orthonormal basis for the matrix C of Eq. (1). The unit vectorsr andn are built starting from the direction of flightk of the top quark in the top pair CM frame with respect to one of the proton beams directions in the laboratory framep: a, bâb kk −k n sgn(y)n -sgn(y)n r sgn(y)r -sgn(y)r (see Fig. 1) where and Θ represents the top-quark scattering angle. The correlation matrix C can be experimentally accessed through the angular spin correlations of the tt leptonic decays-whose directions of flight in the t andt rest frames are described, respectively, by the unit vectorsˆ ± . 
These angular spin correlations are determined by properly averaging the products $\xi_{ab} = \cos\theta_a^+ \cos\theta_b^-$, where we defined the quantities $\cos\theta_a^+ = \hat{\ell}_+ \cdot \hat{a}$ and $\cos\theta_b^- = \hat{\ell}_- \cdot \hat{b}$, and the labels a and b ∈ {k, n, r} follow the conventions of Table I for the choices of reference axes. Indeed one can show that, in the absence of acceptance cuts, the elements of the 3×3 matrix C can be expressed as [6]

$C_{ab}(m_{t\bar{t}}, \cos\Theta) = -\frac{9}{\sigma}\int d\xi_{ab}\,\frac{d\sigma}{d\xi_{ab}}\,\xi_{ab}\,, \qquad (9)$

with the residual dependence of the cross section σ on cos Θ and the invariant mass $m_{t\bar{t}}$ of the top-quark pair system being understood. The integral of Eq. (9) represents precisely the average of the products $\xi_{ab}$ taken over the leptonic angular phase space. In order to fully take advantage of Eq. (9), the event generation was performed removing any possible kinematic cuts, both in production and decay. The diagonalization of the matrix C, needed to test Eq. (4), can be performed as a function of $m_{t\bar{t}}$ and Θ. The result of this procedure is shown in Fig. 2, whose event statistics benefits from the intrinsic initial-state symmetry Θ → π − Θ. The binning choice represents the best compromise between the expected event statistics at the LHC and the unavoidable dilution of entanglement effects coming from averaging ξ over bigger portions of phase space. We can identify in Fig. 2 two regions where (4) holds, one at $m_{t\bar{t}}$ close to threshold, and another at $m_{t\bar{t}} \gtrsim 0.9$ TeV and $2\Theta/\pi \gtrsim 0.7$. Of these two regions, only the one at large $m_{t\bar{t}}$ presents a constructive sum of the qq and gg contributions, both giving rise to a top-quark pair state close to a pure, maximally entangled state [6,20], which therefore increases $m_1 + m_2$. In the other region close to threshold, qq events produce a mixed state and dilute the gg pure maximally entangled state, even if the qq contribution is subdominant in terms of cross section rates. This difference explains the higher values of $m_1 + m_2$ in the top right corner of Fig. 2, where $m_1 + m_2$ is expected to reach a value as large as 1.6. In order to assess to what extent values of $m_1 + m_2 > 1$ can realistically be used to prove a violation of the generalized Bell inequality of Eq. (3), we study the impact on their determination of statistical uncertainties due to detector resolution, acceptance, efficiency and migration effects. To this aim, we reformulate the problem into a statistical test of the null hypothesis {H0 : $m_1 + m_2 \le 1$}. We compute the significance of the corresponding outcome at the LHC with 139 fb$^{-1}$, the full Run II luminosity, by performing 10 new, independent simulations of the process in Eq. (5), with a fast simulation of the ATLAS detector using the Delphes [24] framework. We require at least two anti-$k_t$ jets with R = 0.4 and at least one b-tagged jet, all with transverse momentum $p_T$ > 25 GeV and rapidity |η| < 2.5. Similarly, both e± and µ± leptons are required to have $p_T$ > 20 GeV and |η| < 2.47. The neutrino momenta from the dileptonic decay are not directly detectable, since only their sum can be inferred through the missing transverse energy $E_T^{miss}$ of the event. The $t$ and $\bar{t}$ momenta thus need to be reconstructed using the neutrino weighting technique [25]. With this method, the sums of the momenta of the candidate reconstructed neutrinos, charged leptons and candidate b jets are constrained to satisfy four equations on the invariant masses of the two candidate W bosons and top quarks. The possible solutions of this underconstrained system are assigned a weight w.
The solution which maximizes w is eventually used to reconstruct the t andt momenta for that event. This procedure allows us to determine the reconstructed distributions which are eventually corrected for detector resolution and acceptance effects using a simplified unfolding procedure. The good agreement between our response matrix and those published for comparable processes [26] shows that migration effects have been properly simulated. Tuning such pseudo-experiments to have a statistics equal to the one expected with present LHC luminosity, we can take the resulting standard deviations s i on m 1 + m 2 as the predicted statistical uncertainty, with detector effects included. In testing the hypothesis, we use a standard χ 2 statistical test, where the sum runs over the set of bins that maximize the Standard Model expected significance for m 1 + m 2 > 1. We find that, under such conditions, the null hypothesis and the violation of Eq. (3) can be assessed at the 98% CL with present Run II luminosity. Moreover, after rescaling this result by the projected luminosity of the LHC full Run III, we expect that it will be possible to test the violation at the 99.99% CL (4σ significance). While systematic uncertainties associated with the unfolding procedure itself are known to be negligible [26], the results hereby presented do not include other theoretical and experimental systematic effects, the inclusion of which is beyond the scope of the present Letters since it would require a more detailed simulation both of the detector and of the collisions conditions, which is only possible for the experimental collaborations. Conclusions.-We have shown that the measurement of a single, suitable defined observable of the top-quark pair system can be used to ascertain quantum correlations among the spins of two quarks and in turn test a Bell inequality with the data already collected at the LHC. This test can provide clear evidence for quantum mechanics in an energy range never explored before. Testing Bell inequalities at high-energy colliders differs from the more familiar tests performed using quantum optics experiments. In the latter, the request of Bell locality can be easily achieved by using spin polarization measurements that are space-like separated; these measurements cannot influence each other, so that the two events have independent statistics. It is not possible to follow the same procedure in the case of the top-quark system because of the obvious restrictions of the detectors employed at the LHC. Nevertheless, there are advantages in studying Bell's inequalities in a high-energy settings along the lines discussed in this Letter. In quantum optics tests, to avoid so-called loop-holes connected to the lack of control of the number of pairs produced that actually impinge in the detectors, and other inevitable inefficiencies, one is generally forced to use inequalities more involved than (2). Although loop-hole free test of Bell inequalities have been recently performed in quantum optics [27,28], none of these problems affect the (indirect) Bell test presented above, as it reduces to the study of the spin correlation matrix C, without the need of any a priori commitment about efficiencies of detectors. We believe that our results will stimulate additional analyses to test Bell inequalities by means of the actual experimental data collected at LHC and motivate further investigations on other possible tests of quantum mechanics at high-energy colliders.
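For concreteness, the averaging in Eq. (9) amounts to estimating each entry of C as minus nine times the mean of the product of lepton direction cosines. The sketch below assumes per-event unit vectors for the lepton directions in the $t$ and $\bar{t}$ rest frames and a dictionary of the {k, r, n} axes; it is an illustration only, not the analysis code of the experimental collaborations.

```python
import numpy as np

def correlation_matrix(l_plus, l_minus, axes):
    """l_plus, l_minus: (N, 3) arrays of unit vectors; axes: dict of the k, r, n unit vectors."""
    labels = ["k", "r", "n"]
    C = np.zeros((3, 3))
    for i, a in enumerate(labels):
        cos_a = l_plus @ axes[a]            # cos(theta_a^+) for every event
        for j, b in enumerate(labels):
            cos_b = l_minus @ axes[b]       # cos(theta_b^-) for every event
            C[i, j] = -9.0 * np.mean(cos_a * cos_b)
    return C
```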
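The statistical test of the null hypothesis m1 + m2 ≤ 1 can be sketched as well. The version below combines Gaussian bin pulls Stouffer-style rather than using the exact chi-square construction of the analysis, ignores bin selection, correlations and systematic uncertainties, and uses placeholder numbers.

```python
import numpy as np
from scipy import stats

def significance(x, s):
    """x: measured m1 + m2 per bin; s: statistical uncertainty per bin."""
    x, s = np.asarray(x), np.asarray(s)
    z_bins = (x - 1.0) / s                            # per-bin pulls against the boundary
    z_combined = np.sum(z_bins) / np.sqrt(len(x))     # Stouffer-style combination
    return z_combined, stats.norm.sf(z_combined)      # one-sided p-value

z, p = significance(x=[1.3, 1.25, 1.4], s=[0.15, 0.20, 0.25])   # placeholder values
print(f"Z = {z:.2f}, CL = {100 * (1 - p):.2f}%")
```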
Doubly Special Relativity in Position Space Starting from the Conformal Group We propose version of doubly special relativity theory starting from position space. The version is based on deformation of ordinary Lorentz transformations due to the special conformal transformation. There is unique deformation which does not modify rotations. In contrast to the Fock-Lorentz realization (as well as to recent position-space proposals), maximum signal velocity is position (and observer) independent scale in our formulation by construction. The formulation admits one more invariant scale identified with radius of three-dimensional space-like hypersection of space-time. We present and discuss the Lagrangian action for geodesic motion of a particle on the DSR space. For the present formulation, one needs to distinguish the canonical (conjugated to $x^\mu$) momentum $p^\mu$ from the conserved energy-momentum. Deformed Lorentz transformations for $x^\mu$ induce complicated transformation law in space of canonical momentum. $p^\mu$ is not a conserved quantity and obeys to deformed dispersion relation. The conserved energy-momentum $P^\mu$ turns out to be different from the canonical one, in particular, $P^\mu$-space is equipped with nontrivial commutator. The nonlinear transformations for $x^\mu$ induce the standard Lorentz transformations in space of $P^\mu$. It means, in particular, that composite rule for $P^\mu$ is ordinary sum. There is no problem of total momentum in the theory. $P^\mu$ obeys the standard energy-momentum relation (while has nonstandard dependence on velocity). Introduction. Doubly special relativity (DSR) proposals [1][2][3][4][5] might be specified as the theories with underlying symmetry group being the Lorentz group 1 , but with kinematical predictions different from that of special relativity. It can be achieved by taking of some deformation of the Lorentz group realization in space of conserved energymomentum. In particular, Magueijo-Smolin (MS) suggestion [2,3] is to take the momentum space realization of the group in the form Λ U = U −1 ΛU, where Λ represents ordinary Lorentz transformation and U(P µ ) is some operator. Ordinary energy-momentum relation (P µ ) 2 = −m 2 is not invariant under the realization and is replaced by [U(P µ )] 2 = −m 2 . It suggests kinematical predictions different from that of special relativity. There is a number of attractive motivations for such a modification (see discussion in [1][2][3][4][5]), in particular, one can believe on DSR as an intermediate theory where the quantum gravity effects are presented even in the regime of neglible gravitational field [9,3]. In turn, it implies that the formulation includes dimensional parameter (U = U(P µ , λ)) in such a way that one recovers special relativity in some limit. The parameter (or parameters, see the recent work [6]) turns out to be one more (in addition to speed of light) observer independent scale present in the formulation. The scale was identified with the Planck energy in [2,3]. The emergence of a new scale was taken as the guiding principle for construction of different DSR models in a number of papers. Modifications with various dimensional scales has been proposed [1-3, 4, 5, 6, 13]. In particular, in the work [6] it was discussed an algebraic construction which implies three scales c, E p , R, with E p identified with the Planck energy and R being the cosmological constant. 
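The invariance of a deformed relation [U(P)]² = −m² under Λ_U = U⁻¹ΛU holds for any invertible map U, which is easy to verify numerically. The toy map used below, U(p) = p/(1 − λp₀), is only a Magueijo-Smolin-like illustration and is not claimed to reproduce the exact operator of refs. [2,3]; units and parameter values are arbitrary.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # Minkowski metric, signature (-,+,+,+)
lam = 0.1                                  # deformation scale (arbitrary units)

U     = lambda p: p / (1.0 - lam * p[0])   # toy nonlinear map on momentum space
U_inv = lambda q: q / (1.0 + lam * q[0])   # its inverse

def boost_x(rapidity):
    """Ordinary Lorentz boost along the x axis."""
    ch, sh = np.cosh(rapidity), np.sinh(rapidity)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = ch
    L[0, 1] = L[1, 0] = sh
    return L

minkowski_sq = lambda p: p @ eta @ p

p = np.array([2.0, 0.3, 0.5, 0.1])
p_def = U_inv(boost_x(0.7) @ U(p))                                # deformed Lorentz transformation
print(np.isclose(minkowski_sq(U(p)), minkowski_sq(U(p_def))))     # True: [U(P)]^2 is invariant
print(np.isclose(minkowski_sq(p), minkowski_sq(p_def)))           # generally False: P^2 is not
```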
To complete the picture, it is desirable to find underlying spacetime interpretation for the DSR kinematics, that is to construct position space realization of the Lorentz group which generates one or another DSR kinematics. Then one could be able to formulate dynamical problems on DSR space in the standard framework, starting from the action functional, which suggests physical interpretation of the results obtained in momentum space formulation. Actually, in this case the spaces of velocities, of canonical (conjugated to the position) momentum, the energy-momentum space as well as their properties and map of one to another can be obtained by direct computations (the issue being rather delicate question in the formulation with energy-momentum space as the starting point [8,9,10]). One expects also that the central problem of the DSR kinematics (the problem of total momentum for multi-particle system) can be clarified in the position space formulation. To find a position version for the given DSR kinematics one needs to decide, in fact, what is the relation among the energy-momentum P µ and the position variables x µ (as well as the canonical momenta p µ ), the approach undertaken in [5,3,8,9,11,13]. Let us enumerate some of the results. Assuming coincidence of P µ and p µ (equivalently, invariance of P dx) [5,10], one obtains the energy-momentum dependent Lorentz transformations in position space. In the algebraic approach [11], position version is encoded in the Poisson brackets of an algebra which unifies the Poincare algebra and the phase space one. It implies noncommutativity 2 of position variables [6]. For the MS kinematics it is possible to take ordinary Lorentz realization on x µ and then to deform standard relation among x and P in some particular way [13]. Then the MS invariant and the MS transformations are generated on the momentum space froṁ x 2 = −m 2 c 2 , x ′ µ = Λ µ ν x ν , which gives a consistent picture in oneparticle sector. Being quite simple, this point of view seems to be unreasonable, mainly due to the fact that it is difficult to construct an addition rule with acceptable physical properties in the multiparticle sector of the theory (see [13] for detailed discussion). Besides the nonlinear MS transformations, the MS energy -momentum relation is invariant also under some inhomogeneous linear transformations [13]. The latter are induced starting from linearly realized Lorentz group in five-dimensional position space. Fer the case, there are different possibilities to relate new scale with fundamental constants. In particular, identification with vacuum energy suggests emergence of minimum quantum of mass [13]. The abovementioned works are devoted to search for space-time interpretation of a given DSR kinematics. Instead of this, one can ask on reasonable deformations of the Lorentz group realization in position space without reference on a particular DSR kinematics [14,5]. We follow this line in the present work. We propose deformation of the Lorentz transformations based on the conformal group. By construction, maximum signal velocity is observer (and space-time) independent scale of the formulation, the latter is described in Section 2. In Section 3 we discuss geodesic motion of a particle, with the Lagrangian action being invariant interval of the DSR space. Kinematics corresponding to the theory is constructed and discussed in some details. In particular, the present formulation turns out to be free of the problem of total momentum in many-particle sector. 
2 Deformation of the Lorentz transformations due to the special conformal transformation. In this Section we motivate that the conformal group in four dimensions seems to be an appropriate framework for formulating DSR models in position space. In ordinary special relativity the requirement of invariance of the Minkowski interval, $ds'^2 = ds^2$, immediately leads to the observer-independent scale $|v^i| = c$. To construct a theory with one more scale, the invariance condition seems to be too restrictive. Actually, the most general transformations $x^\mu \rightarrow x'^\mu(x^\nu)$ which preserve the interval are known to be the Lorentz transformations in the standard realization [15], $x'^\mu = \Lambda^\mu{}_\nu x^\nu$, and the latter does not admit one more invariant scale. So, one needs to relax the invariance condition while keeping, as before, the speed of light invariant. This would be the case if $ds^2 = 0$ implies $ds'^2 = 0$, which guarantees the appearance of the invariant scale c (in the case of the linear relation $x^0 = ct$). Thus, supposing the existence of one more observer-independent scale R, one assumes a deformation of the invariance condition of the form $ds'^2 = A\,ds^2$, with a position-dependent proportionality factor A. By construction, the maximum velocity remains an invariant scale of the formulation. In the limit $R \rightarrow \infty$ one obtains the ordinary special relativity theory. The complete symmetry group for this case is the conformal group (see, for example, [16]). Besides the Lorentz transformations it consists of the dilatations $x'^\mu = \rho x^\mu$ and the special conformal transformations with parameter $b^\mu$, which in the standard form read $x'^\mu = (x^\mu + b^\mu x^2)/(1 + 2(bx) + b^2 x^2)$. Similarly to the previous DSR proposals [2,3,5], let us deform the Lorentz group realization in accordance with the rule $\Lambda_{def} = U^{-1}\Lambda U$. We take the special conformal transformation with some fixed $b^\mu$ as the similarity operator U. The above-mentioned proportionality factor for this case is $A = G^{-2}$. The parameters $b^\mu$ can be further specified by the requirement that space rotations are not modified. Then the only choice is $b^\mu = (\lambda, 0, 0, 0)$, which gives the final form of the deformed Lorentz group realization, Eq. (3). Our convention for the Minkowski metric is $\eta_{\mu\nu} = (-, +, +, +)$. One now confirms the emergence of one more observer-independent scale: there exists a unique vector $x^\mu$ whose zero component is unaltered by the transformations (3). Namely, from the condition $x'^0 = x^0$ one has the only solution $x^\mu = (R \equiv -1/\lambda, 0, 0, 0)$ (which turns out to be the fixed vector). Thus all observers should agree to identify R as the invariant scale. Let us point out that the transformations (3) are equivalent neither to the Fock-Lorentz realization [15] nor to recent DSR proposals (those realizations lead to a varying speed of light). The invariant interval under the transformations (3) can be found by inspection of the transformation properties of suitable quantities constructed from $x^\mu$ and the factor $\Omega$; the resulting quantity, Eq. (8), represents the invariant interval. On the domain where the metric is non-degenerate, the corresponding four-dimensional scalar curvature is zero, while the three-dimensional space-like slice $x^0 = 0$ is a curved space with constant curvature $R^{(3)} = -24/R^2$. To conclude this Section, let us note that deformations of special relativity in some domain by means of the transformation $\Lambda_{def} = U^{-1}\Lambda U$ suggest the (singular) change of variables $X = U^{-1}x$. The variable X has the standard transformation law under $\Lambda_{def}$: $X' = \Lambda X$. This is true for the Fock-Lorentz realization [14] and for the recent DSR proposal [5] (see the discussion in [14,17]).
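To make the defining property of this Section concrete (a special conformal transformation rescales the Minkowski interval, so that $ds^2 = 0$ implies $ds'^2 = 0$), here is a minimal numerical sketch. It is not part of the original paper: it uses the standard textbook form of the special conformal transformation, the choice $b^\mu = (\lambda, 0, 0, 0)$ discussed in the text, and an arbitrary sample point; the only dependency is numpy.

import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])            # Minkowski metric, signature (-,+,+,+)

def sct(x, b):
    """Special conformal transformation x -> (x + b x^2) / (1 + 2 b.x + b^2 x^2)."""
    x2 = x @ eta @ x
    b2 = b @ eta @ b
    bx = b @ eta @ x
    return (x + b * x2) / (1.0 + 2.0 * bx + b2 * x2)

def jacobian(f, x, h=1e-6):
    """Numerical Jacobian J[mu, nu] = d f^mu / d x^nu (central differences)."""
    J = np.zeros((4, 4))
    for nu in range(4):
        dx = np.zeros(4)
        dx[nu] = h
        J[:, nu] = (f(x + dx) - f(x - dx)) / (2.0 * h)
    return J

b = np.array([0.1, 0.0, 0.0, 0.0])               # b^mu = (lambda, 0, 0, 0); lambda chosen arbitrarily
x = np.array([0.3, -1.2, 0.7, 0.4])              # arbitrary sample event

J = jacobian(lambda y: sct(y, b), x)
M = J.T @ eta @ J                                # pulled-back metric
omega2 = M[1, 1]                                 # candidate conformal factor
print(np.allclose(M, omega2 * eta, atol=1e-6))   # True: the interval is only rescaled

Since the transformation merely rescales the metric, a light-like separation is mapped to a light-like separation, which is the sense in which the speed of light remains invariant while the strict invariance of $ds^2$ is relaxed.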
Moreover, different DSR proposals in momentum space can be considered either as different definitions of the conserved momentum $p^\mu$ in terms of the de Sitter momentum-space variables $\eta^A$ [11], or as different definitions of $p^\mu$ in terms of the special relativity velocities $v^\mu = dx^\mu/d\tau$ [13]. Thus the known DSR proposals state, in fact, that experimentally measurable coordinates can be different from the ones specified as "measurable" by special relativity theory. For the case under consideration, the transformation (3) acts as an ordinary Lorentz transformation on the variables (9). Geodesic motion of a particle in the space (8) looks like free motion in the coordinates (9), $\ddot X = 0$, see the next Section. So, Eq. (9) represents the coordinates of a locally inertial frame. 3 Particle dynamics and kinematics on the DSR space. The invariant interval (8) suggests the corresponding action for the particle motion. It is invariant under the global symmetry (3) and under the "translations". To discuss the kinematics corresponding to the theory, it is convenient to use the Hamiltonian formulation of the system. One finds the canonical momenta for the variables x, e, in particular $p^\mu$ of Eq. (11), and the Hamiltonian. Here and below expressions of the type $p^2$ mean contraction with respect to the Minkowski metric $\eta_{\mu\nu}$. The transformation law for $p^\mu$ follows from (3) and is given by Eq. (13). At the next step of the Dirac procedure, from the condition of preservation in time of the primary constraint, $\dot p_e = 0$, one finds the secondary constraint (14), which represents a deformed dispersion relation for the canonical momenta. There are no tertiary constraints in the problem. The dynamics of the variables (x, p) is then governed by the equations (15). The equations acquire their simplest form in the gauge $e = \Omega^{-2}$ for the primary constraint $p_e = 0$ (this gauge coincides with the standard one in the limit $\lambda \rightarrow 0$), giving the system (16). The canonical momentum now has the standard expression in terms of the velocity, $p^\mu = \dot x^\mu$. As a consequence, its transformation law coincides with the one for $\dot x^\mu$, see Eq. (5). The system (16) implies the corresponding Lagrangian equations for $x^\mu$. The deformed gauge is convenient for studying the dynamics in a particular reference frame, but implies a complicated law for transformation to other frames. Actually, to preserve the gauge, Eq. (3) must be accompanied by a reparametrization of the evolution parameter, $\tau'(\tau)$, where $\tau'$ is a solution of the equation $\partial\tau'/\partial\tau = G^{-2}(\Lambda)$. In contrast, the gauge $e = 1$ retains the initial transformation law (3) and seems to be the reasonable one for discussing the kinematics of the theory. Kinematical rules must be formulated for the conserved energy and momentum. One notes that the canonical momentum (11) is not a conserved quantity, in accordance with Eq. (15). The discussion at the end of Section 2 suggests that the conserved momentum may be $P^\mu = e^{-1}\dot X^\mu(x)$. Its expression in terms of the canonical momentum (in any gauge) is given by Eq. (18). By direct computation one finds that $P^\mu$ is actually conserved on-shell (15) and obeys the ordinary energy-momentum relation as a consequence of Eq. (14). The deformed transformations (3), (13) induce the standard realization of the Lorentz group on $P^\mu$-space. As a consequence, the composition rule for the momenta is the standard one, and there is no problem of total momentum in the theory. So, the present version of the DSR theory leads to the standard kinematical rules on the space (18). The energy and momentum have a nonstandard relation, Eqs. (18), (11), with the measurable quantities (velocities and coordinates).
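As a point of reference for the constraint analysis just described, the following LaTeX fragment spells out the generic einbein-type computation for a particle in a background metric g_{\mu\nu}(x). It is a standard textbook form shown only for orientation, under the assumption that the paper's action has this general structure with its own deformed metric; it is not copied from the paper, whose explicit expressions are those referred to as Eqs. (11)-(18) above.

% Generic einbein action for a particle in a background metric g_{\mu\nu}(x)
% (standard form, shown for orientation only):
S = \frac{1}{2}\int d\tau\,\Big( e^{-1} g_{\mu\nu}(x)\,\dot x^{\mu}\dot x^{\nu} - e\, m^{2}c^{2} \Big),
\qquad
p_{\mu} = e^{-1} g_{\mu\nu}\dot x^{\nu}, \qquad p_{e} \approx 0 \ (\text{primary constraint}).
% Canonical Hamiltonian and the secondary (mass-shell) constraint obtained
% from \dot p_{e} = \{p_{e}, H\} = 0:
H = \frac{e}{2}\big( g^{\mu\nu} p_{\mu} p_{\nu} + m^{2}c^{2} \big),
\qquad
g^{\mu\nu}(x)\, p_{\mu} p_{\nu} + m^{2}c^{2} \approx 0 .

In the flat limit g_{\mu\nu} \to \eta_{\mu\nu} the secondary constraint reduces to the ordinary relation p^2 + m^2 c^2 = 0, which is the sense in which the deformed dispersion relation of the text generalizes the standard one.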
This nonstandard relation between the conserved quantities and the measurable ones suggests that the kinematical predictions of the theory differ from those of special relativity. The difference between the canonical momentum and the conserved one implies an interesting situation in the canonically quantized version of the theory. While the conjugate variables (x, p) have the standard brackets, the commutators of the coordinates $x^\mu$ with the energy and momentum $P^\mu$ are deformed, as can be seen from Eq. (18). Thus the phase space (x, P) is endowed with a noncommutative geometry (with the commutators [x, P] and [P, P] being deformed). In particular, the energy-momentum subspace turns out to be noncommutative. The modified bracket [x, P] suggests that the Planck constant acquires a slight dependence on x (a similar bracket structure, with an energy-dependent Planck constant, arises in the MS model [3]). Conclusion In this work we have proposed a version of doubly special relativity theory in position space based on a deformation of the ordinary Lorentz transformations by means of the special conformal transformation. There is a unique deformation (3) which does not modify the space rotations, namely the deformation with the special conformal parameter $b^\mu = (\lambda, 0, 0, 0)$. The invariant interval (8) corresponds to a flat four-dimensional space-time (on a domain where the metric is non-degenerate). By construction, the maximum signal velocity is an observer-independent scale of the theory. The formulation admits one more independent scale, $R \equiv -1/\lambda$, identified with the radius of the three-dimensional hypersection of (8) at $x^0 = 0$. Geodesic motion of a particle on the space (8) has been discussed in some detail. The conjugate momentum $p^\mu$ (11) for the coordinate $x^\mu$ has a complicated transformation law (13) and obeys the deformed energy-momentum relation (14). The conserved energy-momentum $P^\mu$ (18) turns out to be different from the canonical one. The transformations (3), (5) for (x, p) induce the standard Lorentz transformations on the space of conserved momentum. This means, in particular, that the composition rule for the energy-momentum is the ordinary sum, and there is no problem of total momentum in the theory. The conserved momentum, in contrast to the canonical one, obeys the standard energy-momentum relation. The kinematical rules of the theory are summarized in Eq. (19). One expects that the kinematical predictions of the theory differ from those of special relativity due to the nonstandard dependence of the energy and momentum on measurable quantities, see Eqs. (18), (11).
3,795.4
2004-09-23T00:00:00.000
[ "Physics" ]
An alternative method for the measurement of mechanical properties at intermediate strain rates: a numerical study A feasibility study of an innovative apparatus for the dynamic characterization of materials at intermediate strain rates is presented. The working principle is based on the Split Hopkinson bar, but the wave propagation occurs through properly sized springs. The system is designed to generate and transmit tension or compression waves having a low propagation speed, in order to reduce the specimen strain rate at impact. First, a simplified theory is presented for the estimation of the longitudinal wave speed in springs as a function of the main engineering parameters of the coil dimensioning. Then, a preliminary sizing of the apparatus is proposed based on basic considerations of wave propagation theory. Finally, a numerical model of a compression test is presented as a proof of concept. Introduction Nowadays, one of the most widely used methods for the characterization of the dynamic behaviour of materials is based on the split Hopkinson pressure bar (SHPB) [1,2]. In this machine, a stress wave propagates along slender bars and dynamically deforms a specimen. By measuring the strain signals in the bars, it is possible to determine indirectly the dynamic properties of the tested material [3,4]. The control of the test strain rate is primarily achieved by adjusting the bar particle velocity on the input side and by the specimen length. Typically, Hopkinson bars are made of metallic materials, such as steel, aluminium and titanium, but polymeric bars are also used [5,6]. Since these materials are very rigid, the pressure wave speed is high and, consequently, the particle velocity is high as well. Moreover, the available stroke during dynamic testing depends on both the length of the preloaded bar and the intensity of the preload. This means that long preloaded bars are required for performing tests at moderate strain rates without increasing the transmitted load. To cover the range between the quasi-static and high strain rate regimes, several measurement apparatuses have been developed, but none appears to be dominant [7]. These are driven by three main methods: servo-hydraulic drive, linear acceleration of masses, or angular acceleration of masses. In particular, servo-hydraulic machines [8] have an architecture similar to those used for quasi-static testing but adapt the loading system to increase the specimen strain rate. Drop towers [9], on the other hand, are machines in which the mechanical action is provided by the impact of a mass on a plate to which one end of the specimen is clamped; the mass is accelerated by gravity alone or with the assistance of a preloaded spring. Flywheel systems [10] are experimental apparatuses that exploit the circumferential velocity of a rotating disk, combined with the high inertia of the disk itself, to exert dynamic loading on the specimen. In some cases, the above-mentioned apparatuses are modified by combining the loading system with a Hopkinson output bar to improve the force measurement quality. Furthermore, modified versions of the SHPB have also been developed to make it suitable for intermediate strain rates; however, the long test duration requires a significant output bar length, as in [11]. Most of these machines have to deal with measurement issues, such as large oscillations in the records, inertial effects, and short duration of the mechanical loading.
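By way of background for the bar-signal processing mentioned in the Introduction, the classical one-wave SHPB data reduction can be written in a few lines. The relations below are the standard textbook ones rather than anything specific to this paper, and all symbols (bar wave speed c0, bar modulus E_bar and cross-section A_bar, specimen length L_s and cross-section A_s) are generic placeholders.

import numpy as np

def shpb_one_wave(t, eps_r, eps_t, c0, E_bar, A_bar, L_s, A_s):
    """Classical one-wave SHPB reduction (standard relations, illustrative only):
    the reflected pulse eps_r gives the specimen strain rate and strain, the
    transmitted pulse eps_t gives the specimen stress."""
    strain_rate = -2.0 * c0 * eps_r / L_s                           # specimen strain rate [1/s]
    strain = np.concatenate(([0.0], np.cumsum(
        0.5 * (strain_rate[1:] + strain_rate[:-1]) * np.diff(t))))  # trapezoidal integration
    stress = E_bar * (A_bar / A_s) * eps_t                          # specimen stress [Pa]
    return strain_rate, strain, stress

The same reduction logic carries over to a spring-based apparatus, except that the much lower wave speed in the springs lowers the attainable strain rate and, as noted later, the specimen strain is preferably measured directly.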
In this paper, an innovative machine concept - the "Hopkinson Spring" - for the dynamic characterization of materials at intermediate strain rates is presented. The alternative method of dynamic driving is based on the generation and transmission of slow longitudinal waves which propagate along properly sized springs. The apparatus design is inspired by the Split Hopkinson bar system, and its performance has then been simulated in a compression test applied to a polymer. Theoretical background 2.1 Longitudinal wave propagation in helical springs A helical spring can be considered, to a first approximation, as a one-dimensional pseudo-continuous elastic system characterized by a linear relationship between force and displacement. Therefore, the macroscopic longitudinal behaviour of the spring can be described with the same theory applied to solid bars. According to Wittrick's theory [12], the propagation speed of longitudinal waves in springs can be expressed using Equation (1), c = h_0 √(k/m), where k is the spring elastic constant, h_0 is the height of the spring in its relaxed state and m is the total mass of the spring. Equation (1) can be developed to express the wave propagation speed as a function of the engineering characteristics of helical springs. Given a spring of mean coil diameter D, wire diameter d and helix angle α, the elastic constant k can be expressed by Equation (2), according to the classical linear elasticity theory of torsion of beams, whereas its mass m is calculated from Equation (3), where E and ν are the Young modulus and Poisson ratio of the spring material. Substituting (2) and (3) into (1) results in Equation (4) for the longitudinal wave speed in helical springs: c = (sin α / C) √(E / (4 ρ (1 + ν))), where ρ is the density of the spring material. An interesting point about Equation (4) is that the wave propagation speed in springs depends not only on material properties (as for bars), but also on geometric characteristics, namely the helix angle α and the spring index (C = D/d). Therefore, it is possible to adjust the propagation speed of longitudinal waves. Spring-bar analogy An equivalence between the spring and an analogous solid bar with the same diameter is here assumed to be valid. Their mechanical behaviour is considered similar too. Therefore, both the spring and the bar must be characterized by the same longitudinal wave speed c and particle velocity v. The analogous bar density ρ' can be calculated with respect to the bar volume, as in Equation (5), while Equation (6) relates the actual force F in the spring to the analogous bar stress σ' and the particle velocity. Overall apparatus dimensioning Equation (4) shows that the wave speed in helical springs is much lower than in bars, mainly due to the small helix angle used to manufacture them, but also due to the spring index. For example, in a steel spring with helix angle α = 6° and index C = 4.75 the longitudinal waves travel at 1/100th of the speed at which they would propagate in bars. Reasoning in terms of a Split Hopkinson bar-like apparatus, the individual components (Pre-stressed, Input, and Output springs) can be designed much shorter than bars while still ensuring a significant stress wave duration. According to the literature, a good loading wave duration for intermediate strain rate tests is around 10 ms. Here, assuming the design length of the Pre-stressed spring to be 400 mm with the above geometric parameters, a stress wave duration of 16 ms is derived from the well-known relation T = 2 h_0 / c. Since the strain state in a spring is not constant through the wire cross section, it is more complex to process such signals than in the Hopkinson bar.
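The dimensioning relations above can be checked numerically. The short Python sketch below evaluates Equation (4) and the duration T = 2h_0/c for a steel spring with the helix angle and spring index quoted in the text; the material constants (E, ν, ρ) are generic values for steel assumed here for illustration, not values stated by the authors.

import numpy as np

E, nu, rho = 200e9, 0.3, 7850.0          # assumed generic steel properties [Pa, -, kg/m^3]
alpha = np.radians(6.0)                  # helix angle from the text
C = 4.75                                 # spring index D/d from the text
h0 = 0.400                               # Pre-stressed spring length [m] from the text

c_bar = np.sqrt(E / rho)                                                 # wave speed in a solid bar
c_spring = (np.sin(alpha) / C) * np.sqrt(E / (4.0 * rho * (1.0 + nu)))   # Equation (4)

print(f"bar speed:    {c_bar:7.0f} m/s")
print(f"spring speed: {c_spring:7.1f} m/s (about 1/{c_bar / c_spring:.0f} of the bar speed)")
print(f"wave duration T = 2*h0/c = {2.0 * h0 / c_spring * 1e3:.1f} ms")

With these assumed properties the spring wave speed comes out near 50 m/s, roughly one hundredth of the bar speed, and the 400 mm Pre-stressed spring gives a wave duration of about 16 ms, consistent with the figures quoted in the text.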
As noted above, the strain state in a spring is not constant through the wire cross section; however, existing advanced non-contact measurement techniques make it possible to measure the specimen strain directly, without the need for indirect processing of bar signals. For these reasons, the Input spring was sized by minimizing its length while still guaranteeing transmission of the entire generated stress wave. Regarding the Output spring, based on the results of preliminary numerical simulations, its length was set slightly shorter than that necessary for the propagation of the whole stress wave generated by the Pre-stressed spring. As for the wire diameters, the choice depended on the maximum required impact velocity on the specimen. In the design of this apparatus, a maximum impact velocity of around 2 m/s and a maximum transmitted force of 20 kN were chosen. Table 1 reports the main spring characteristics. Numerical model and results The Hopkinson Spring concept has been numerically simulated to investigate aspects that the simplified one-dimensional theory cannot capture. The three steel springs were sized in accordance with the geometric parameters of Table 1. One half inactive coil was added to each end of the springs to provide a flat surface for better load transfer between the Pre-stressed and Input springs and at the specimen interfaces. Moreover, the ends of the Input and Output springs pointing towards the specimen have been capped with a 5 mm thick plate to allow spring-specimen contact. A cylindrical specimen of diameter 7 mm and length 8 mm was chosen to simulate a compression test. The specimen material is an ABS structural polymer which has been dynamically characterized by a Johnson-Cook model [13]. Since the specimen strain measurement is assumed to be performed by non-contact techniques, the average nodal displacements at its ends are used for the strain and strain rate calculations. On the other hand, the force measurement is obtained indirectly by interposing a load cell (a bar of diameter 18 mm and length 220 mm) between the specimen and the Output spring. In this case it is assumed that the specimen and load cell are in dynamic equilibrium for the whole test duration, since the wave propagation speed in the load cell is much higher than in the springs and the specimen is deformed slowly. A numerical simulation has been performed on an FE model with an explicit solver, in which a tension preload force of 10 kN is applied to the Pre-stressed spring and then instantaneously released. The simulation results have been stored every 0.01 ms, for a total of 35 ms of test history. The time histories of the forces in the springs, specimen, and load cell have been measured through cutting planes perpendicular to the longitudinal axis. The details of the numerical model are shown in Figure 1. The results of the numerical model have been analysed to verify the proposed theoretical approach. Figure 2a shows the force curves in the springs at the cutting plane positions of Figure 1. It can be noted that, despite the instantaneous preload release, the rise of the mechanical waves is less pronounced than in traditional SHPB systems. This can be attributed to the size of the springs, which cannot be approximated as one-dimensional bodies. However, the proposed dimensioning formulas can roughly predict the constant part of the wave signals, such as the stress wave period in Table 1 or the axial velocity in the Input spring.
In the example of Figure 2b, the velocity has been calculated through Equation (6) with the average force shown in Figure 2a. Finally, regarding the measurement of the specimen mechanical properties, it can be affirmed that the force measured on the load cell corresponds precisely to the specimen resistance (Figure 3a), while the strain calculation confirms the feasibility of the test at intermediate strain rates (Figure 3b). Conclusions In this paper, the feasibility of an innovative apparatus for performing mechanical tests on materials at intermediate strain rates has been presented. The proposed solution differs from the state of the art both in the dynamic actuation and in the specimen strength measurement aspects. The working principle is based on the Split Hopkinson bar, but exploits the propagation of slow mechanical waves in springs. A simple theoretical description allows the Hopkinson Spring to be designed by varying the springs' geometric and material characteristics. The performance of the apparatus has been numerically verified.
2,580.2
2021-01-01T00:00:00.000
[ "Engineering" ]
Key Determinants of Indonesia's Banks Financial Performance Depositors, investors, as well as the public in general need easily accessible indicators that are important for differentiating various banks. This research addresses two important issues simultaneously: analyzing and identifying which key publicly available financial indicators of banks are important, as well as approximating the weight of the aforementioned indicators when bank comparisons are to be made. Utilizing the recent 2017 database from 90 conventional banks, this study analyzes 17 banking ratios using the method of principal component analysis. The calculations show that five components explain around 75 percent of the total variation in the data. Those five components represent indicators of profitability, quality of capital, quality of loans, fee-based activities, and liquid assets in the balance sheets. Further, by combining the five principal components, the results show that even small banks can achieve good financial performance. INTRODUCTION The banking industry in Indonesia has undergone massive changes in the past two decades. Since the Asian financial crisis of 1998, which brought the Indonesian economy to its knees with an economic contraction of 13.1 percent and inflation of 58.4 percent in 1998 [31], the landscape of the banking industry has changed. Whereas in June 1997 (prior to the crisis) there were 238 banks [25], by the end of 2000 only 151 commercial banks remained [20]. By the end of 2016, the number had been further reduced to only 116 conventional commercial banks operating in Indonesia [13], with the numbers likely to be further reduced in the future. Despite many positive developments occurring since 1999, crises (small or otherwise) have occasionally occurred. Take the example of the controversial and sudden 2008 closure of Bank Century [29], which caught the general public off guard and created protests. Most bank closures are controversial because banks' existence and activities have not only economic but political ramifications as well [29]. Thus, the Indonesian government, including regulators (Bank Indonesia, OJK, and the Indonesia Deposit Insurance - LPS), as well as the public, have profound interests in minimizing the emergence of such banking crises. In emerging markets, trouble in a single bank can quickly turn into a systemic crisis [14]. Theoretically, policy makers attribute a banking crisis to two sets of factors: the macroeconomic environment and bank-specific financial ratios [17]. For developed countries, capital adequacy, asset quality, management, earnings, and liquidity (CAMEL) indicators are often used to represent financial ratios. While the usage of CAMEL indicators for bank evaluation seems ubiquitous in developed economies, the application of CAMEL in emerging markets yields differing results. [6] argues that the CAMEL approach worked for Indonesia's banks prior to the 1998 crisis, albeit with a different weighting from the regulatory weights. However, [17] finds that there are many reasons that CAMEL indicators do not work for banks in emerging markets. Indeed, [17] shows that profitability measures such as the interest-rate spread are important indicators for predicting banking strength. In contrast, the capital adequacy ratio is important for banks in developed economies [17].
For the public, the major drawback of CAMEL (or any model of banking supervision) as a model for evaluating banks is its lack of transparency. In general, regulators have much more inherent knowledge of the various aspects that determine the financial performance of banks. However, much of this knowledge is not known to the public (the banks' most important stakeholder). The general public can only resort to information available in the media as well as information available from the official websites of banks (which are required by regulators to publish their quarterly financial statements). The general public's interest in protecting its deposits in banks can be seen as a form of market discipline [4]. Indeed, market discipline is one of the important pillars of banking supervision as espoused by the Bank for International Settlements (BIS) [3], exhibiting positive effects although the impact may not be optimal [16]. Despite the positive effect of market discipline and the existence of such publicly available financial statements, it is often unclear to the general public which indicators are more important in differentiating two banks (or among various banks, in general). International experience regarding important indicators differs among countries, and hence cannot be used as guidance. Profit performance in the Swiss banking sector was shown to be related to (among others) better capitalization, faster loan growth, and a higher interest-to-income ratio [6]. Another study regarding banks in China found that economic and political factors played a more important role than banks' characteristics in differentiating banks' performance [28]. A comparative study found that liquidity and the size of banks do not have an influence on banks in China and Malaysia, while operating expenses (defined as non-interest expenses/average assets) are a key factor in banks' profitability [18]. In Indonesia, a recent study showed that the Non-Performing Loan ratio (NPL), the Loan to Deposit Ratio (LDR), the size of the bank, the Cost Efficiency Ratio (CER), and the Capital Adequacy Ratio (CAR) are important variables that determine the efficiency of banks in Indonesia [30]. Another study, utilizing data from the post-crisis era, showed that the three factors deemed important in determining the profitability of banks are: operating expenses/operating income, equity/assets, and credit/total assets [19]. Finally, a study by [5] showed that the capital adequacy ratio (CAR) and the size of a bank exert a positive effect on the return on assets (ROA), while the ratio of operating income and operating costs exerts a negative influence on the return on assets (ROA). Research Objective and Contributions The papers cited in the preceding section show that different indicators appear in different studies. These raise further important, as yet unanswered, questions. For example: is loan quality more important than profitability, or is the opposite true? Further, supposing profitability is more important than loan quality, what is the relative weight of importance between the two measures?
There are several major contributions of this paper. This paper addresses the gap in research by sequentially answering two important issues: analyzing and identifying which key publicly available financial indicators of banks are important, as well as determining the weight of the aforementioned indicators when bank comparisons are to be made. This paper also contributes to the empirical side in two further aspects. First, the paper uses the 2017 data from almost all conventional banks operating in Indonesia. Second, as will be shown in the next section, this paper is one of the pioneers in using principal component analysis to analyze the banking industry in Indonesia. RESEARCH METHOD Principal Component Analysis (PCA, henceforth) is chosen as the method in this research. PCA is used because the method is particularly suited to answering several issues that have been outlined in the introductory part. First, given that there are many variables that can potentially be considered by the public to compare banks' performance, PCA can reduce the large number of original variables down to a few components (each of which is a linear combination of the original variables). The remaining components contribute little to the data variation, can thus be considered as noise, and hence can be excluded from further analysis. Second, as a result of the construction of the so-called components, one can also obtain the weights of the original variables. Hence the relative importance of the original variables in explaining data variation can be obtained as well. There are different accounts regarding how PCA was first formulated; however, the modern formulation and the name itself were first used by Hotelling in 1933 [1]. There are many applications of PCA across scientific fields, in psychology, genomics, food science, and environmental science to name a few. In the field of economics, [27] used PCA to identify important variables to be fed as input to Data Envelopment Analysis (DEA) in order to identify efficient decision-making units in businesses. [15] also used PCA in a setting where purchasing managers must evaluate and retain suppliers that meet several performance criteria in the bottling machinery industry. A study by [7] showed that principal components can be used in a formula to measure the level of e-government implementation. In the banking sector, [26] used PCA to identify important variables (and thus exclude unimportant ones) in order to avoid bankruptcy and credit scoring problems. [18] used PCA to reduce variables in order to rank banks in Serbia. [24] used PCA to classify banks into groups with different operational strategies. [33] utilized PCA to identify healthy and risky companies in order to help the banking sector reach small-scale and medium-scale enterprises across Asia. In Indonesia, the usage of PCA in the banking sector is rather limited. A study by [2] is the only research that we are aware of thus far; it conducted a survey incorporating slightly over 1,000 respondents in Bengkulu province, yet failed to find relationships between the demographics of the respondents and the benefits they obtained from banks. Thus, this research will also fill a gap in the literature regarding the application of PCA in the banking sector in Indonesia.
Mathematically, PCA seeks to transform the original data onto a new set of orthogonal axes [9]. To start, let X be an m-observations by n-variables matrix, where each column has zero mean. Also, let S be the covariance matrix of X, where S has the property of being symmetric. Since S is a real and symmetric matrix, the Spectral Theorem of linear algebra can be applied [10]. According to the theorem, if S is a real symmetric matrix, then there is an orthogonal matrix V that diagonalizes S. That is, V^T S V = W, where W is diagonal. Following [10], since V diagonalizes S, the diagonal elements of W are eigenvalues, while the columns of V are eigenvectors of S. Both V and W are matrices consisting of real numbers and, more importantly, the columns of V (the eigenvectors) are orthogonal and are known as the principal components of the matrix S. Without affecting previous results, one can arrange the obtained eigenvalues in decreasing order such that λ_1 > λ_2 > ... > λ_n. In the case of the covariance matrix, the sum of the elements on the diagonal of matrix W (the trace of W) is also the total variance of the original data. The fact that the trace of W represents the total variance of the original data yields two important facts that can be used later on. First, if one wishes simply to transform the original data to a new set of axes, then one can retain all columns of V and W, albeit in the new axes. Conversely, one can discard certain small variances to focus on the prominent features of the data. In the latter case, only certain columns of V and W will be retained according to certain (arguably subjective) criteria. These criteria will be explained later in the data analysis section. Second, retaining a limited number of eigenvalues and eigenvectors leads to the fact that not all of the original variance will be replicated. In a sense, this is often desirable since some of the discarded variance is perhaps "noise" that does not contribute to (and may even distort) the identification of major features. This is especially true in machine learning, where PCA is considered a major tool [23]. While the eigenvalues represent the variance of the original data, the eigenvectors themselves have an important interpretation as well. For example, the first column of the matrix V shows the contribution that all n original variables make to the first principal component (i.e. the first eigenvector). The same principle applies to the other columns of V. In PCA, each eigenvector column is normalized (its squared loadings sum to one), which is what allows the interpretation of the eigenvector, as well as the interpretation of the weights of the original variables. For example, assume the first eigenvector (which explains the most variance of the original data) has a high coefficient on the return-on-equity variable and a lower coefficient on the net interest margin. This implies that the aforementioned variables exert a dominant influence on the data, i.e. carry large weights. Thus, the first eigenvector is a principal component reflecting profitability (i.e. profitability is the main feature of the data). Data and Descriptive Statistics The data used in this analysis are from the 2017 audited financial reports publicly published on the banks' websites. The basic set contains balance sheets, income statements, and statements of contingencies from 90 conventional banks (excluding Syariah banks as well as Bank Perkreditan Rakyat (BPR), rural lending institutions).
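The spectral-decomposition procedure described above is easy to state in code. The following Python sketch is a generic illustration rather than the authors' implementation: it standardizes the ratio matrix, eigendecomposes its covariance matrix, orders the eigenvalues, and returns the explained-variance shares together with the component scores of each bank.

import numpy as np

def pca_via_covariance(X):
    """PCA by eigendecomposition of the covariance of standardized data.
    X: (m observations) x (n variables) array. Returns eigenvalues in
    decreasing order, eigenvectors (columns are principal components),
    explained-variance shares, and the scores of each observation."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # zero mean, unit variance per column
    S = np.cov(Z, rowvar=False)                        # real symmetric covariance matrix
    eigvals, eigvecs = np.linalg.eigh(S)               # Spectral Theorem: real spectrum
    order = np.argsort(eigvals)[::-1]                  # sort by decreasing variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    explained = eigvals / eigvals.sum()                # share of total variance per component
    scores = Z @ eigvecs                               # observation scores on each component
    return eigvals, eigvecs, explained, scores

The columns of eigvecs hold the loadings discussed below, and multiplying a bank's standardized ratios by a loading column gives that bank's score on the corresponding principal component.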
Table 1 shows a brief summary of the asset data for the banks included in this study. The data are categorized by BUKU classification. There are only 5 banks in BUKU 4, and they contributed 56.1 percent of the total assets of the 90 banks in the data set. Banks in BUKU 3 and 4 (a combined total of 26 banks) contributed 87.0 percent of total assets. In terms of profitability, banks in BUKU 3 and 4 contributed 92.1 percent of total banking profit for the data in this sample. Given the implied wide range in size among banks, as seen in Table 1, comparisons using nominal amounts must be minimized (if not eliminated altogether). Hence, to achieve a fair comparison among banks, it is important that the data be converted to financial ratios prior to analysis. This paper considers profitability indicators, efficiency indicators, credit risk and market risk indicators, lending activity and liquidity indicators, capital indicators, and fee-based activity indicators. There are seventeen variables considered in this paper. The definitions are given in Appendix Table 1. Each of the 16 variables considered in this paper was also tested for normality using the Shapiro-Wilk test, which provides a superior omnibus indicator of non-normality [32]. The results given in Table 2 show that most of the variables are not normally distributed, with the exception of the NIM variable. Table 2 also indicates the existence of outliers in the data. In this case, an outlier is defined as an observation that lies more than 1.5 times the interquartile range below the first quartile or above the third quartile. In the case of LDR and CAR, there can be as many as nine outliers among the 90 observations for each variable (ten percent of the data). Despite the non-normality of the data, the statistical method used in this paper (PCA) does not assume normality [22]. Hence, PCA remains a valid method for analyzing the data. Table 3 provides comparisons among the 16 financial ratios (the variables) considered for this study. In Table 3, for example, low-capital BUKU-1 and BUKU-2 banks have high capital adequacy ratios (CAR) of 28.84 percent and 25.49 percent, respectively. In contrast, the same groups of banks have relatively low returns on assets (ROA), at 1.32 and 1.17 percent, respectively. In general, once the nominal effects are eliminated, higher capital and larger asset size (as represented by the BUKU classification) do not necessarily lead to superior financial performance. Large banks (BUKU 3 and 4) dominate in terms of the availability of low-cost funding (the CASA_DPK ratio), efficiency (low BOPO), profitability (high ROA), and fee-based revenue. In contrast, small banks seem to do well with respect to providing a large cushion against shocks (high CAR), profitability (high NIM), and the availability of liquid assets (ALIQ_ASET). Table 3 shows that reliance on popular indicators to provide univariate measures of superiority simply does not work. Hence, Table 3 further emphasizes the need to find variables that are able to differentiate performance among various banks.
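The normality and outlier screening described above can be reproduced per ratio with a few lines of Python. This is an illustrative sketch only: the function name is hypothetical, the input is assumed to be a one-dimensional array of one ratio across the 90 banks, and scipy's shapiro routine implements the Shapiro-Wilk test referred to in the text.

import numpy as np
from scipy import stats

def screen_ratio(x, alpha=0.05):
    """Shapiro-Wilk normality test and 1.5*IQR outlier count for one financial ratio."""
    w_stat, p_value = stats.shapiro(x)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    n_outliers = int(np.sum((x < q1 - 1.5 * iqr) | (x > q3 + 1.5 * iqr)))
    return {"W": w_stat, "p": p_value, "normal": p_value > alpha, "outliers": n_outliers}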
To sharpen the result, a more refined set of variables is needed. Correlation analysis conducted on all 16 initial variables shows several highly correlated variables. Table 4 shows variables whose correlation is above 0.8. Two direct measures of fee-based activities are highly correlated; thus, this paper excludes FEEBASE_OHEAD from the PCA. Three direct measures of profitability (PROFIT_AKPROD, ROA, ROE) are highly correlated. Since many analysts use ROA as a measure of profitability, this paper excludes PROFIT_AKPROD and ROE from the PCA. The ratio of operating cost to operating revenue (BOPO), often used as an indicator of efficiency, is also highly correlated with other measures of profitability; this simply indicates that profitability and efficiency go hand in hand. The final data set, after dropping the four measures described above, includes 12 variables. Results and Analysis The original data were standardized to avoid misallocating the relative weights of the original variables due to differences in measurement units. Standardization results in a mean of zero and a standard deviation of one for each variable. This procedure is standard practice in the PCA literature [21]. Given the standardized data, a covariance matrix is created and then undergoes spectral decomposition. Implementing PCA on the data set yields 12 distinct eigenvalues corresponding to 12 eigenvectors. Eigenvectors represent directions of data variance, while eigenvalues represent the amount of data variance in a given direction. As PCA seeks to explain most of the variance in the data, the larger eigenvalues with their corresponding eigenvectors are retained. A pair of eigenvalue and eigenvector is defined as a component. Results of the eigenvalue calculations are shown in Table 5. The first principal component has an eigenvalue of 3.05, which explains approximately 25 percent of the total variance of the data. The second principal component has an eigenvalue of 2.21 and explains 18 percent of the total variance. Together, the first two principal components explain 43 percent of the total variance in the data. A common rule is to keep components with an eigenvalue of 1 or greater [21]. Only four eigenvalues in Table 5 are larger than one; thus, four principal components would be retained for further analysis, explaining 68 percent of total variation in the data. Another common rule is that researchers select the number of components needed to reach a certain threshold of cumulative "explained variance" [21]. Based on this rule, adding another principal component (PC number 5) boosts the total variation explained to 76 percent. The five aforementioned components account for 25.4%, 18.5%, 12.6%, 11.6%, and 7.7% of total variance in the data. Since the fifth eigenvalue is quite close to one, it is included in the subsequent analysis. The result with five principal components is presented in Table 6. As discussed previously, a principal component is a linear combination of the 12 original variables. A dominant variable will contribute a large loading to a given PC. Loadings are the coefficients of the original variables and measure the importance of an original variable to a PC. Therefore, a higher positive loading on a variable implies a larger positive influence of that variable on a principal component, and vice versa.
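The variable screening and component-retention rules just described can likewise be written compactly. The sketch below is again generic: it flags pairs of ratios whose absolute correlation exceeds 0.8, and reports the number of components kept under the Kaiser eigenvalue-greater-than-one rule and under a cumulative explained-variance threshold (set here to 75 percent as an assumed illustration of the second rule).

import numpy as np

def correlated_pairs(Z, names, threshold=0.8):
    """Pairs of variables whose absolute Pearson correlation exceeds the threshold."""
    R = np.corrcoef(Z, rowvar=False)
    return [(names[i], names[j], round(float(R[i, j]), 2))
            for i in range(len(names))
            for j in range(i + 1, len(names))
            if abs(R[i, j]) > threshold]

def n_components(eigvals, kaiser=1.0, target_share=0.75):
    """Components kept under the Kaiser rule and under a cumulative-variance rule.
    eigvals must be sorted in decreasing order."""
    by_kaiser = int(np.sum(eigvals >= kaiser))
    cumulative = np.cumsum(eigvals) / eigvals.sum()
    by_share = int(np.searchsorted(cumulative, target_share) + 1)
    return by_kaiser, by_share

Applied to eigenvalues with the variance shares reported in Table 5, the two rules give four and five components respectively, matching the discussion in the text.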
The first PC, which accounts for 25 percent of total variance, is dominated by a few variables with loadings larger than 0.6. The largest loading (0.834) is on INTREV_INTCOST. The other significant loadings in the first PC are CASA_DPK, ROA, and NIM. These variables are related to profitability and the ability to contain costs. One key management aspect of interest cost is represented by CASA_DPK. A larger share of current accounts and saving accounts (CASA) relative to third-party funds (Dana Pihak Ketiga, DPK) results in a lower interest cost/funding cost for banks. The positive effect of a lower cost of funds on profitability is represented by the positive loading (0.604) on the CASA_DPK variable. The next step for a bank, given its interest cost, is to achieve a higher gross margin through higher interest revenue. This is represented by the INTREV_INTCOST variable, which enters the first PC with a positive loading of 0.834. Finally, net interest margin (NIM) and return on assets (ROA) also enter the profitability picture, with positive loadings of 0.713 and 0.661, respectively. While a loading coefficient explains the contribution of a single variable to a particular principal component, a score measures the combined effect of all variables on a particular component. For example, to obtain the PC-1 score for Bank BRI one multiplies the loadings in the first column of Table 6 by the (standardized) data for BRI. A high score for BRI (mostly because its profitability indicators have high values) implies that the bank scores highly on PC-1. Table 7 shows the summary of key variables for groups of banks sorted by their scores on the first PC (PC-1). In general, the top ten banks in PC-1 have an average score of 1.64, compared to an average of -1.71 for the ten lowest banks. Thus low values of CASA_DPK, INTREV_INTCOST, ROA, and NIM all contribute negatively to profitability and to PC-1. The ten banks with high scores in PC-1 have an average CASA_DPK ratio of 62.5 percent (thus only 37.5 percent of funds come from costly time deposits), which leads to a low cost of funds. In contrast, the average CASA_DPK for the ten banks with the lowest PC-1 scores is 18.8 percent (thus 81.2 percent from costly time deposits). Such a low CASA_DPK ratio implies that the bank relies on costly time deposits as its funding base. Banks with high PC-1 scores also have higher values on the profitability indicators (INTREV_INTCOST, ROA, NIM). The top ten banks ranked by PC-1 scores are: BPD Kalimantan Tengah, Amar, ANZ Indonesia, Bank Central Asia, BPD Nusa Tenggara Timur, BPD Sulawesi Tenggara, BPD Yogyakarta, BPD Kalimantan Barat, BPD Maluku, and Bank Rakyat Indonesia. Unsurprisingly, six of the banks with the largest PC-1 scores are regional government development banks (Bank Pembangunan Daerah, BPD). These banks have low funding costs since regional government budgets for day-to-day operations are placed with them. Bank Central Asia and Bank Rakyat Indonesia, two BUKU-4 banks, achieve low funding costs through their networks of ATMs and mobile banking, which position these banks as leaders in transactional banking. These banks are then able to convert the low funding cost into higher profits. The second PC (18 percent of total variance) is dominated by CAR, LIAB_EQ and LDR. The CAR and LIAB_EQ variables are balance sheet items that relate to the equity of banks.
The LIAB_EQ variable represents the raw capital a bank has, as well as the third-party funds the bank owes to the public. A larger LIAB_EQ leads to a higher contribution to PC-2 (loading of 0.680). Against the tendency for banks to maximize their liabilities, CAR (a more refined measure of capital) enters with a negative loading (-0.860). Hence a bank with a higher CAR will be penalized in the second PC. Table 3 indicates that small banks (BUKU-1 and BUKU-2) are banks with high CAR (28.84 percent and 25.49 percent, respectively), compared to around 20 percent for banks in BUKU-3 and BUKU-4. On the other hand, only banks in BUKU-3 have a relatively high LIAB_EQ. Table 8 shows the summary of key variables for groups of banks sorted by their scores on the second PC (PC-2). The top ten banks in PC-2 have an average score of 0.98, compared to an average of -2.02 for the ten lowest banks. Table 8 shows that relatively low values of CAR (and LDR) in the 2017 data contribute to a higher score in PC-2. Banks with the highest scores in PC-2 are also banks with a high liability-to-equity ratio: the top ten banks have an average liability-to-equity ratio (LIAB_EQ) of 8.08 times, compared to a 2.25 multiple for the bottom ten banks. Note that banks with the highest PC-2 scores have an average CAR of 15.98 percent, which is still above the regulatory requirement. In contrast, banks with the lowest PC-2 scores have an average CAR of 51.82 percent, indicating high capital but also a failure to convert that equity into good lending opportunities. Table 9 reports the summary for PC-3, PC-4, and PC-5. The third PC (12.6 percent of variance) is clearly dominated by non-performing loans (NPL) and bad loans plus restructured loans (KKR); hence PC-3 can be interpreted as representing the quality of a bank's credit. In the third PC, non-performing loans (NPL) and low-quality loans (KKR, including restructured credits in the bank's balance sheet) have large loadings of 0.683 and 0.617, respectively. Interpretation of the third PC differs from that of the first and second PCs. Whereas higher profitability (PC-1) and better capital utilization (PC-2) correlate with positive results for banks in the first and second PCs, the opposite is true for PC-3: high scores imply negative results. High scores mean high proportions of bad loans, as well as high proportions of non-performing and restructured loans (high KKR), in the banks' balance sheets. The fourth PC (which explains 11.55 percent of variance) represents fee-based activities. Note that the interpretation of PC-4 is similar to that of PC-3, since the FEEBASE_PROFIT ratio enters PC-4 with a negative loading (-0.735). Hence banks with high PC-4 scores are actually banks that have low revenue from fee-based activities. Admittedly, not many banks can engage in fee-based activities (which include activities such as trade financing and credit card transactions). Table 2 shows that BUKU-3 and BUKU-4 banks can have fee-based revenue equivalent to 20 percent of profit. BUKU-1 and BUKU-2 banks, given their small capital and limited allowed activities, have only small revenue from fee-based activities. Hence this PC is skewed against small banks. In summary, the fourth PC affects small banks negatively.
Finally, the fifth PC (explaining 7.67 percent of total variance) represents how many liquid assets (such as bonds and other fixed-income assets) a bank has on its books. There is a tendency for banks in Indonesia, especially those with low funding costs, to seek placements in safer investments (government bonds) rather than conducting riskier lending activities. In summary, the fifth PC affects banks with liquid assets positively. One notable addition to the analysis concerns the credit provision (NPL_CKPN) variable. Table 3 clearly shows that small banks tend to set aside smaller provisions, and hence these banks are at risk should their loans turn sour. However, the NPL_CKPN variable imparts only a small loading in all of the previous five principal components. Hence, while NPL_CKPN may be important on its own, the PCA result merely shows that the NPL_CKPN ratio is not an important variable in explaining variation among banks. CONCLUSION Using Principal Component Analysis, this study has identified five important components that differentiate banks' performance in Indonesia. Those components (in order of importance) correspond to measures of profitability, equity and its quality, quality of loans, revenue from fee-based activities, and availability of liquid assets on the bank's books. Profitability measures are in the first PC, which explains 25.4 percent of total variance. Loan quality indicators, on the other hand, are in the third PC and explain only 12.6 percent of total variance. The order of the PCs (as well as the value of the loadings in each PC) can be used as a rough measure of the relative importance of the various indicators. Suggestions The results of this study are important to the public. Regulators have always had the upper hand vis-a-vis the public in terms of having up-to-date and thorough information about conditions in any bank. The public (especially fund owners) is always wary of bank closures and their potentially negative impact on the public's wealth. This study provides a clear and limited set of variables that the public and fund owners need to watch carefully. For example, this study provides a strong recommendation toward choosing banks on the basis of NIM, ROA, CAR, and the NPL ratio. These variables are necessary (though not sufficient) conditions to watch for. The aforementioned variables become even more important if the public also knows how the ROA of a bank compares against the industry average (which is published regularly by the OJK). As a further result, with more in-depth analysis, this study is also useful for public or private institutions interested in publishing rankings of banks. Scores produced by PCA help identify banks by their performance on the five principal components and their loadings. Analysts may then proceed to rank the banks accordingly, with less subjectivity involved. Table 1. Summary of Assets and Profits of Banks in 2017, Categorized by BUKU (2017). Table 2. Shapiro-Wilk Test for Normality and Identification of Outliers. Table 3. Summary of Variables According to BUKU (2017 data). Table 5. Eigenvalues and Variance Explained in Indonesia's Banking Data. Table 6. Loadings in the First Five Principal Components. Table 7. Characters of Top and Lowest Banks Sorted by the First PC. Table 8. Characters of Top and Lowest Bank Sorted by Scores of the Second PC. Table 9. Characteristics of Top and Lowest Bank Sorted by Scores of PC-3, PC-4, and PC-5.
6,406.4
2019-11-04T00:00:00.000
[ "Economics", "Business" ]
Introduction to the special issue on socio-cultural role of technology in digital musical instruments ABSTRACT This special issue, arising from a symposium in Helsinki in 2019, presents contributions from a diverse group of practitioners, representing a broad range of approaches to making, thinking and writing about digital musical instruments. The authors consider the socio-cultural role of technology in current and emerging digital music practices with changing social roles, historical and critical reflections. This introduction explains the context and motivation for the issue and summarises the contribution of each of the eight articles. Together they provide what we believe is a unique contribution to research on new interfaces for musical expression and related areas. Introduction General development in technology has played a central role in the evolution of musical instruments, as instrument builders and makers adopt new equipment and methods in their work. Today, the increased use of digital technology in musical instruments, coupled with the demands of musicians to explore the potential of new instruments, has contributed to the growth of new musical practices. In addition to well-known synthesisers, digital audio workstations and digital instruments that simulate acoustic instruments, a domain of more experimental and innovative research and design practice called 'New Interfaces for Musical Expression' (NIME) has emerged in the past two decades, where current technologies in hardware and software are applied in the context of music making. NIME is an essentially investigative and explorative approach to new music making via new technologies, and it is the context in which this current issue positions itself. While today's musical practices are saturated with digital technologies that shape the way we make, distribute and listen to music, music itself has always been inherently a socio-cultural activity, supporting a wide range of social interactions (Tahiroğlu et al., 2020). Music, as a social practice, builds a space for the expression of our thoughts, feelings and ideas. These human experiences are articulated throughout our history with sound and reach far back into prehistoric times (Magnusson, 2019). We ask how music technology and new interfaces for musical expression point towards common concepts and practices of socio-cultural conditions, and we further question how we might trace their trajectories and extrapolate in the context of the new music-technological situation. In his article 'The Music-Culture as a World of Music,' Titon (2009) formulates a model that engages with the socio-cultural role of music and comprises the affective experience we have with it. In this model, the performance of music is built on 'procedures and agreed-on rules' that are guided and shaped by the more general music-culture. The music, then, is heard by the audience, a community that supports and influences the music as well as developing a collective memory and history of it. Titon's model brings attention to the underlying culturally informed factors in music, such as memories, practices and histories. In thinking about music, we note that technological and socio-cultural conditions impact upon the forms of musical creation, how music is performed, experienced, shared and distributed. Today, bar the hopefully isolated period of the Covid-19 pandemic, musicking is more of an interactive and social practice; going to a concert is as much a musical activity as performing the music (Small, 1998).
Music is something we do; it is an action in which we participate. But, like humans, technologies have a certain agency and our interactions with new artefacts embody new relations between humans and technology, resulting in new music. In the following eight articles, our relationships with music and digital musical instruments are critically discussed, and we explore how these technologies and relationships shape embodied behaviours, expectations, beliefs, interpretations, perceptions and actions. We explore the socio-cultural role of technology in digital musical practices and look at how digital technologies condition us, frame the interactions and experiences we develop and enact. The articles in this special issue seek to open up a new field of inquiry around digital musical instruments focusing on the socio-cultural and technoscientific nature of our expressive condition. In this issue, we seek to bring these conditions to a detailed focus. The structure and the content of this issue are the outcome of a symposium organised by one of the guest editors, Koray Tahiroğlu, in Helsinki in November 2019 on 'Socio-cultural role of technology in digital musical interactions'. A diverse group of expert scholars, artists, musicians, practitioners from musicology, music performance, new interfaces for musical expression, sound and music computing and postphenomenology studies participated in a two days discussion on issues and challenges in social aspects of digital musical instruments and music technology. The symposium, followed by a one-day workshop, provided an opportunity not only for facilitating the exchange of thoughts and ideas but also to question, what does our relationship with music and musical instruments look like today? We are pleased to see the broad approach of the participants in the Helsinki symposium represented in the articles in this special issue of the Journal of New Music Research. The first article is by Marc Leman who investigates the concept of co-regulating timing in music. Leman proposes a hypothesis about a music ensemble's co-regulated timing based on the notion of constancy, or balance, considering the prediction of constancy to be a key element of a shared intentionality in a music ensemble. The main contribution of this paper is a new Bayesian listener algorithm (BListener), which is a perception model of constancy based on Bayesian principles. The algorithm's behaviour (which exists as R package) is explained and tested on several datasets. It is worth considering Leman's proposal in the context of digital musical interactions. Leman suggests that BListener could function as a perception module of an artificial musician capable of interacting with human musicians. The artificial musician would 'perceive and understand' the global dynamics of co-regulated timing with humans and adapt its proper actions. In the final part of the article, Leman suggests that constancy in timing (due to co-regulation) is correlated with interaction quality, and possible empowering effects when this quality can be realised through a performance. He claims that an achieved constancy is a homeostatic state that is correlated with feelings of control (or agency) in a music ensemble. Leman's concept of co-regulated timing can be understood in terms of collaborative actions and as a condition of desirable states of embodiment that has beneficial effects on participants. 
The second article by Simon Waters builds on his previous work, which regards musical activity as taking place within a 'performance ecosystem'. Drawing on modes of thought influenced by Lucy Suchman and Philip Agre, among others, he attempts to place contemporary approaches to instrument making within the broader historical perspective of human entanglements with instruments generally. He contends that historical musical instrument development has much to teach those involved in digital music making, as such instruments embody not only acoustic behaviours, but patterns of use, thought and belief. He distinguishes this from the field of organology, however, regarding organology as too concerned with decontextualised objects, measurements, and abstract classification systems, and insufficiently embedded in the sociology of human use of instruments in musicking. Waters suggests that the digital music world's dependence on abstractions such as 'gesture' may be unhelpful in building compelling musical instruments (not least because designers tend to have an impoverished vocabulary of gesture and movement compared with those who really address the concept critically, such as dancers and choreographers). He regards the observation and analysis of situated action (Suchman, 1987) as crucial to understanding the qualities of successful instruments, and suggests that it may be as productive to study the conduct of and between musicians engaged in their practice as to study the interaction between player and instrument. Waters's approach regards instruments as necessarily assemblages rather than objects and suggests that, in historical terms, the non-standard instrument can be seen to be typical of human/instrument entanglements. This, essentially an instance of performance ecosystems viewed in close-up, can be seen to aid a critique of current concerns with the ephemerality of digital instruments: their intimate relationship with their player/inventors, and with organological concerns for 'preservation'. Next, Tarja Rautiainen-Keskustalo debates how the examination of material media theory could contribute to understanding music-making as a part of digital networks. Using a Bluetooth speaker as an example of an instrument, she aims to re-imagine the concept of musicking as a practice which manifests the sonic experiences in digital environments. Important ideas in this sense are wayfinding and navigating, introduced by the anthropologist Tim Ingold (2002). By applying these concepts, the article debates how acknowledging human material and multi-sensory situatedness in (material) networks and infrastructures brings forth political and ethical topics, such as the role of algorithms in our society and issues of planetary sustainability. In this way, the approach opens a way to examine musicking and the relationships with musical instruments in a manner that exceeds simple ideas about representations and invites us to critically consider the digital world's complex trajectories. In the following article, Koray Tahiroğlu explores how our current relationship with music as musicians, instrument builders and composers is conditioned by digital technologies. In the article, Tahiroğlu describes this as 'ever-shifting roles', a term to explore what consolidates in the social dimension of digital musical instruments. He further describes how building, making, composing and performing with digital musical instruments has gone through a gradual socio-technological change.
He presents his viewpoint on the growing use of artificial intelligence technologies in music. Tahiroğlu supports his investigation through particular musical instruments that provide advanced autonomous performance features. He argues that music-making still emerges as a social construct, a social activity, even when it results from mutual cooperation between human musicians and AI-powered autonomous instruments. One of the most challenging tasks in this context is analysing the possible ways a musical instrument incorporates musical ideas through the dimensions of its autonomous behaviour. In view of the autonomous behaviours of the AI-terity, Voyager and GuitarBot instruments, Tahiroğlu presents the notion of the autonomous entity not as something like a possession of super-powers, but as an agency whose characteristics in music-making are brought into more direct focus through human musicians. Tahiroğlu further discusses in what ways this social activity in music is in a state of transition in its relationship with technology, by questioning the technological rationality in digital musical instruments. Tahiroğlu argues in the conclusion section that it is necessary to view the social factors and cultural conditions that shape the course of technology in music-making. This gradual change in building, making, composing and performing with digital musical instruments involves social and technological transformations in view of digital music practices. In this special issue, Don Ihde looks at selected examples of player-instrument relations, beginning with a single string (identical with hunting bows, whose earliest images date back to the Ice Ages) and moving to digital synthesizers and other contemporary instruments. The article follows a musically experimental trajectory from non-mediated musical sound through many centuries of musical innovation, from the simplest forms of resonation to today's synthesised musics in electronic, digital and synthesizer forms, and questions how changes in musical technologies play roles in the social dimensions of musical instruments. Ihde further examines the earliest relationship between the listener, as the music performance audience, and music. Ihde explores technological innovations and their impact on musical instruments during the Renaissance of the fourteenth to seventeenth centuries. More specifically, the article focuses on innovations in acoustic resonance, giving examples in classical musical instruments' shapes and in the electric amplification of 'electric' instruments. The article gives an overview of technical details of how the violin's sound changes with resonance holes, as well as further discussion of the 'loudness' of rock music in relation to amplification and its radical changes to the audience-performance situation. Ihde brings another angle to the discussion on social dimensions of digital instruments in the concluding part of the article, reflecting a critical view on player-instrument and listener-music relations with electronic, digital and synthesizer instrument variants. Following that, Thor Magnusson looks into the question of how musical instruments establish themselves as part of culture. The article aims to explore the technoscientific conditions of musical instrument design as they emerge in local contexts and to present the effect on wider global musical culture when instruments migrate.
First, Magnusson examines instruments as material objects and how they are adopted and adapted to new musical cultures. Magnusson argues that those cultures also change because of new musical instruments. The article presents the intense relationship we have with music and, through this material nature of both music and musical instruments, Magnusson questions further 'how that material nature becomes parameters of culture specific context'. Applying the concept of ethno-organology, the article looks into the complexity of new musical instrument design from both cultural and technological influences. Magnusson discusses music-technical transmission using the conceptual cluster of ergodynamics, ergomimetics and ergophors, providing a systematic analysis of musical instruments. In conclusion, he suggests establishing a framework that will help us better understand the technological and cultural transmissions that take place when musical instruments travel. This will inform the development of a new method of inquiry that would allow us to look at the socio-cultural role of technology from a more critical stance, as well as provide a more effective tool for our research in NIME. In the process of preparing this special issue, we felt that the composer's voice was lacking: someone who engages with instrument building and shapes new social contexts through their compositions, instruments and systems design. We could not think of a better person here than Claudia Molitor, and we asked if we could interview her. The article is a conversation between Thor Magnusson and Claudia Molitor on certain aspects of her work, on the technical foundation and the idea of designing technology in the process of creating the social experience that is embodied by a new musical composition. Molitor strongly argues that composition is a human cultural technology and that composition cannot be thought of as something separate from technology. Later in the interview, she also points out that culture and technology are inseparable as well. It is interesting to read how she reflects on composition in her work, as a system in which the composer, the performer and the listener explore potential investigations and interpretations. She further discusses her works Remember Me, No-where Land, You Touched the Twinkle on the Helix of my Ear and The Singing Bridge, giving us further opportunity to get to know her ideas as a composer. The final article, by Taina Riikonen, looks into digital anthropology by conceptualising multi-sensory listening in large-scale city sonic environments. Riikonen presents listening as an embodied, social and transforming phenomenon through binaural recordings of Helsinki Metro tunnels. Reflecting on her own experience, Riikonen argues that listening is in fact a multi-sensory experience, not only in the ways different sensations can intertwine and complement the listening experience, but also in the importance of the 'knowing' that emerges through listening. She further argues that the listening act shifts from listening to music to listening to soundscapes, and she explores this argument in relation to a fossil-capitalist kind of evaporation process. Combined, these eight articles comprise the special issue of the Journal of New Music Research on the 'Socio-Cultural Role of Technology in Digital Musical Instruments' and provide what we hope is a strong contribution to research and studies in new interfaces for musical expression, musicology, music performance and related areas.
The impact of new music technologies on the music of the world cannot be looked at only from the perspective of technology itself: we need to explore the social context of how these technologies emerge and are taken into use by musicians. That, of course, is a circular process, as the human is always mired in technology, and the way we look at the world is always through a technological perspective. The articles in this issue come from diverse academic and artistic directions, and we hope that the multiple perspectives presented can shed new light on the social role of technology in new musical instrument development.
FBASHI: Fuzzy and Blockchain-Based Adaptive Security for Healthcare IoTs Internet of Things (IoT) is a system of interconnected devices that have the ability to monitor and transfer data to peers without human intervention. Authentication, Authorization and Audit Logs (AAA) are prime features of network security and are easily attained in legacy systems; however, they remain unachieved in IoT. IoTs require due security considerations, as conventional security mechanisms are not optimized for such devices owing to heterogeneity, resource-constrained processing and storage, and multiple other factors. Additionally, legacy systems are mostly centralized and thus introduce a single point of failure. In this research, a novel framework, FBASHI, is presented that is based on fuzzy logic and blockchain technology to achieve AAA services. The proposed system is developed using Hyperledger, a blockchain platform providing privacy and fast response capability, and is therefore well suited for healthcare IoT environments. This work proposes a behavior-driven adaptive security mechanism for healthcare IoTs and networks based on blockchain by utilizing fuzzy logic, and presents a heuristic approach towards behavior-driven adaptive security providing AAA services. FBASHI is implemented to analyze its security and practicality. Furthermore, a comparison is drawn with other blockchain-based solutions.

I. INTRODUCTION IoTs have emerged as a revolutionary technology capturing the world at a fast pace. IoT combined with AI, blockchain and 5G is taking the world into an era of contextual connectivity, enhancing personalized human experience. IoTs are the epicenter of this revolution and are threatened by diversified attack vectors. Gartner has projected IoT security expenditures to hit $3.1 billion by 2021 [1]. Being resource-constrained, IoTs rely on traditional security mechanisms like passwords that are susceptible to a variety of attack vectors. As a result, IoTs can be easily compromised due to insecure remote access [2]. Electronic healthcare refers to the monitoring, maintenance and improvement of the health of a patient by the use of digital technologies and telecommunications. The healthcare sector is rapidly adopting IoT technology and transforming hospital-centric healthcare services into home-centric healthcare services. Either way, modern healthcare services are dependent on IoTs, and trust is the foundation of security and privacy in healthcare. IoTs are the weakest link when it comes to trust, as these devices are interconnected and usually dependent on traditional security mechanisms, which usually imply a centralized architecture that is incompatible with IoTs. Blockchain is an emerging technology with many intrinsic features, including decentralized applications (Dapps), decentralized trust, transparency, immutability and provenance [3]. Due to its properties of decentralized trust and immutability, blockchain has the potential of providing foolproof security for IoTs, especially in the healthcare environment [41], [42]. Healthcare devices are service-critical, recording sensitive patient data for smart diagnosis, AI-driven disease profiling, vitals management, etc. Any malfunctioning within the IoT devices can lead to severe consequences; for instance, a smart ventilator machine's failure can be instantly fatal for an ICU patient.
Recently a vulnerability was discovered in GE Aestiva and Aespire anesthesia devices that allowed a hacker to bypass the authentication mechanisms and manipulate the drug levels, causing serious health injuries to patients which could be fatal [4]. Similarly, there is a breach of trust when insiders compromise the sensitive healthcare data of patients and sell it on the black market for personal gains. To protect the privacy of individuals' data, HIPAA and GDPR impose heavy fines on organizations that mishandle or leak the data of patients without their prior consent [5], [6]. Furthermore, IoT device manufacturers often do not comply with security standards while designing such healthcare devices, and security is usually an afterthought. This leads to the necessity of having adequate security mechanisms in place which are diverse and comply with modern health standards like HIPAA and GDPR. Access control and identity management have been an Achilles heel for IoTs due to their heterogeneous nature and scalability issues. Ownership and identity relationships in the IoT are closely related to the authentication and authorization of devices and individuals, respectively. The owner of an IoT device may change over time and may be asked for authentication. Moreover, the data collected by a device needs proper authorization mechanisms in order to ensure privacy and traceability. Conventional authentication mechanisms like passwords are no longer effective, and most devices are compromised due to folk-model implementation of security in these devices by manufacturers. No standard security protocol exists for IoTs; hence, a number of proposed authentication and authorization protocols exist [10], [20]-[22], [25], [31]. These protocols fall short in different aspects of security and efficiency for IoTs, as discussed in the following section. In this research, we leverage blockchain technology to tackle the highlighted issues in healthcare IoTs through Hyperledger's certificate-based identity solution, avoiding third-party reliance and achieving distributed trust. Furthermore, fuzzy logic handles the uncertainty of device behavior through context- and trust-driven logic, providing an adaptive security mechanism for IoT and other network devices.

A. CONTRIBUTIONS The following contributions are made to the healthcare industry through this research: • This work addresses the authentication and trust issues in IoTs for healthcare through a novel approach using blockchain, enhancing security. • This paper utilizes fuzzy logic for an adaptive authentication and authorization mechanism providing AAA services without a central server, third-party reliance or password-based security mechanisms. • The proposed system, FBASHI, is implemented and a comprehensive security and performance analysis is performed. FBASHI is shown to be practical and security-wise effective for IoT-based distributed architectures.

B. ORGANIZATION Section II discusses the existing authentication protocols in healthcare IoTs, including the ones utilizing blockchain, by briefly discussing their pros and cons. In Section III, preliminaries related to blockchain (Hyperledger) and fuzzy logic are presented to enhance the understanding of the proposed healthcare security framework. Section IV discusses the proposed framework via a scenario that aids in formalising design goals.
Section V discusses the threat model by highlighting the attack vectors and the mitigation strategies that are put in place within the proposed framework. Section VI provides a comparative analysis against the state of the art. This section also gives a performance-based analysis for a practical use case. Conclusion and future work are drawn towards the end in Section VII.

II. RELATED RESEARCH Many IoT-based authentication protocols have been designed, but only a few exist that are specific to healthcare-based IoTs [7], [8]. Amin et al. [10] proposed an anonymous password-based authentication protocol for wireless medical sensors. The protocol utilizes a hash function and a session key for mutual authentication, verified by the BAN logic model. Jiang et al. [9] improved the password-based authentication work of [10]; both protocols rely on password-based authentication, which is susceptible to guessing attacks and weak-password vulnerability. Ferrag et al. [11] carried out a comprehensive survey of around 40 authentication protocols designed for IoT. These protocols mostly cater for a specific attack in the IoT domain and do not provide a comprehensive solution. Due to the ubiquitous and heterogeneous nature of IoT, access control and identity management are a major concern. Rivera et al. [12] used OAuth 2.0 to define an access control model for IoT. The drawback of this model is that it relies on third-party services and a centralized architecture. Significant work exists on authentication mechanisms and access control, but few approaches incorporate both [13]-[15]. Identity-based access control models have a central identity server or a trust server to manage the access control [16]-[18]. These servers induce a single point of failure and make the system less resilient to network attacks. The DTLS protocol has been used to achieve security in IoTs [19]-[22], but these approaches lack MFA and dynamic access control, and are resource-intensive. Blockchain has some intrinsic security properties, such as distributed trust, transparency and immutability, which can be utilized for achieving overall security for different systems [23], [24]. Zyskind et al. [25] used blockchain to ensure the privacy of user data but only utilized the blockchain for storing access control information, thus wasting its true potential. Similarly, Gauravaram et al. [26] utilized blockchain to store access control policies to achieve immutability and the distributed property, but did not apply identity management and authentication mechanisms. Furthermore, their approach underutilized blockchain's computational capability. Ouaddah et al. [27] utilized the true computational potential of blockchain to achieve decentralized access control. They used access tokens for delegating access rights to other peers through transactions. The access control policy was part of a locking script which has to be unlocked by the possessor to prove he has the token. The computational capability of a locking script is more limited than that of a smart contract; thus, this model is less efficient. Zhang et al. [28] utilized smart contracts, a feature of the Ethereum blockchain, for access control in IoTs. Their architecture is designed around gateways; thus, gateways are assumed to be trusted entities and are not truly verified. Ramachandran [29] also utilized smart contracts for access control but only stored access control policies, time of day, the signature of the last change, logs, etc. Qu et al. [30] used blockchain to verify the credibility of an IoT device.
The model uses the gateway as a trusted entity for the connected IoTs. Azaria et al. [31] utilized blockchain to access, store and modify health records. Their model only ensures the security of health-related data instead of the underlying system. These approaches [28], [29], [31] are based on the proof-of-work consensus model, which has the inherent 51% problem, making it vulnerable to cyber attacks. Kim and Lee [32] implemented a zero-knowledge proof on an authentication server to protect smart meter data stored on the blockchain. They used the primitive method of username/password-based authentication, which necessitates an authenticating server and thereby introduces a single point of failure. Banerjee et al. [33] suggested a blockchain-based solution for compromised firmware detection and self-healing. They stored the Reference Integrity Metrics (RIM) on the blockchain to ensure integrity. Huh et al. [34] proposed a blockchain-based IoT management system which manages the electricity usage of a smart meter by implementing an Ethereum smart contract. Different blockchain solutions [35]-[37] were analyzed based on security, scalability and compatibility, and Hyperledger Fabric was found most suitable for the healthcare domain, being a consortium blockchain ensuring privacy, scalability and compatibility with other systems. For IoT, an efficient mechanism is required for authentication and authorization based on trust, as many devices work mutually and a single device acting maliciously can compromise the whole network. Fuzzy logic-based systems can quantify trust to handle uncertainty in a better way and can be utilized for malicious behavior detection [38]. Mahalle et al. [39] utilized fuzzy logic for access control in IoT, but their approach is centralized in nature and introduces a single point of failure in the system. Furthermore, their approach has scalability issues, as all trust logic is centrally located. Walker [40] generalized the idea of risk-based authentication and emphasized its application in the IoT domain. Thus, risk-based authentication forms the basis of our concept of adaptive security to achieve trust and access control in the healthcare environment.

III. PRELIMINARIES OF PROPOSED SCHEME The proposed scheme is based on Hyperledger Fabric and fuzzy logic. This section explains the preliminaries needed for understanding the adaptive security framework. Each term is explained briefly below:

A. BLOCKCHAIN ELEMENTS The entities involved in the blockchain and their associated terminologies are discussed below: • Client Clients are the end users; they are not directly involved in the blockchain process but are the main entities involved in transactions. In our case, the SP (Service Provider) and RE (Requesting Entity) are the clients, and they interact with the blockchain through anchor peers; for an SP this is a gateway, while an RE (doctor, nursing staff, administrator) can itself be designated as a peer. The client is also registered to the blockchain network and therefore has a particular identity and a certificate issued by the CA. Clients submit their transactions to the blockchain through an anchor peer and, once a transaction is successful, receive the response through the same peer. • Anchor Peer This is an entity which directly interacts with the blockchain. It can be the RE itself or a gateway in the case of IoTs. It is an SDK client which submits the actual transaction invocation to the endorsers and broadcasts transaction proposals to the ordering service.
• Peers Peers are the nodes which are an active part of the blockchain network, and they perform one or many roles in the blockchain. These are the nodes responsible for maintaining the ledger. The following are the types of peers in our blockchain network: 1) Endorser An endorser, or endorsing peer, is one which simulates the transaction by running the chaincodes (smart contracts in Hyperledger) related to a particular transaction before it is committed to a block. Every chaincode specifies an endorsement policy which defines all the necessary conditions for a transaction to be termed valid. Furthermore, the endorsers compare the generated read-write (RW) sets with the existing ones in the ledger and validate them individually. Every endorser verifies all the signatures and identities associated with a transaction, and each endorser forwards the signed transaction, now called an 'endorsed transaction', to the anchor peer. 2) Committing Peer This is the peer specified or selected by the blockchain to commit the transaction to the blockchain network. The leading peer, discussed under the channel below, is usually the committing peer. This peer commits the transaction to the block as specified by the ordering service and initiates the gossip protocol for the ledger update by the other peers of the channel. This peer can be elected through consensus or may be assigned a specific role. 3) Ordering Service The ordering service provides the communication channel to all the participants of the blockchain and guarantees deliveries. The ordering service can be implemented in a variety of ways using different node fault models. It provides connectivity between clients and peers through the channel. Clients broadcast their transaction requests, which are delivered to all peers. The channel supports atomic delivery of all messages. • Channel A channel is a mechanism for managing communication between entities participating in the blockchain network. A channel logically behaves like a LAN, where all the data and transactions are private within the channel and no data is shared with outside peers. In the healthcare environment, data privacy is of utmost importance; therefore, each department has a separate channel, and a device or entity can be part of more than one channel. For example, if a doctor has his duty in the medical department but also performs duties in the emergency ward, he will have two separate datasets for each channel; however, his same identity will work across both channels. When a new channel is created, a genesis block is formed which stores the configuration information about the channel policies, members and anchor peers. When a new member is added to an existing channel, either the genesis block or a more recent reconfiguration block is shared with the new member. A leading peer is also elected, which has the responsibility to determine which peer communicates with the ordering service on behalf of the member. If no leader has been designated, then a leader is chosen through consensus. The ordering service orders transactions and delivers them to each leading peer in the form of a block, which then distributes the block to its member peers across the channel using the gossip protocol. The propagation of data includes transaction information, ledger state and channel membership, and is restricted to only those peers which have verifiable membership for the channel. • Ledger The ledger provides a verifiable history of all successful and unsuccessful transactions occurring over the blockchain.
The ordering service is responsible for the construction of the ledger by maintaining an ordered hashchain of blocks of transactions. The hashchain imposes the total order of blocks in a ledger, where each block is an array of totally ordered transactions, which formulates an entirely ordered blockchain. All peers have ledgers; optionally, the orderer can also have a ledger, which is called the 'orderer ledger'. All other peers have peer ledgers, and they can replay the history of transactions to update or reconstruct the ledger state.

B. IDENTITY MANAGEMENT Identity is an integral part of any IT system; it aids in mapping the various actors in an organization to their roles in the system. These actors then verify their identity through authentication mechanisms and are authorized to perform certain actions allowed by the system. Without centralized identity management, it is a challenge for IT professionals to manage authentication and authorization across a wide range of devices. X.509 certificates in Hyperledger Fabric are responsible for the provision of a detailed identity which is verifiable by the system administrators. In our blockchain-based framework, two entities play a vital role in identity management, as follows: 1) CERTIFICATE AUTHORITY A Certificate Authority (CA) is an entity responsible for dispensing certificates to the various actors in a network. These certificates bind the public key of the principal to various associated attributes and are digitally signed by the CA. Consequently, if the CA is a trusted entity and its public key is known, then one can trust that a specific principal holding a valid certificate owns the included attributes and public key, by validating the CA's signature on the principal's certificate. Three kinds of certificates are issued: enrollment certificates, transaction certificates and TLS certificates. The CA can be of various types, as shown in figure 1, e.g., root CA, department CA and local CA. If an entity, for instance a patient, is issued an identity by the root CA, his identity will be available in every department. The CA's role includes: 1) registration of identities, 2) issuance of certificates, and 3) certificate renewal and revocation. 2) MEMBERSHIP SERVICE PROVIDER Once an identity is issued, it must be verifiable. For this purpose, we require another entity, known as the MSP (Membership Service Provider). Trust has been further distributed in FBASHI by delegating the responsibility of verification to the MSP instead of the CA. The MSP is also responsible for managing identities once they have been created by the CA. The MSP can also be deployed at any level, depending on the network configuration.

C. AUTHENTICATION FIS In this research, three main Fuzzy Inference Systems (FIS) are used; their designs and logic are discussed in this section for an understanding of the architecture. The authentication mechanism is designed to achieve adaptivity through risk assessment based on parameters usually available in network packets, such as the HTTP header. The RE will always initiate a transaction request in relation to the context of healthcare. Thus, all transactions must contain the patient's ID along with the RE and SP IDs. We define a Mamdani FIS for our authentication system, as shown in figure 2. The parameters for our framework are IP address, MAC address, time of day, operating system and location. These parameters are analyzed in conjunction with the history of transactions maintained by the blockchain. Each parameter is analyzed separately, and the frequency distribution for that particular parameter is calculated.
This frequency distribution is normalized to get the membership functions for each fuzzy set associated with a parameter. For example, in figure 3, three fuzzy sets for each parameter, seldom, usually and always, are shown. The membership function is along the y-axis and the set values are along the x-axis. The Mamdani FIS is used to calculate the fuzzy output, which is the type of authentication mechanism. With 5 parameters, each having 3 fuzzy sets, a complete rule base of 3^5 = 243 rules can be defined for the fuzzy system; figure 4 shows 9 rules due to space constraints. In the stated example, the frequency distributions of the parameters are 0.206 for IP, 0.55 for MAC address, 0.179 for time of day, 0.133 for operating system and 0.095 for location. The resulting output of the fuzzy system is 0.391, indicating biometric authentication. The output contains 3 fuzzy sets, Biometric, OTP (One-Time Password) and CA, whose membership functions are shown in figure 5. According to the given parameters, MFA is applied and the RE is required to authenticate through the particular method given by the fuzzy output. If the membership function of the device is maximal for Biometric, then the device will be authenticated through biometrics. Furthermore, biometric and OTP-based authentication also involve an OTP being sent to the patient's device for endorsement. On successful authentication, a nonce generated by the IoT device during the previous transaction is hashed with the hash of the last valid transaction, and the new hash is treated as direct knowledge $K_d$ for the RE.

D. TRUST EVALUATION FIS The purpose of this function is to provide trust feedback, based on previous transactions, as input to the fuzzy logic of the authorization transaction. The trust feedback, along with authentication, provides sufficient proof for the fuzzy logic to apply rules assigning the type of access privileges the RE can have. The RE request is mapped to a particular access-right permission set according to the trust feedback score. The trust of a device consists of three main elements [39]: 1) Experience: the transaction experience, which depends on the previous transactions between RE and SP, aggregated over the transaction history as $^{RE}E_{SP} = \frac{1}{n}\sum_{t=1}^{n} E_t$ (1), where $^{RE}E_{SP} \in [-1,1]$ and $E_t$ is +1 for a successful transaction and -1 for an unsuccessful one. The membership functions and fuzzy sets of $^{RE}E_{SP}$ are shown in figure 6. 2) Knowledge: $K_d$ is calculated in each transaction; if the $K_d$ provided by the RE differs from the one generated by the SP, the value -1 is assigned, otherwise +1, and the aggregate value is $^{RE}K_{SP} = \frac{1}{n}\sum_{t=1}^{n} K_{d,t}$ (2), where $^{RE}K_{SP} \in [-1,1]$ denotes the knowledge of the RE with respect to the SP. The membership functions and fuzzy sets for $^{RE}K_{SP}$ are shown in figure 7. 3) Reputation: the last element is the reputation, calculated by the blockchain based on the experiences of all devices with respect to the RE. In this case the context is the RE; thus, the reputation is $R_{RE} = \frac{1}{N}\sum_{SP} {}^{SP}E_{RE}$ (3), where $R_{RE} \in [-1,1]$ denotes the experience of the BAN's N SP devices with the RE. The membership function and fuzzy sets associated with reputation are shown in figure 8. The fuzzy output in terms of trust is calculated based on 27 rules and is shown in figure 9.

E. ACCESS CONTROL FIS The last function is the access control function. In this function, the trust and authentication linguistic values of the previous functions are taken as input, and access control is given as the output, as shown in figure 10. The access rights are linguistically defined as {φ, Read, Read/Write, Read/Write/Execute}, and their membership functions are shown in figure 11.
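Since the membership functions and the full rule base live in the figures, a compact illustration of the Mamdani mechanics may help. The following is a minimal Python sketch, assuming triangular sets, single-antecedent rules (one per parameter and fuzzy set, rather than the full 243-rule grid) and illustrative output cut-points; it will not reproduce the paper's 0.391 result exactly, only the shape of the computation.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with shoulder handling at a==b or b==c."""
    x = np.asarray(x, dtype=float)
    left = np.ones_like(x) if b == a else np.clip((x - a) / (b - a), 0.0, 1.0)
    right = np.ones_like(x) if c == b else np.clip((c - x) / (c - b), 0.0, 1.0)
    return np.minimum(left, right)

U = np.linspace(0.0, 1.0, 201)  # common universe for inputs and output

# Input sets: how often this parameter value appears in the RE's history.
IN_SETS = {"seldom": (0.0, 0.0, 0.5),
           "usually": (0.0, 0.5, 1.0),
           "always": (0.5, 1.0, 1.0)}

# Output sets: which authentication mechanism to demand (assumed cut-points).
OUT_SETS = {"CA": (0.0, 0.0, 0.35),           # unfamiliar -> re-validate certificate
            "Biometric": (0.2, 0.45, 0.7),
            "OTP": (0.55, 1.0, 1.0)}          # familiar -> lighter factor

RULES = [("seldom", "CA"), ("usually", "Biometric"), ("always", "OTP")]

def authenticate(freqs):
    """Mamdani inference with max aggregation and centroid defuzzification."""
    agg = np.zeros_like(U)
    for f in freqs:                            # one rule set per input parameter
        for in_set, out_set in RULES:
            strength = float(trimf(f, *IN_SETS[in_set]))
            agg = np.maximum(agg, np.minimum(strength, trimf(U, *OUT_SETS[out_set])))
    return float((agg * U).sum() / (agg.sum() + 1e-12))

# Frequencies from the worked example: IP, MAC, time of day, OS, location.
print(authenticate([0.206, 0.55, 0.179, 0.133, 0.095]))
```

The defuzzified score is then mapped to whichever output set has the maximal membership at that value, mirroring the selection of biometrics in the worked example above.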
The authentication input provides a fresh behavior input for the RE, whereas the trust function provides a feedback-based input; in this way, access control is adjusted according to device behavior. For example, if trust is low and the device has authenticated through biometrics, the output is No Access, as shown in figure 12. The device's access is revoked, it is asked to re-validate its certificate through the admin, and the admin is notified. If trust is high and authentication is OTP-based, then the access assigned is different. If a device is assigned No Access, the RE is deemed malicious, its access is revoked, it has to re-validate its certificate through the CA, and the experience element $E_t$ of $^{RE}E_{SP}$ is accordingly given the value -1 for this transaction. Otherwise, access is granted on the basis of least privilege. For example, if the output access right is Read/Write whereas the permissions defined for the device only contain read access, the device will be granted only read access.

IV. FBASHI ADAPTIVE SECURITY FRAMEWORK The hospital is the core organization for the testing and implementation of our framework. Hyperledger channels are deployed at the departmental level and are part of the main chain run at the hospital level. Similarly, a hospital can be part of a consortium, thus forming part of a bigger blockchain. This way, the network is layered in nature and scalable as well. From this point onwards, the framework is discussed at the departmental level; it applies in the same way to every department and scenario. As this architecture has been designed specifically for IoT devices, these devices are mostly deployed for a specific service at a departmental level, and it is highly unlikely that someone from another department will request access to device data directly. Likewise, it is highly unlikely that a device is moved temporarily from one department to another; if such is the case, the device will be re-registered in the new department. Figure 13 shows the basic layout of the medical department. The IoTs associated with a patient are connected to a gateway which is part of the blockchain and acts as the anchor peer for the IoT devices. Caregivers form an integral part of the blockchain network and are randomly assigned blockchain peer roles in accordance with the privileges defined in their certificates.

A. TRANSACTION FLOW IN BLOCKCHAIN To understand the transaction flow in a semantic way, a toy scenario is considered where a doctor wants to get the ECG readings of a patient from an ECG machine, which we call $SP_{ECG}$; the doctor is the requesting entity, $RE_D$, in this case. The doctor can be serving in multiple departments in a hospital; for example, a heart specialist will have emergency duty in the medical emergency department. Thus, in order to carry out the transaction in focus, which is in the medical department, he has to interact with the blockchain using the identity associated with this department. In the healthcare environment, the patient's privacy is paramount and is catered for by adding the patient as context in every transaction. The transaction flow in Hyperledger is shown in figure 14, and each phase of the transaction is discussed below: 1) $RE_D$ initiates a request-access transaction by sending the transaction parameters using the blockchain protocol of Hyperledger. Clients are connected through anchor peers, as already discussed; in this case, $RE_D$ is itself an anchor peer and can initiate the transaction. The transaction packet contains the following parameters: $T_A = \{ID_{RE} \,\|\, ID_P \,\|\, ID_{SP} \,\|\, AccessType \,\|\, Nonce_{SP}\}$.
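As a concrete reading of the direct-knowledge chaining and the request packet above, the short sketch below derives $K_d$ and serializes $T_A$. The SHA-256 hash and the '||'-separated string encoding are assumptions for illustration; the paper does not fix a wire format.

```python
import hashlib, os

def direct_knowledge(prev_nonce: bytes, last_tx_hash: bytes) -> bytes:
    """K_d: hash of the nonce from the previous transaction concatenated
    with the hash of the last valid transaction on the ledger."""
    return hashlib.sha256(prev_nonce + last_tx_hash).digest()

def access_request(id_re: str, id_p: str, id_sp: str,
                   access_type: str, nonce_sp: bytes) -> bytes:
    """Serialize T_A = {ID_RE || ID_P || ID_SP || AccessType || Nonce_SP}."""
    return "||".join([id_re, id_p, id_sp, access_type]).encode() + b"||" + nonce_sp

# Example: a doctor (RE_D) requests read access to a patient's ECG (SP_ECG).
nonce_sp = os.urandom(16)                              # sent by the SP after the last tx
k_d = direct_knowledge(nonce_sp, hashlib.sha256(b"last-valid-tx").digest())
t_a = access_request("RE_D", "PAT_042", "SP_ECG", "Read", nonce_sp)
print(k_d.hex()[:16], len(t_a))
```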
2) The transaction parameters are verified by the endorsing peers, depending in each case on the group of devices interacting. A set of endorsing peers is nominated; these can be assigned weights or can use any pluggable consensus algorithm supported by Hyperledger Fabric. The endorsing peers simulate the read/write sets of the transaction, meaning they run the chaincodes and verify the inputs and outputs. 3) After the successful run of the chaincode, the endorsing peers send back the endorsed transaction (including their signatures) to the anchor peer. 4) The anchor peer forwards the endorsed transaction to the orderer, who verifies it. All transaction validations involve a local MSP running within each peer as a separate module; it is responsible for verifying all the signatures of every transaction. 5) After verification, the orderer assigns a block number to the transaction and initiates the gossip protocol. Once the gossip protocol is initiated, all the peers of the concerned channel update their ledgers. 6) An event is generated on completion of this transaction, and $RE_D$ is granted access according to the doctor's current access-right set. After a successful transaction, the SP generates a simple transaction to send a nonce to the RE, which is also recorded on the ledger.

B. TRANSACTION LOGIC The main driving force of our adaptive security mechanism is the chaincode part of the transaction. Here we try to utilize the computational power and rich features of chaincode for the maximum benefit of driving security in a distributed fashion. In order to work efficiently, the framework requires data from at least 50 transactions stored on the blockchain. Thus, biometric-based verification is carried out in the initial transactions and predefined access rights are used; after the 50th transaction, the framework is initialized. The transaction logic is presented as three functions for ease of understanding; however, they constitute parts of the same chaincode. Figure 15 gives an overview of the chaincode logic described in algorithms 1, 2 and 3. Algorithm 1 describes the authentication Fuzzy Inference System (FIS), Algorithm 2 the trust evaluation FIS, and Algorithm 3 the access control FIS.

V. SECURITY ANALYSIS The framework was designed in MATLAB and tested for different use cases. The parameters were chosen at random to validate the concept and analyze the outputs of each function. The surface view in figure 16 shows the input/output domain of IP address and MAC address. The frequency distribution of both inputs is directly proportional to the authentication mechanism in use. The MATLAB-tested logic was then applied to Hyperledger Fabric for function validation. The architecture is validated, and as the number of transactions increases, the fuzzy output gives more precise results. The system was found to be scalable, as every device communicates on a channel basis and the transaction throughput is 10,000 transactions per second for Hyperledger Fabric. The threat model is presented below: 1) Attackers Attackers, whether insiders or outsiders, mostly interact with the system as users. In healthcare monitoring systems, the attacker can be an insider compromising EHRs and selling them on the black market, or an outsider with ill intentions to malign the hospital's reputation by disturbing the working mechanisms of medical devices. Recently, for example, a vulnerability was found in the authentication of GE Aestiva and Aespire anesthesia devices [4].
This vulnerability allows a remote attacker to modify device parameters, such as changing the gas density. 2) Assets Hospitals provide healthcare services which are life-critical, and thus any system or device dealing with any sort of healthcare data is treated as an asset. The data is collected from sensors and synthesized by special servers into intelligent information, which can be translated into the patient's health records for analysis and treatment by caregivers. This data is then stored, either in a hospital database or integrated with cloud services, for interoperability between various medical organizations, government bodies and services like insurance. The following assets evolve through the proposed hospital monitoring system: a) medical IoTs, b) caregivers, c) patients' health records, d) gateways and database servers involved in computations. 3) Threats Healthcare faces more imminent threats because of the high value of patient information on the black market and the large volume of sensitive data that is easily available, since cyber security is given the least importance in healthcare. Protection against cyber threats in compliance with HIPAA can be challenging, and any oversight could easily cost a breach or a regulatory fine. The following threats, identified in the healthcare environment, are required to be mitigated by our suggested solution: 1) unauthorized access to medical sensors and devices; 2) tampering with recorded patient data; 3) corruption of data by collusion of peers; 4) leakage of information between various tiers (hospital, cloud services and other organizations); 5) accidental or deliberate loss of data by caregivers; 6) unauthorized access to medical data by users in contrast to their assigned roles and responsibilities; 7) manipulation of activities and audit logs. Table 1 enumerates the mitigation strategies, achieved through our framework, against the most common threats, to meet the security objectives for IoTs in healthcare.

VI. COMPARATIVE ANALYSIS The main objective of our framework is to achieve adaptive security based on user behavior without depending on traditional security mechanisms like passwords and tokens. Moreover, a centralized architecture presents a single point of failure and is thus vulnerable to many attacks, such as DoS and ransomware attacks. Most of the research work in this domain relies on a central architecture, and very few efforts have utilized the true potential of blockchain technology. Furthermore, most of the work relies on a single authentication mechanism, which may be subverted by adversaries; our system therefore adapts by applying second-factor authentication based on users' attributes and behavior. Table 2 shows a comparative analysis of our framework with existing solutions.

A. USABILITY AND COMPARISONS WITH OTHER BLOCKCHAINS Permissionless, or public, blockchains face various challenges regarding performance parameters. Public blockchains like Bitcoin and Ethereum are mostly based on PoW consensus, which is resource-intensive and involves high latency in order to achieve security. Some public blockchains like Litecoin have a reduced block formation time of 2.5 minutes, compared to the 10 minutes of Bitcoin. Consequently, Litecoin uses a smaller number of hashes to verify a block compared to Bitcoin. This problem is absent in Hyperledger because consensus is achieved through PBFT (Practical Byzantine Fault Tolerance), which depends on predefined endorsers, and trust is anchored by the governing body.
Thus, there is virtually no deliberate latency for achieving security, and a block is formed as soon as it is verified by the endorsers. Security and performance can be achieved in a similar manner as in traditional networks by limiting the channel users to the concerned parties, analogous to the concept of VLANs in traditional networks. This enables privacy and scalability at the same time by segregating different parts of the network from each other. Therefore, FBASHI was implemented on the Hyperledger blockchain to ascertain its practical feasibility in comparison to the existing state of the art and other blockchain-based solutions. The performance is evaluated and compared to other blockchains below: 1) LATENCY Transaction latency is the time a transaction takes from the point it is submitted to the network to the point it is committed by all peers to the ledger. Hence, performance and throughput rely on this parameter; latency is the pivot point for the performance of Hyperledger Fabric. As the blocksize increases, the latency reduces because the orderer has fewer transactions in the backlog when the transaction rate is high. As shown in figure 17, a smaller blocksize is suitable for lower transaction rates, but in our case a larger blocksize is more suitable for achieving high tps. Thus, blocksize is a major tuning parameter when configuring Hyperledger Fabric as per the application's demand. 2) THROUGHPUT Throughput depends on how quickly transactions are committed after due endorsement by each endorser. The performance can further degrade if we use endorsers from multiple medical departments. This is the reason behind configuring a separate channel on a departmental basis and load-balancing endorsement to achieve maximum performance for the proposed architecture. Figure 18 clearly shows that as we increase the number of endorsers, the throughput in terms of tps increases, but this is only valid when there is load balancing between the endorsers and they are not from multiple organizations. We achieve this linearity by limiting the endorsers to the departmental level, thus reducing the lag. This can get worse if all endorsers are included for the same task, resulting in saturation and consuming all the available CPU resources allocated to the container. Thus, these parameters must be tweaked according to the requirements of the application.

VII. CONCLUSION AND FUTURE WORK Over a period of time, device behavior must remain consistent, and a user of a system in a hospital is most likely to use the same machine with the same IP, location and device. Thus, this behavior must fall within a specific range. This research normalizes device behavior through FIS using Hyperledger Fabric to achieve distributed trust and fuzziness, and to remove the single point of failure from AAA services. Patient endorsement through OTP improves security and privacy, satisfying HIPAA and GDPR compliance, as the patient must be in full control of his data. Our framework successfully detects malicious behavior and thwarts various types of threats against IoT in healthcare. In the future, we intend to explore the AI capability of blockchain by including more parameters and expanding this framework to other parts of the network for foolproof security. CONFLICT OF INTERESTS There are no conflicts of interest for any of the authors. MEHREEN AFZAL graduated in mathematics and received the Ph.D. degree in information security from NUST, Pakistan, in 2010. She is currently an Associate Professor at Air University, Islamabad, Pakistan. Her contributions include research articles on cryptanalysis and the design of cryptographic algorithms/protocols.
Her research interests include information security and cryptology. WASEEM IQBAL received the bachelor's degree in computer sciences from the Department of Computer Science, University of Peshawar, in 2008, and the master's degree in information security from MCS-NUST, in 2012, where he is currently pursuing the Ph.D. degree. He is currently an Assistant Professor with the Department of Information Security, NUST. He is also an academician, a researcher, a security professional, and an industry consultant. His professional services include, but are not limited to, industry consultation and roles as a workshop organizer/resource person, technical program committee member, conference chief organizer, invited speaker, and reviewer for several international conferences. He has authored over 35 scientific research articles in prestigious international journals (ISI-indexed) and conferences. ABDUL REHMAN received the B.S. degree in software engineering from Foundation University Islamabad, Pakistan. He is currently pursuing the M.S. degree in information security with NUST, Islamabad. He is currently serving as a Research Associate with the National Cyber Security Auditing and Evaluation Laboratory (NCSAEL). His interests include cyber forensics, data security, and privacy. He is currently an Associate Professor in information security, a Supervisor of the Saudi Aramco Cybersecurity Chair, and the Dean of the College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Saudi Arabia. He has published two patents and more than 40 scientific articles in journals and premier ACM/IEEE/Springer conferences. His research interests include mobile security, authentication and identification, and ubiquitous wireless access.
Multiscale Modeling of a Chain Comprising Selective Laser Melting and Post-Machining toward Nanoscale Surface Finish The generation of rough surfaces is an inherent drawback of selective laser melted (SLM) material that makes post-treatment a mandatory process to enhance its surface condition and service performance. However, planning an appropriate and optimized chain to attain the best surface finish needs an integrated simulation framework that includes the physics of both additive manufacturing and post-processing. In the present work, an attempt is made to model the alteration of the surface roughness that is produced by SLM and post-processed by milling and sequential surface burnishing. The framework includes a series of closed-form analytical solutions of all three processes embedded in a sequence where the output of the preceding operation is the input of the sequential one. The results indicated that there is close agreement between the measured and predicted values of arithmetic surface roughness for both the SLM material and the post-processed ones. It was also found that a nanoscale surface finish is obtained by finishing milling and single-pass rolling at a static force of 1500 N. In addition, the results of the simulation showed that elimination of the milling process from the chain resulted in a six-times-longer production time and required a three-times-bigger rolling force compared to a chain with the milling operation included.

Introduction A rough surface and poor structural integrity are known as the main drawbacks of materials produced by SLM. Post-processing, including thermal, mechanical, and thermo-mechanical treatments and their combinations, is known as a plausible route to enhance the surface condition of additively manufactured materials through removing surface anomalies, refining the microstructure, and generating compressive residual stress [1]. The selection of the appropriate post-processing techniques and their sequences needs knowledge about the properties of the additively manufactured material to be produced and the characteristics of the post-processing treatments. Machining, as a mechanical surface treatment, is usually used for enhancing the surface roughness of an SLM material through removing the rough surface layer. Nonetheless, while it has a great impact on the quality of the surface finish, it does not significantly change the mechanical and metallurgical aspects of the surface integrity. Therefore, other mechanical, non-metal-removal post-processing methods like peening and rolling (and their different alternatives) are used as a sequential treatment after machining [2]. Surface rolling and its similar alternatives, like burnishing, have been extensively utilized for the post-processing of SLM material. The process can be used either as a sequential treatment immediately after the SLM process or as a downstream process after machining. There are several research efforts which have used a chain of additive manufacturing followed by machining for surface property enhancement of metals produced by additive manufacturing. Rotella et al. [3] used a post-processing chain of heat treatment, turning, and burnishing to analyze the fatigue life and surface integrity of samples produced by laser powder bed fusion. They found that rolling speed and force have a great impact on surface integrity and fatigue life. Teimouri et al.
[4] applied a surface rolling process to enhance the roughness of stainless steel SLM parts. They reported that the surface roughness can be significantly reduced by increasing the rolling depth up to a certain level. Zhang and Liu [5] used a sequence of turning and burnishing for property enhancement of laser-clad Cr-Ni-based stainless steel. They showed that the surface integrity of the material processed in the chain depends on the initial condition of the material before reaching the chain's last operation, burnishing. Varga et al. [6] applied sliding friction burnishing for property enhancement of Ti-6Al-4V fabricated by selective laser melting. They revealed that the final surface roughness of post-processed samples depends on the surface roughness of the as-built material. Sayyadi et al. [7] revealed that burnishing can be used as a final post-treatment operation after selective laser melting and shot peening to enhance the fatigue life of stainless steel 316. They confirmed that the sequence of shot peening followed by burnishing yields better roughness than shot peening or burnishing alone. Zhang et al. [8] applied warm ultrasonic surface rolling followed by heat treatment to additively manufactured Fe-based layers. They proved a significant improvement in the states of residual stress, porosity, and hardness compared to the as-cladded material and the material processed in cold conditions or in warm conditions without heat treatment. Raaj et al. [9] applied burnishing for the post-processing of alloy 718 built by electron beam additive manufacturing. They revealed that superior surface integrity (roughness, hardness, and residual stress) is obtained when burnishing is used after grinding of the as-built material. Yaman et al. [10] applied surface rolling for surface integrity enhancement of Inconel 718 produced by SLM. They showed that increasing the rolling force from 250 N to 750 N significantly enhances the microhardness and wear resistance of an SLM material. However, the improvement in surface hardness was not as significant when the single-pass burnishing process was carried out on the SLM material. Hao et al. [11] used ultrasonic surface rolling for post-treatment of AM Inconel 718 fabricated by high-speed laser cladding. They proved that surface hardness and wear resistance are greatly improved following the ultrasonic surface rolling process. Sunny et al. [12] used interlayer surface rolling for enhancement of the residual stress state of Inconel 625 fabricated by SLM. They reported that the residual stress changes from tensile to compressive for SLM material following the strategy of SLM + burnishing.
All the above-reviewed research used experimental approaches to assess the surface integrity evolution of materials produced by SLM and post-processed by mechanical treatments. However, chain design based on such an experimental approach is costly, time-consuming, and full of uncertainties. To overcome this problem, the chain should be simulated through a series of mathematical expressions in which the output of one process is used as the input of the next. Numerical simulation based on the finite element method (FEM) has been widely used for modeling additive manufacturing processes, including SLM and directed energy deposition [13,14]; however, the method is computationally demanding and makes process optimization challenging. Moreover, applying FEM to the simulation of a chain is complicated, as the output of the upstream process needs to be used as the input of the downstream one. In contrast, simulation based on closed-form analytical models provides a means to effectively capture the physics of the variations along a chain, as well as to optimize the chain in terms of process quality characteristics and production time. Therefore, in the present work, an analytical framework is developed to simulate the surface roughness alteration of a material 3D-printed by the SLM process and post-treated by milling and surface rolling. In this study, we identify how the chain can be optimized by adjusting the parameters of each operation to attain the minimum surface roughness, subject to a high production rate.
Materials and Methods
This section includes two main parts: in the first part, the simulation framework used for modeling the surface roughness is described in detail; in the second part, the experimental approach for confirming the developed simulation model is explained.
Simulation Framework
In order to simulate the surface roughness evolution in a chain starting with selective laser melting and followed by milling and burnishing, the mechanics of surface generation in each process must first be identified through a series of analytical formulations. Then, based on the sequence in the chain, the surface roughness value of the upstream operation is taken as the initial value of the downstream one.
Analytical Modeling of Surface Generation in Selective Laser Melting
As a strong and proven assumption, surface roughness generation in the selective laser melting process is mainly attributed to the formation of caps on top of each solidified melt pool as a result of surface tension, as shown in Figure 1a,b. The cap is then duplicated over the surface as the 3D printing proceeds along a zig-zag pattern, as shown in Figure 1c. According to the figure, the surface profile is a function of the melt pool width (W), melt pool height (h), melt pool depth (d), and hatch spacing (S). The melt pool geometry itself is a function of the temperature distribution produced by the laser beam over the powder volume. It was proved in different studies [15-17] that a moving point heat source, derived from the general convection-diffusion problem and simplified and solved under the steady-state condition, describes well the temperature distribution from a laser heating source (Equation (1)), where T is the temperature of each point in the xz plane, T0 denotes room temperature, P is the heating source power, η is the powder absorption coefficient, V is the printing speed in the y direction, k denotes the heat conduction coefficient, and ρ is the density of the material.
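As a minimal illustration of Equation (1), the sketch below assumes the standard Rosenthal moving point-source solution, which matches the variable list above; the specific heat c (needed to form the thermal diffusivity) and the moving-frame sign convention are assumptions of the sketch, not reproductions of the exact expression.

```python
import numpy as np

def rosenthal_temperature(x, y, z, P, eta, V, k, rho, c, T0):
    """Steady-state temperature field around a moving point heat source
    (standard Rosenthal form). Coordinates are in the frame moving with
    the laser; y is the travel direction, matching the convention above.
    c (specific heat) is an assumed addition to the variable list."""
    alpha = k / (rho * c)                # thermal diffusivity [m^2/s]
    R = np.sqrt(x**2 + y**2 + z**2)      # distance from the source [m]
    return T0 + eta * P / (2.0 * np.pi * k * R) * np.exp(-V * (R + y) / (2.0 * alpha))
```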
In order to calculate the melt pool geometry, the melting temperature should be embedded in Equation (1). Accordingly, the melt pool depth (d) is calculated by solving the corresponding non-linear equation at x = 0 [4]. Once the melt pool depth is known, the profile of the melt pool underneath the zero surface, which has the shape of a paraboloid, can be calculated.
These equations describe the characteristics of the melt pool in the melting stage. However, in order to calculate roughness values, the melt pool geometry must be modified for the state after solidification, in which a hump forms on the top surface of the solidified melt pool. Previous studies report that the depth and width of the melt pool remain unchanged after solidification. Accordingly, to calculate the size of the hump, the mass conservation law is applied to the volume of the melt pool before and after solidification. It was also shown that treating the hump profile as part of an ellipse gives accurate results compared with experimental values when printing stainless steel 316L [16,17]. Therefore, the height of the melt pool cap is obtained from a closed-form expression in which t stands for the layer thickness and W0 denotes the powder band width.
Once the equation of the cap and the height of the melt pool are identified, the surface roughness generation profile (as shown in Figure 1c) is derived by duplicating the cap over the hatch spacing.
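As a minimal illustration of the two steps above, the sketch below finds the melt pool depth by locating where the centerline temperature falls to the melting point, and recovers the cap height from a per-unit-length mass (area) balance; the half-ellipse cap parameterization and the balance itself are simplifying assumptions standing in for the exact expressions.

```python
import numpy as np
from scipy.optimize import brentq

def melt_pool_depth(T_centerline, T_melt, z_max=1e-3):
    """Melt pool depth d: the z at which the centerline (x = 0)
    temperature drops to the melting point T_melt. Assumes the
    temperature crosses T_melt within (0, z_max)."""
    return brentq(lambda z: T_centerline(z) - T_melt, 1e-9, z_max)

def cap_height(W, t, W0):
    """Cap height h from a per-unit-length area balance: the half-ellipse
    cap (area pi*W*h/4) accommodates the added powder layer (area t*W0).
    This balance is an illustrative assumption, not the exact relation."""
    return 4.0 * t * W0 / (np.pi * W)
```

With T_centerline built from the Rosenthal sketch above, melt_pool_depth recovers d, and cap_height gives the hump height h that is duplicated at the hatch spacing S in Figure 1c.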
Analytical Modeling of Roughness Alteration by Milling
In this work, a sequence of post-processing methods including milling and burnishing is designed for enhancing the surface roughness of the 3D-printed material. First, the surface generation of the milling process is modeled using the principles of face milling. From the Rz value of the surface generated by the SLM process (which equals the cap height), the axial depth of cut under face milling conditions can be identified. Accordingly, based on the roughness height, the milling depth of cut and the milling cutter inserts, the new milled surface is generated. Considering that the face milling cutter insert has a squared profile with a nose radius rε and an approach angle α, the engagement of the milling cutter with the surface results in the generation of two new surface conditions.
During the milling of an SLM material's surface, in most cases the surface can be finished by a single-pass machining process, since the maximum roughness height is usually less than the maximum allowable depth of cut. Accordingly, the modified roughness profile after milling (as shown in Figure 2) depends on the roughness height from the previous operation (i.e., Rz0 of the SLM sample), the milling axial depth of cut (ap), the geometry of the cutting insert (cutter angles), and the feed per cutting insert (fz). When the difference between the depth of cut and the SLM roughness (i.e., ap − Rz0) is less than the tool nose radius (rε), the roughness profile depends on the feed per cutting insert, and a series of intersecting circles with straight lines between them develops, as shown in Figure 3. Here, the tool-workpiece engagement length AB is defined geometrically from the insert nose radius and approach angle. Accordingly, when fz is smaller than AB, the surface profile consists of a series of circular arcs (as shown in Figure 3b). When fz is greater than AB, the surface profile consists of a series of circular arcs with straight lines between them (as shown in Figure 3c); here i is a counter that can be set to i = 1, 2, . . ., n according to the number of surface profile segments.
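A minimal sketch of condition #1 above (feed smaller than the engagement length AB, Figure 3b), modeling the profile as a train of circular arcs of the nose radius spaced one feed apart; the closed-form scallop estimate stands in for the exact expression.

```python
import numpy as np

def milled_profile(f_z, r_eps, n_teeth=10, pts_per_feed=200):
    """Milled surface profile as intersecting arcs of radius r_eps spaced
    f_z apart (condition #1, f_z < AB; requires f_z < 2 * r_eps)."""
    x = np.linspace(0.0, n_teeth * f_z, n_teeth * pts_per_feed)
    dx = (x + f_z / 2.0) % f_z - f_z / 2.0   # offset to the nearest arc center
    z = r_eps - np.sqrt(r_eps**2 - dx**2)    # arc height above the valley line
    return x, z

def scallop_rz(f_z, r_eps):
    """Peak-to-valley height of the arc train; for f_z << r_eps this
    reduces to the classical estimate Rz ~ f_z**2 / (8 * r_eps)."""
    return r_eps - np.sqrt(r_eps**2 - (f_z / 2.0)**2)
```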
Analytical Modeling of Roughness Alteration by Burnishing
As a final finishing process, depending on the characteristics of the roughness and the properties of the material, burnishing can be used either immediately after production by SLM or after milling. In this work, by developing a simulation model of the chain, we aim to identify whether burnishing can be used immediately after selective laser melting to minimize the production time and, if so, how many passes are required to achieve the desired roughness, or whether a milling process is needed to reach it.
As burnishing is applied to a rough surface, the contact of a rigid solid roller with a rough surface is used to model the alteration of the roughness profile. The deformation of each individual asperity generated by the preceding operation (SLM-induced roughness, or milling-induced roughness under condition #1 or #2) differs when subjected to the first rolling pass. According to the contact geometry shown in Figure 4a, the deformation of the roller against SLM-induced roughness follows the contact of two cylinders. Here, in order to obtain the deformation depth and width, i.e., δ and a, the contact mechanics of two cylinders is applied. Accordingly, during the plastic deformation of two cylinders in rolling contact, the force corresponding to plastic deformation can be obtained from the relation given in [18], where FR denotes the load applied to each individual asperity, according to the number of asperities (N) being deformed by each roller; FR can be calculated from the contact width of the cylinder on a flat surface (based on the Hertz equation) and the scan spacing used during SLM, as expressed in Equation (10). P is the average contact pressure, which equals 3σs [18], a is the contact width, and L is the contact length, equal to the length of the roller. The relationship between the contact depth and the contact width (as shown in Figure 4a) is obtained from the geometry, neglecting second-order small terms [18], using Equation (11). By identifying the contact depth, the modified surface roughness height after the first rolling pass is obtained as the difference between the SLM roughness height and the plastic deformation depth, i.e., Rzb = Rz0 − δ.
In burnishing a milled surface, the deformation during rolling depends on the type of roughness profile (as shown in Figure 3). When surface rolling a milled surface with profile #1, the intersecting circles are assumed to form a wedge, as shown in Figure 4b [19]. The surface profile is then altered by the deformation of this wedge, from which the corresponding plastic deformation depth is calculated. Following previous research, the plastic deformation depth and the corresponding roughness Rzb (the maximum roughness height after burnishing) are obtained from the wedge-indentation relation, in which Fn is the force applied by each roller, fz is the wedge width, equal to the feed per cutting insert, Rzm is the milled roughness height, 2α is obtained from the geometry of the wedge, and σs is the material flow stress.
However, when surface rolling is carried out on a surface with milling condition #2, the roughness does not have the shape of a wedge, and the wedge relation (Equation (9)) is no longer valid for calculating Rzb. In this condition, the asperities deform as in simple compression of a trapezoid with height Rzm and widths fz (larger) and b (smaller). Therefore, assuming an elastic-work-hardening material, the stress-strain relationship and the corresponding deformation can be obtained, where b and Fw are the smaller width of the trapezoid and the force applied to each asperity, respectively; K denotes the Hollomon power-law coefficient and m is the strain hardening exponent of the material.
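A minimal sketch of the first-pass calculation above, under the stated assumptions of fully plastic line contact at mean pressure P = 3σs and the small-indentation geometry δ ≈ a²/(2R); splitting the roller load evenly over the asperities in contact is an additional assumption of this sketch.

```python
def first_pass_rz(Rz0, F, sigma_s, R_roller, L_roller, n_asperities):
    """Roughness height after the first rolling pass, Rz_b = Rz0 - delta,
    with delta from rigid-roller/asperity contact (illustrative
    assumptions: P = 3*sigma_s, delta ~ a**2 / (2*R_roller), and the
    roller load F split evenly over n_asperities)."""
    F_R = F / n_asperities                   # load per asperity [N]
    a = F_R / (3.0 * sigma_s * L_roller)     # contact half-width from F_R = P*a*L
    delta = a**2 / (2.0 * R_roller)          # flattening depth [m]
    return max(Rz0 - delta, 0.0)
```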
When the surface rolling process is carried out over multiple passes, it is assumed that a wedge shape is no longer appropriate for the SLM surface profile or for the milled surface profile under condition #1. Here, as explained above, the deformation of the roughness is based on the compression of a trapezoid. Accordingly, as described in Equation (13), the smaller width of the trapezoid under compression, and the corresponding roughness height at further pass numbers, can be obtained.
After calculating the modified roughness height induced by burnishing, the surface profile follows the pattern of SLM or milling with the modified Rz values; the arithmetic roughness Ra can then be calculated over the evaluation length λ, chosen according to the roughness cut-off distance.
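A minimal sketch of this last step, taking Ra as the mean absolute deviation of the profile from its mean line over the evaluation length λ:

```python
import numpy as np

def arithmetic_roughness(x, z):
    """Ra over the evaluation length lambda = x[-1] - x[0]:
    Ra = (1/lambda) * integral |z - z_mean| dx."""
    lam = x[-1] - x[0]
    z_mean = np.trapz(z, x) / lam            # mean line of the profile
    return np.trapz(np.abs(z - z_mean), x) / lam
```

Applied to the arc-train profile of the milling sketch above, this reproduces the familiar scaling of Ra with fz²/rε for finish milling.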
Experimental Work
In order to confirm the results derived from the analytical model, a series of milling and surface rolling experiments were carried out on 3D-printed samples made of stainless steel 316L. The material was selected because of its superior printability and its application in different industries [20]. The samples were selectively laser melted using an EOS 280 machine with a 1100 nm wavelength discontinuous Yb-fiber laser, following the optimized standard conditions with a laser power of 195 W, a scan speed of 1083 mm/s, 80 µm hatch spacing, and 20 µm layer thickness. The parameters were selected such that the volumetric energy density reaches 100 J/mm³. The physical and mechanical properties of the 3D-printed material are provided in Table 1. The 3D-printed samples were then subjected to face milling using a tool with three squared inserts with a nose radius of rε = 0.8 mm and an approach angle of κr = 15°. The samples were face milled using a depth of cut of 0.2 mm with a spindle speed of 800 RPM and a feed velocity of 200 mm/min. Then, the 3D-printed and machined samples were subjected to the surface rolling process with a tool comprising four rollers with a diameter of 4 mm and a length of 10 mm. The multi-roller tool was installed on a universal milling machine head with a maximum power of 15 hp and a spindle speed of 3000 rpm.
During surface rolling, the static forces were measured using a 3-component KISTLER 9257B force dynamometer. The penetration of the tool into the material continues until the exact target force is achieved, and then the rolling of the surface begins. Our preliminary experiments showed that the only surface rolling parameter with a significant effect on roughness change in multi-roller face surface rolling was the static force. Therefore, the surface rolling experiments were carried out under different forces, i.e., 750 and 1500 N, and different pass numbers, i.e., 1 and 3. Other parameters, namely spindle speed and linear transverse velocity, were kept constant at 800 RPM and 200 mm/min, respectively. Table 2 presents the experimental plan. The ranges of the process factors in Table 2 were selected on the basis of our simulation model, both to achieve a nanoscale surface finish and to identify the effect of the process factors more comprehensively. Moreover, our laboratory experience and previous research on the same material informed the experimental plan. Since the surface rolling spindle speed and linear velocity were known to have an insignificant effect on roughness alteration, they were set as high as possible to obtain the minimum processing time. The pass number and force, in turn, were first identified in our simulation model and then set on the machines and adjusted according to process limitations. Regarding the milling process factors, the spindle speed and feed rate were selected based on the limitations of the machine and cutting insert in terms of vibration and tool wear, and the depth of cut was set greater than the maximum roughness height remaining from the SLM process in order to generate a new surface roughness.
The as-built specimens, together with the milled and burnished samples, were subjected to surface roughness measurements using a Taylor Hobson contact-based scanning machine; the 2D and 3D surface topographies were recorded and the main roughness indices, i.e., Ra and Rz, were measured. During the measurement, the cut-off length for the as-built samples was set to 2.5 mm, as their roughness values were between 2 and 10 µm; for the milled and burnished samples it was set to 0.8 mm because of the significant reduction in roughness.
For each set of experiments shown in Table 2, three runs were carried out and the average roughness values are reported in this paper, with error bars based on the deviation from the average values. To compare the simulation results with the experimentally measured ones, the prediction error was defined as the absolute relative deviation between the measured and predicted values, expressed as a percentage.
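A one-line sketch of this error measure, assuming the usual absolute-relative-deviation form:

```python
def prediction_error_pct(measured, predicted):
    """Prediction error as the absolute relative deviation, in percent."""
    return abs(measured - predicted) / measured * 100.0

# e.g., averaging over the ten data sets of Table 2 (array names hypothetical):
# mean_err = sum(prediction_error_pct(m, p) for m, p in zip(Ra_meas, Ra_pred)) / len(Ra_meas)
```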
Results
To use the simulation framework as a practical tool, the values derived from the predictive model must be verified with confirmatory experiments. The results for the arithmetic roughness Ra and the maximum roughness height Rz are presented in Figure 5. According to the figure, the measured roughness values are compatible with those derived from the simulation framework. The average prediction error for Ra is about 10%, while for Rz it is around 7%. These results agree with previous work by this author, in which a different approach to modeling the surface roughness alteration was developed [21].
The variations in the errors between the measured and predicted values, shown in Figure 6, are themselves informative. The minimum prediction error is for data set #6, which corresponds to the sample built by SLM and post-processed by milling. As milling is a mechanical material-removal process, the surface can be generated from the contact geometry by duplicating the engagement region following the kinematics of motion. Moreover, as the milling was carried out in a finish-machining regime, factors such as chatter vibration do not have a significant impact on the surface roughness; hence, neglecting this effect does not produce a significant error in the simulation model.
After data set #6, the prediction errors of samples 7 to 10 are lower than the others. This can be attributed to the fact that the simulation model for these data sets describes the surface roughness after two contact-based mechanical post-treatments, whose noise factors are smaller than those of melting-based processes. The main source of error in this batch of data is presumed to be the neglect of surface elastic rebound, which leads to underestimation in the predictions.
The prediction error of data set #1, corresponding to the as-built material, ranks third among the data batches. As the process involves melting and solidification, several noise factors may introduce error into a predictive model; one such source is the neglect of inclusions formed by non-molten particles. However, this effect is not as significant as the main mechanism of roughness generation, i.e., cap formation over the melt pool surface.
The largest prediction errors correspond to the samples post-processed by surface rolling immediately after SLM. In this condition, the surface roughness of the as-built material is high enough that the heavy mechanical plastic deformation (which is not cutting) forms scratches and flakes on the surface that were not considered in the simulation model. Note that although increasing the pass number and static force decreases the surface roughness in both the simulation and the experiments, the prediction error increases. This is attributed to the fact that further mechanical work (resulting from the bigger force and pass number) increases the work hardening of the material; the material becomes more brittle, and an excessive amount of load causes fractures during plastic deformation. As this effect was neglected in our model, the error values in this batch are relatively higher than in the other data sets; they nevertheless remain below 10%, which is acceptable relative to values reported in the literature.
Another point interpreted from Figure 6 is that, regardless of whether the initial roughness comes from SLM or SLM + milling, the prediction error increases with increasing surface rolling force and pass number. As the force and pass number increase, the work hardening and springback effects that were not considered in our model become more pronounced; neglecting them results in further prediction error.
Discussion
Once the model has been verified against confirmatory experimental roughness values and the sources of error identified, it can be used to analyze how the process parameters influence the variations in surface roughness. In this discussion we identify how the chain can be optimized to achieve samples with the lowest surface roughness in the shortest period of time; the evolution of the surface roughness through the best chain is then presented through 3D surface topographies.
Figure 7 illustrates the interaction effect of the pass number and force when used for post-processing of an SLM sample (Figure 7a) and an SLM + milled sample (Figure 7b). According to Figure 7a, when the process is carried out on an SLM sample, increasing the pass number and static force decreases the surface roughness of the as-built material by up to 60%: the roughness of the as-built material drops from 6.38 µm to 2.45 µm when the force is 1500 N and the pass number is 3. At a static force of 750 N, increasing the pass number does not have a significant influence on the roughness; however, when the static force reaches 1500 N, the surface roughness keeps decreasing at further pass numbers. This behavior can be attributed to the work hardening rate of the material at 750 N, which prevents any further reduction in roughness at subsequent passes under the same applied load; for larger static loads, the surface roughness decreases with increasing pass number and reaches 1.8 µm in the best conditions.
On the other hand, according to Figure 7b, applying the milling process ahead of burnishing brings the surface roughness to a value of 0.4 µm, and a subsequent single-pass surface rolling with a static force of 750 N on the SLM + milled sample reaches 0.15 µm. Moreover, increasing the pass number and static force to 3 and 1500 N, respectively, reduces the roughness further, but the effect saturates and no further large changes in the roughness values are achieved.
The production time is an important factor for decision making in a chain; it can be calculated as the sum of the SLM time, the milling time, and the surface rolling time. As the SLM time is fixed in the chain (all the material was produced with the same SLM settings), the production time is mainly determined by the milling and surface rolling times. On the basis of the process kinematics, the milling and surface rolling times are calculated by dividing the length of the material to be processed by the linear velocity, i.e., t (min) = L (mm)/Vf (mm/min). As the length of the workpiece is 70 mm and the linear velocity for both milling and burnishing was set to 200 mm/min, the processing time for each individual operation is 21 s. For multi-pass surface rolling, this amount is multiplied by the number of surface rolling passes.
According to the above, the optimized path covering both criteria, product quality and production rate (minimum production time), is to post-process the SLM samples by single-pass milling and single-pass surface rolling with a static force of 750 N, which results in a 42 s post-processing time. According to the simulation results, by eliminating the milling process, the desired surface roughness cannot be reached even when increasing the surface rolling time and force (which consumes a great deal of energy). To illustrate this finding, Figure 8 presents the variation in surface roughness under larger values of force and pass number: only surface rolling with a force of 3000 N and 6 passes (equal to a 126 s post-processing time) brings the surface roughness below 0.2 µm. However, setting such a force in practice is infeasible and would result in excessive tool wear and breakage of the rollers and tooling system during surface rolling. Burnishing passes can, in principle, be repeated as many times as desired, as reported in the literature [21,22]; however, applying this amount of force over multiple burnishing passes may cause tool failure due to high friction, wear, and breakage. This also demonstrates how a chain can be designed and optimized at the design stage, which is an advantage of developing simulation models.
Therefore, it can be inferred that applying a single milling process in the chain, as a middle operation between SLM and surface rolling, reduces the production time by six times and the required force by three times to achieve the same surface roughness. Accordingly, although an operation is added to the chain, it results in a significant reduction in process time and in the corresponding energy consumption, which is a function of time and force.
Figure 8 represents the evolution of the 3D surface topography for the optimum chain, initiated by the SLM process and then post-processed by milling and burnishing. The figure includes both the simulated and measured surface topographies, confirming that the developed simulation framework can be used as a practical tool for optimizing a chain comprising an AM process and subsequent mechanical post-treatments.
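To make the chain-time bookkeeping above concrete, a minimal sketch (t = L/Vf per pass, with the workpiece length and feed used in this work):

```python
def pass_time_s(length_mm=70.0, feed_mm_per_min=200.0):
    """Time of one milling or rolling pass, t = L / V_f, in seconds."""
    return length_mm / feed_mm_per_min * 60.0   # 21 s for the values used here

t_with_milling = pass_time_s() + pass_time_s()  # 1 milling + 1 rolling pass = 42 s
t_rolling_only = 6 * pass_time_s()              # 6 rolling passes = 126 s
```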
Conclusions
An analytical simulation model has been presented for the alteration in surface roughness generated by selective laser melting and post-processed by milling and surface rolling. The surface is generated by the formation of caps over the melt pool in the SLM process, duplicated following the laser head kinematics; the generated surface roughness is then modified by milling, which generates a new roughness profile, and by surface rolling, which flattens the roughness height. The model was verified against the surface roughness values of 10 samples built and post-processed under different experimental conditions. The goal of this research was to show how planning an optimal chain at the design stage can significantly improve product quality and minimize production time. The obtained results show that:
• The results derived from the simulation framework were compatible with the experimental values; the average prediction error was 10.1% for the arithmetic roughness and 7.3% for the maximum distance between roughness peaks and valleys.
• The simulated surface roughness modeled in the chain SLM + milling + surface rolling has a lower prediction error than the chain SLM + burnishing, owing to the presence of the finish milling process.
• Eliminating milling from the chain significantly increases the production time, by three times, and requires larger forces and more energy to obtain the same roughness. Therefore, eliminating a process from the chain does not guarantee minimizing the production time.
Figure 1. Schematic diagram of (a) melt pool in melting stage, (b) formed cap over the melt pool surface after solidification, (c) roughness generation through duplication of caps formed on the melt pool surface following the 3D-printing pattern.
Figure 2. Schematic diagram showing modification of roughness of SLM surface after milling.
Figure 3. (a) A sketch of engagement of cutting insert and workpiece in finished milling process, (b) surface generation profile when the feed rate is less than the tool-workpiece engagement width, (c) surface generation profile when the feed rate is greater than the tool-workpiece engagement width.
Figure 5. Comparison between the measured and predicted values of roughness for the data sets of Table 2: (a) Ra, (b) Rz.
Figure 6. Variation of prediction errors for Ra of the data sets.
Figure 7. Variation of roughness under different values of rolling force and pass number for (a) SLMed samples, (b) SLMed + Milled samples.
Figure 8. Alteration of simulated and measured optimum 3D surface topography in optimum chain design: (a) SLM sample, (b) milled sample, (c) burnished sample.
Increased toll-like receptors and p53 levels regulate apoptosis and angiogenesis in non-muscle invasive bladder cancer: mechanism of action of P-MAPA biological response modifier
New modalities for treating patients with non-muscle invasive bladder cancer (NMIBC) for whom BCG (Bacillus Calmette-Guerin) has failed or is contraindicated are increasing due to the development of new drugs. Although agents like mitomycin C and BCG are routinely used, there is a need for more potent and/or less toxic agents. In this scenario, a new perspective is represented by P-MAPA (Protein Aggregate Magnesium-Ammonium Phospholinoleate-Palmitoleate Anhydride), developed by Farmabrasilis (a non-profit research network). This study detailed and characterized the mechanisms of action of P-MAPA, based on the activation of mediators of the Toll-like Receptor (TLR) 2 and 4 signaling pathways and of p53, in regulating angiogenesis and apoptosis in an animal model of NMIBC, and compared these mechanisms with those of BCG treatment. Our results demonstrated that the activation of the immune system by BCG (MyD88-dependent pathway) resulted in increased inflammatory cytokines. In contrast, P-MAPA intravesical immunotherapy led to a distinct activation of the TLR2- and TLR4-mediated innate immune system, resulting in increased interferon signaling (TRIF-dependent pathway), which was more effective in the treatment of NMIBC. The interferon signaling pathway activation induced by P-MAPA led to increased iNOS protein levels, resulting in apoptosis and histopathological recovery. Additionally, P-MAPA immunotherapy increased wild-type p53 protein levels, which were fundamental to NO-induced apoptosis and to the up-regulation of BAX. Furthermore, the induction of the interferon signaling pathway and the increased p53 protein levels promoted by P-MAPA led to important antitumor effects, not only suppressing abnormal cell proliferation but also preventing the continuous expansion of the tumor mass through suppression of angiogenesis, characterized by decreased VEGF and increased endostatin protein levels. Thus, P-MAPA immunotherapy could be considered an important therapeutic strategy for NMIBC and opens a new perspective for the treatment of patients who are refractory or resistant to BCG intravesical therapy.
Background
Bladder cancer (BC) is the tumor with the fourth highest incidence in men and the ninth highest in women, showing high morbidity and mortality rates [1,2]. More than 70 % of BC cases are superficial (non-muscle invasive bladder cancer), classified into 3 stages: pTis (flat carcinoma in situ), pTa (non-invasive papillary carcinoma) and pT1 (tumor invading the mucosa or submucosa of the bladder wall) [3,4]. Despite the comparatively favourable prognosis associated with non-muscle invasive bladder tumours, almost 50 % of patients will experience recurrence of their disease within 4 years of the initial diagnosis, and 11 % will progress to muscle invasive disease [3]. The primary treatment for high-grade NMIBC is based on surgery by transurethral resection of the bladder tumor (TURBT), followed by intravesical immunotherapy with Bacillus Calmette-Guerin (BCG) [5]. The response induced by BCG reflects the induction of a T-helper type-1 (Th1) response that prevents recurrence and reduces tumor progression [5-7]. However, BCG therapy shows several undesirable effects, observed in up to 90 % of patients, such as fever, chills, fatigue, irritative symptoms and haematuria, and even major complications such as sepsis and death [8,9].
Based on this background, compounds activating the immune system, including vaccines, biological response modifiers and tumor environment modulators, are considered potential candidates for the development of new NMIBC treatments aiming at a greater therapeutic effect combined with lower toxicity. Toll-like receptor (TLR) agonist compounds may represent a potential antitumor therapeutic approach, as these receptors are implicated in the pathogenesis of some tumors, including NMIBC [10-12]. TLRs play key roles in innate immunity and their activation can trigger two different responses in tumors: they stimulate the immune system to attack tumor cells and/or eliminate the machinery that inhibits the immune system [13-15]. TLR signaling consists of two pathways: the MyD88-dependent (canonical) and the TRIF-dependent (non-canonical) pathways [13-15]. Except for TLR3, the MyD88-dependent pathway activates NF-kB and MAPK, resulting in the release of inflammatory cytokines such as Tumor Necrosis Factor α (TNF-α) and interleukin-6 (IL-6) [13,14]. Conversely, the TRIF-dependent pathway activates Interferon Regulatory Factor 3 (IRF-3) for the production of interferon [13-15]. TLR4 is the only receptor that uses all four adapter molecules (MyD88, TRIF, TRAM and TIRAP) in a signaling cascade [13-15].
Most TLR genes respond to p53 via canonical as well as non-canonical promoter binding sites [16]. The p53 protein is responsible for cell cycle regulation and acts as a tumor suppressor [16,17]. Studies of the response element promoter sequences targeted by p53 suggest a general role for p53 as a regulator of the DNA damage response and as a controller of TLR gene expression [16]. Furthermore, several studies suggest that antiangiogenic therapy is sensitive to p53 status in tumors, indicating an important role of p53 in the regulation of angiogenesis [18,19]. Angiogenesis plays a fundamental role in the initiation and progression of different tumors [20]. Vascular endothelial growth factor (VEGF) stimulates all aspects of endothelial function, such as proliferation, migration, production of nitric oxide (NO) and endothelial cell layer permeability [18,20-22]. Angiogenesis inhibitors have been developed to target endothelial cells and block the tumor blood supply [18,23]. Endostatin is a potent endogenous inhibitor of angiogenesis and induces apoptosis in both endothelial cells and tumor cells [18,19,24].
Immunotherapy using compounds that act as TLR agonists could be a valuable approach for cancer treatment, whether used alone or in combination with existing therapies. Protein aggregate magnesium-ammonium phospholinoleate-palmitoleate anhydride (P-MAPA), a biopolymer isolated in the 1970s [25] and characterized in the 1990s [26-28], currently under development by Farmabrasilis (a nonprofit research network) [29], has emerged as a potential candidate for intravesical therapy for NMIBC. P-MAPA is a biological response modifier obtained by fermentation from Aspergillus oryzae that demonstrates an important antitumor effect in several animal models of cancer, including NMIBC [11,12,26-28]. Recent studies by our research group demonstrated that P-MAPA modulates TLRs 2 and 4 in both infectious diseases and cancer [11,12,30]. The research and development strategy for P-MAPA is based on the open source model, with the researchers linked through a virtual research network [29].
A complementary strategy adopted by Farmabrasilis to boost the production of data and accelerate the development of the compound as a drug candidate for cancer, including NMIBC, involves the selection of compounds already in clinical use and, when available, compounds equally able to act together with P-MAPA, such as BCG, to be used in parallel or in conjunction in in vivo experiments. The use of immunomodulatory compounds already known to act against NMIBC, with partially elucidated mechanisms of action, such as BCG, in comparative studies with P-MAPA using the same animal model may facilitate the visualization of commonalities, as well as differences, in the mechanisms of action. Of note, these data may also be relevant for understanding the mode of action of P-MAPA, with a view to elaborating new strategies for the future use of the compound in conditions that emerge during the treatment of NMIBC, such as BCG-refractory and BCG-relapsing disease. Thus, this study presents the first comprehensive view of the mechanisms of a potential therapeutic agent for NMIBC, the P-MAPA biological response modifier, based on the activation of mediators of the TLR2, TLR4 and p53 signaling pathways in regulating the angiogenesis and apoptosis processes.
NMIBC induction and treatment
Forty female Fischer 344 rats, all 7 weeks old, were obtained from the Multidisciplinary Center for Biological Investigation (CEMIB) at the University of Campinas (UNICAMP). The experimental protocol strictly followed the ethical principles in animal research (CEUA/IB/UNICAMP, protocol number 2684-1). Before each treatment, animals were anesthetized with 10 % ketamine (60 mg/kg, i.m.; Ceva Animal Health Ltda, São Paulo, Brazil) and 2 % xylazine (5 mg/kg, i.m.; Ceva Animal Health Ltda, São Paulo, Brazil), and intravesical catheterisation was performed via a 22-gauge angiocatheter. The animals remained anesthetized for approximately 45 min after catheterization to prevent spontaneous micturition. Ten control animals (CONTROL group) received 0.30 ml of 0.9 % physiological saline every other week for 14 weeks. Thirty animals received 1.5 mg/kg of N-methyl-N-nitrosourea (MNU) dissolved in 0.30 mL of sodium citrate (1 M, pH 6.0), administered intravesically every other week for 8 weeks [11,12]. Two weeks after the last dose of MNU, all animals underwent retrograde cystography and ultrasonography to evaluate the occurrence of tumors. Both negative and positive contrast cystography enabled the bladder wall, mucosal margin and lumen to be visualised. For positive or negative contrast cystography, animals were submitted to intravesical catheterisation via a 22-gauge angiocatheter to drain all urine from the bladder; 0.3 mL of positive contrast medium or 0.3 mL of air (negative contrast) was instilled into the bladder until it became slightly turgid (judged by palpation of the bladder through the abdominal wall), and lateral and ventrodorsal radiographs were taken. Ultrasound examinations were performed using a portable, software-controlled ultrasound system with a 10-5 MHz, 38-mm linear array transducer. The animals from the CONTROL group showed no mass infiltrating the bladder walls, no vesicoureteral reflux and no bladder filling defect (Fig. 1a, b, c and d). Negative contrast cystography and ultrasonography of the urinary bladder from the MNU group showed a mass (average tumor size 3.5 × 5.1 mm) infiltrating the ventral, dorsal and cranial bladder walls (Fig. 1e, f and h).
Positive contrast cystography demonstrated several bladder filling defects, with unilateral vesicoureteral reflux (Fig. 1g) in 80 % of the animals and bilateral reflux in 10 %. The MNU-treated animals were further divided into three groups (ten animals per group): the MNU group received 0.30 ml of 0.9 % physiological saline; the MNU-BCG group received 10^6 CFU (40 mg) of BCG (Fundação Ataulpho de Paiva, Rio de Janeiro, RJ, Brazil); and the MNU-P-MAPA group received a 5 mg/kg dose of P-MAPA (Farmabrasilis, Campinas, SP, Brazil). All animals were treated every other week for 6 weeks. After treatment, the animals were euthanized and their urinary bladders were collected and processed for histopathological, immunological and Western blotting analyses.
Histopathological analysis
Samples of urinary bladders (n = 5 per group) were fixed in Bouin solution for 12 h. After fixation, the fragments were washed in 70 % ethanol and dehydrated in an ascending ethanol series. Subsequently, the fragments were cleared in xylene for 2 h and embedded in plastic polymer (Paraplast Plus, St. Louis, MO, USA). The samples were then cut at 5 μm thickness on a Slee CUT5062 RM 2165 rotary microtome (Slee Mainz, Mainz, Germany), stained with hematoxylin-eosin and photographed with a Leica DM2500 photomicroscope (Leica, Munich, Germany). A senior uropathologist analyzed the urinary bladder lesions according to the World Health Organization/International Society of Urological Pathology classification [4].
The immunohistochemical reactions were measured in five animals per experimental group, using the same samples as for the histopathological analysis. Ten microscopic fields per animal were measured with a ×40 objective lens, corresponding to a total area of 92,500.8 μm². TLR2, TLR4, MyD88, IRF-3, IKK-α, BAX, NF-kB, iNOS, TNF-α, TRIF, IFN-γ and IL-6 immunostaining was scored semiquantitatively by recording the percentage of labeled cells (PLC), considering urothelial cells only. At least 1,000 urothelial cells per group (200 urothelial cells per animal) were counted with the LAS V 3.7 software (Leica, Munich, Germany) while the examiner classified them as positive or negative. The PLC values were categorized into four scores as follows: 0, no immunoreactivity; 1, 1-35 % positive urothelial cells; 2, 36-70 % positive urothelial cells; 3, > 70 % positive urothelial cells.
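The scoring rule above can be written compactly; a minimal sketch (the function name and the percentage input convention are illustrative):

```python
def plc_score(percent_positive: float) -> int:
    """Map the percentage of labeled urothelial cells (PLC) to the
    four-level score used in the text."""
    if percent_positive == 0:
        return 0
    if percent_positive <= 35:
        return 1
    if percent_positive <= 70:
        return 2
    return 3
```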
The LAS V 3.7 software (Leica, Munich, Germany) was also used to quantify the intensity of the brownish immunostaining. For each antibody, the same photomicrographs used for determining the PLC were considered. Ten randomized labeled nuclear and/or cytoplasmic regions from different urothelial cells were delimited with same-sized squares (software LAS V 3.7). The average optical density (OD) of these areas was calculated automatically and represents the average red, green and blue color composition (RGB) per area of nucleus and/or cytoplasm analyzed, expressed in optical units per square micrometer (ou/μm²). The same procedure was applied to obtain the background optical density (BOD) from an area without tissue or vascular space in each photomicrograph; a single area was sufficient, since the background was constant within each photomicrograph. Absolute white corresponds to the maximum optical density (MaxOD) and is composed of the totality of red, green and blue, whereas black is the absence of these colors. Therefore, the optical density values calculated by the software make up a decreasing scale in which high values correspond to visually light colors. From these values, the digital immunostaining intensity (ITIdig) was calculated for each antibody; ITIdig values make up an increasing scale, equalized by the BOD and proportional to the optical density of absolute white. The intensity of reactivity was recorded as weak (1+, ITIdig average = 49.3 ou/μm²), moderate (2+, ITIdig average = 71.3 ou/μm²) or intense (3+, ITIdig average = 95.1 ou/μm²).
Determination of the proliferative index
Samples of the urinary bladders were randomly collected from 5 animals in each group, the same used for Ki-67 immunodetection and histopathology, and used for determination of the proliferative index. Ten fields were taken at random and measured per animal, resulting in 50 fields per group with a ×40 objective lens, and the number of Ki-67-positive cells was expressed as a percentage of the total cells counted, including luminal and basal epithelial cells. Sections were lightly counterstained with methyl green.
Detection of apoptosis and determination of the apoptotic index
Samples of the urinary bladders from five animals in each group, the same used for immunodetection and histopathology, were processed for DNA fragmentation (TUNEL) by means of Terminal Deoxynucleotidyl Transferase (TdT), using the FragEL™ DNA kit (Calbiochem, La Jolla, CA, USA). The apoptotic nuclei were identified using a diaminobenzidine chromogen mixture (FragEL™ DNA kit). Ten microscopic fields were randomly taken and analyzed per sample, resulting in 50 fields per group, using a Leica DM2500 photomicroscope (Leica, Munich, Germany) with a ×40 objective. Sections were lightly counterstained with methyl green. The apoptotic index was determined by dividing the number of apoptotic nuclei by the total number of nuclei found in the microscope field.
Statistical analyses
Western blotting data, proliferative and apoptotic indexes and the proliferation/apoptosis ratio (P/A) were statistically compared among the groups by analysis of variance followed by Tukey's test, with the level of significance set at 1 %. Results were expressed as the mean ± standard deviation. Histopathological frequencies were compared using the test of proportions; for these analyses, a type-I error of 5 % was considered statistically significant.
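A minimal sketch of the index calculations described above (function names are illustrative; both indexes are placed on the same percentage scale here so that the P/A ratio is scale-free):

```python
def proliferative_index(ki67_positive: int, total_cells: int) -> float:
    """Percentage of Ki-67-positive cells among all counted cells."""
    return 100.0 * ki67_positive / total_cells

def apoptotic_index(tunel_positive: int, total_nuclei: int) -> float:
    """Percentage of TUNEL-positive (apoptotic) nuclei in a field."""
    return 100.0 * tunel_positive / total_nuclei

def pa_ratio(prolif: float, apopt: float) -> float:
    """Proliferation/apoptosis (P/A) ratio; values below 1 indicate a
    predominance of apoptosis, as reported for the MNU-P-MAPA group."""
    return prolif / apopt
```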
Conclusion
Taking into account the present data, the mechanism of action of P-MAPA was clearly distinct from that of BCG. These findings are relevant to the treatment of patients with NMIBC at high risk of progression who are refractory or resistant to intravesical therapy with BCG.
P-MAPA reverses the histopathological changes induced by MNU
The urinary tract from the CONTROL group showed no microscopic changes (Fig. 2a, b and c; Additional file 1: Table S1). The normal urothelium was composed of three layers: a basal cell layer, an intermediate cell layer, and a superficial layer composed of umbrella cells (Fig. 2a, b, c). In contrast, the urinary bladders from the MNU group showed histopathological changes such as tumor invading the mucosa or submucosa of the bladder wall (pT1) (Fig. 2d, e and f), non-invasive papillary carcinoma (pTa) and flat carcinoma in situ (pTis) in 40, 40 and 20 % of the animals, respectively (Additional file 1: Table S1). Keratinizing squamous metaplasia was found in 60 % of the animals (Fig. 2d and e). The most frequent histopathological changes in the urinary bladder from the MNU-BCG group were pTa (Fig. 2g, h and i; Additional file 1: Table S1), low-grade intraurothelial neoplasia and papillary hyperplasia, in 40, 40 and 20 % of the animals, respectively (Additional file 1: Table S1). The microscopic features of the urinary bladders from the MNU-P-MAPA group were similar to those found in the CONTROL group (Fig. 2j, k and l). Normal urothelium was found in 60 % of the animals (Fig. 2j and k; Additional file 1: Table S1). The histopathological changes in the MNU-P-MAPA group were flat hyperplasia (20 %) and papillary hyperplasia (20 %) (Fig. 2l; Additional file 1: Table S1). Urinary calculi and macroscopic haematuria were observed only in the MNU and MNU-BCG groups; they were absent in the MNU-P-MAPA group.
BCG activates the MyD88-dependent pathway
The highest TLR2 protein levels were found in the MNU-P-MAPA group as compared to the CONTROL, MNU-BCG and MNU groups, with intense immunoreactivities in the urothelium (Figs. 3a, g, m, s and 4; Additional file 2: Table S2). The highest MyD88 protein levels were found in the MNU-BCG and MNU-P-MAPA groups as compared to the other experimental groups; these groups showed intense immunoreactivities in the urothelium (Figs. 3b, h, n, t and 4; Additional file 2: Table S2). However, MyD88 levels were significantly higher in the CONTROL group than in the MNU group; these groups exhibited moderate and weak immunoreactivities, respectively (Figs. 3b, h, n, t and 4; Additional file 2: Table S2). IKK-α protein levels were significantly higher in the MNU-BCG group in relation to the MNU, MNU-P-MAPA and CONTROL groups, which showed intense, moderate, weak and weak immunoreactivities in the urothelium, respectively (Figs. 3c, i, o, u and 4; Additional file 2: Table S2). The highest NF-kB protein levels were found in the MNU group as compared to the MNU-BCG, CONTROL and MNU-P-MAPA groups (Fig. 4). The NF-kB immunoreactivities were weak in the cytoplasm of the urothelial cells from the CONTROL group, intense in both the nucleus and cytoplasm of the urothelial cells from the MNU group, moderate in both the nucleus and cytoplasm of the urothelial cells from the MNU-BCG group, and weak in the cytoplasm of the urothelial cells from the MNU-P-MAPA group (Figs. 3d, j, p and v; Additional file 2: Table S2). TNF-α protein levels were significantly higher in the MNU-BCG group than in all other experimental groups, with intense immunoreactivities in the urothelium (Figs. 3e, k, q, w and 4; Additional file 2: Table S2). However, these levels were also significantly higher in the MNU-P-MAPA and MNU groups in relation to the CONTROL group, which showed weak, intense and weak immunoreactivities, respectively (Figs. 3e, k, q, w and 4; Additional file 2: Table S2). IL-6 protein levels were significantly higher in the MNU-BCG and MNU groups in relation to the MNU-P-MAPA and CONTROL groups; these groups displayed intense, intense, weak and weak immunoreactivities in the urothelium, respectively (Figs. 3f, l, r, x and 4; Additional file 2: Table S2). TLR4 levels, in turn, were significantly higher in the CONTROL and MNU-BCG groups than in the MNU group; these three groups showed moderate, intense and weak immunoreactivities, respectively (Figs. 5a, g, m, s and 6; Additional file 2: Table S2).
P-MAPA intravesical immunotherapy activates the interferon signaling pathway and increases iNOS levels

TRIF protein levels were significantly higher in the MNU-P-MAPA group in relation to the other experimental groups, showing intense immunoreactivities in the urothelium (Figs. 5b, h, n, t and 6; Additional file 2: Table S2). However, TRIF levels were higher in the MNU-BCG and MNU groups than in the CONTROL group; the three latter groups exhibited moderate, weak and weak immunoreactivities, respectively (Figs. 5b, h, n, t and 6; Additional file 2: Table S2).

Fig. 3 (caption, continued): TNF-α immunoreactivities (asterisks) were weak in the urothelium from the CONTROL (e) group, intense in the MNU (k) and MNU-BCG (q) groups and weak in the MNU-P-MAPA (w) group. IL-6 immunoreactivities (asterisks) were weak in the urothelium from the CONTROL (f) group, intense in the MNU (l) and MNU-BCG (r) groups and weak in the MNU-P-MAPA (x) group. a-x: Ur, urothelium.

Fig. 4 (caption): Representative Western blotting and semiquantitative determination of TLR2, MyD88, IKK-α, NF-kB, TNF-α and IL-6 protein levels. Samples of urinary bladder were pooled from five animals per group for each repetition (duplicate) and used for semi-quantitative densitometry (IOD, Integrated Optical Density) analysis of the TLR2, MyD88, IKK-α, NF-kB, TNF-α and IL-6 levels following normalization to β-actin. All data were expressed as the mean ± standard deviation. Different lowercase letters (a, b, c, d) indicate significant differences (p < 0.01) between the groups after Tukey's test.

Fig. 6 (caption): Representative Western blotting and semiquantitative determination of TLR4, TRIF, IRF-3, IFN-γ, iNOS and p53 protein levels. Samples of urinary bladder were pooled from five animals per group for each repetition (duplicate) and used for semi-quantitative densitometry (IOD) analysis of the TLR4, TRIF, IRF-3, IFN-γ, iNOS and p53 levels following normalization to β-actin. All data were expressed as the mean ± standard deviation. Different lowercase letters (a, b, c, d) indicate significant differences (p < 0.01) between the groups after Tukey's test.

Protein levels for IRF-3 were significantly higher in the MNU-BCG and MNU-P-MAPA groups in relation to the CONTROL and MNU groups. These groups showed moderate, intense, weak and weak immunoreactivities in the urothelium, respectively (Figs. 5c, i, o, u and 6; Additional file 2: Table S2). The highest IFN-γ protein levels were found in the MNU-P-MAPA group compared to the MNU-BCG, MNU and CONTROL groups. These groups exhibited intense, moderate, weak and weak immunoreactivities in the urothelium, respectively (Figs. 5d, j, p, v and 6; Additional file 2: Table S2). iNOS protein levels were significantly higher in the MNU-P-MAPA and MNU-BCG groups than in the MNU and CONTROL groups. These groups showed intense, moderate, weak and weak immunoreactivities in the urothelium, respectively (Figs. 5e, k, q, w and 6; Additional file 2: Table S2). NLRC5 protein levels were significantly higher in the MNU-P-MAPA group in relation to the other experimental groups (Fig. 7). Furthermore, these levels were significantly higher in the CONTROL and MNU-BCG groups than in the MNU group (Fig. 7).

P-MAPA immunotherapy increases wild-type p53 protein levels, decreases proliferation and increases apoptosis

p53 protein levels were significantly higher in the MNU-P-MAPA and CONTROL groups in relation to the other experimental groups (Fig. 6). Furthermore, these levels were significantly higher in the MNU-BCG group in comparison to the MNU group (Fig. 6).
The apoptotic index revealed different kinetics of cell death for each treatment (Additional file 3: Figures S1a, S1c, S1e, S1g; Fig. 8). This index was significantly higher in the animals from the MNU-P-MAPA group in relation to the other experimental groups. The MNU and MNU-BCG groups, in turn, showed significantly higher average values of the apoptotic index than the CONTROL group (Additional file 3: Figures S1a, S1c, S1e, S1g; Fig. 8). BAX protein levels were significantly higher in the MNU-P-MAPA group compared to the MNU, MNU-BCG and CONTROL groups. These groups exhibited intense, moderate, moderate and weak immunoreactivities in the urothelium, respectively (Figs. 5f, l, r, x and 7; Additional file 2: Table S2). Proliferative activity was significantly increased in animals from the MNU group in relation to the other experimental groups (Additional file 3: Figures S1b, S1d, S1f, S1h; Fig. 8). The MNU-P-MAPA group displayed significantly lower average values of the proliferative index than the MNU-BCG group, although these values were significantly higher than those found in the CONTROL group (Additional file 3: Figures S1b, S1d, S1f, S1h; Fig. 8). Furthermore, the proliferation/apoptosis ratio (P/A) was significantly higher in the MNU and MNU-BCG groups when compared to the CONTROL group (Fig. 9). However, the P/A ratio in the MNU-P-MAPA group was significantly lower in relation to the other experimental groups, indicating predominance of the apoptotic process (Fig. 9). P-MAPA intravesical immunotherapy suppresses angiogenesis VEGF protein levels were significantly higher in the MNU group in relation to the other experimental groups (Fig. 7). Furthermore, these levels were significantly higher in the MNU-BCG group compared to the MNU-P-MAPA and CONTROL groups (Fig. 7). Endostatin protein levels were significantly higher in the MNU-P-MAPA and CONTROL groups when compared to the MNU-BCG and MNU groups (Fig. 7). Discussion Although the use of TURBT with adjuvant chemo- and immunotherapy represents a clear advance in the treatment of NMIBC, the management of this disease, mainly for high-grade tumors, remains a challenge because of the high rates of recurrence and progression to muscle-invasive and/or metastatic stages. Following episodes of high-grade NMIBC recurrence after BCG therapy, several conventional chemotherapy agents have been used, including gemcitabine, mitomycin, gemcitabine plus mitomycin, docetaxel and valrubicin. In addition, immunotherapy (interferon-alpha, or interferon-alpha plus BCG) has also been used [31]. Mycobacterium phlei cell wall-nucleic acid complex (MCNA) has been proposed for intravesical treatment of NMIBC at high risk of recurrence or progression in patients who failed prior BCG immunotherapy (e.g., patients who are BCG-refractory or BCG-relapsing) and who are not candidates for, or refuse, cystectomy [32]. However, none of these drugs has shown superiority over BCG, and they remain considered investigational [14]. In the specific case of BCG-refractory CIS, valrubicin, a semisynthetic analog of doxorubicin and the only FDA-approved drug for the treatment of this condition, is effective in fewer than 10 % of treated patients at 2 years, and in none with coincident stage T1 disease [33]. The surgical option for such cases, partial or total cystectomy, is often associated with significant morbidity and mortality. Furthermore, for some patients, cystectomy is not an available option due to the presence of concomitant comorbidities.
Consequently, novel therapies are urgently needed for the treatment of high-grade NMIBC, to prevent disease progression, to allow bladder preservation and ensure quality of life for patients and, finally, to provide an option for those who are ineligible for cystectomy. The P-MAPA biological response modifier, which shows novel therapeutic properties compared to standard treatments, appears to be a valuable candidate drug for the treatment of NMIBC. In our previous studies we have shown several beneficial properties of P-MAPA [11,12]. Here, using the MNU animal model for the study of NMIBC, we clearly show that P-MAPA treatment enables better histopathological recovery from the cancer state than no treatment (MNU group) or BCG treatment (MNU-BCG group). Agonists of TLRs are the subject of intensive research and development for the treatment of cancer, including bladder cancer [11,12,33]. TLRs, which are expressed in immune as well as in some epithelial cells, play an important role in activating both innate and adaptive immune responses [33,34]. Bladder tumors, especially non-muscle-invasive ones, show decreased TLR expression [35,36]. TLR-mediated BCG immunotherapy for NMIBCs suggests that alternative TLR-based immunotherapies might also be successful strategies for this type of cancer.

Fig. 7 (caption): Representative Western blotting and semiquantitative determination of VEGF, endostatin, BAX and NLRC5 protein levels. Samples of urinary bladder were pooled from five animals per group for each repetition (duplicate) and used for semi-quantitative densitometry (IOD, Integrated Optical Density) analysis of the VEGF, endostatin, BAX and NLRC5 levels following normalization to β-actin. All data were expressed as the mean ± standard deviation. Different lowercase letters (a, b, c, d) indicate significant differences (p < 0.01) between the groups after Tukey's test.

The BCG antitumor effects seem to be related to local immunological mechanisms, since after BCG instillation a transient increase in several cytokines and the presence of activated immunocompetent leukocytes were found in the urine within 24 h [37]. Local lymphocytic infiltration and cytokine production were found in the bladder wall of most patients receiving intravesical BCG, and it was demonstrated that this local response is highly complex [37-39]. TNF-related apoptosis-inducing ligand (TRAIL) is released from polymorphonuclear neutrophils (PMNs) via stimulation of TLR2 by BCG [33]. Secretion of interleukin-8, a strong chemoattractant for monocytes and T cells, is also induced from PMNs by BCG infection via MyD88-dependent TLR2 and TLR4 activation [33,40], whereas BCG activation of TLR2 and TLR4 induces TNF-α secretion from dendritic cells (DCs) [33,41,42]. The TNF signaling pathway may induce carcinogenesis by up-regulating NF-kB, leading to the up-regulation of other proteins that cause cell proliferation and morphogenesis [40]. In TNF-knockout mice, the development of skin carcinomas induced by the chemical carcinogen DMBA (7,12-dimethylbenz[a]anthracene) and the tumor promoter TPA (12-O-tetradecanoylphorbol-13-acetate) was decreased compared to wild-type mice [43,44]. Using pentoxifylline, which was shown to inhibit TNF and IL-1α gene expression, the growth of DMBA/TPA-induced papillomas was inhibited [45]. These results suggest that a chemical tumor promoter can induce the secretion of TNF-α from different cell types and that TNF can act as an endogenous tumor promoter in vivo [46].
TNF-α was identified as the major host-produced factor that enhances the growth of metastases in a lung cancer animal model, in part through activation of NF-kB in the tumor cells [47]. We have demonstrated here that BCG increased TLR2 and TLR4 protein levels in the NMIBC model, which corroborates our previous studies [11,12]. This induces the MyD88-dependent pathway, as shown by increased MyD88, IKK-α and NF-kB protein levels. The induction of the MyD88-dependent (canonical) pathway increases the protein levels of inflammatory cytokines (IL-6 and TNF-α). Accordingly, activation of the immune system by BCG treatment via the MyD88-dependent pathway (Additional file 4: Figure S2a) was essential for histopathological recovery from the cancer state. TLR4 activation of host macrophages results in the production of several different inflammatory cytokines that influence tumor growth. However, TLR4 signaling also induces cytokines (IFN) that have antitumor effects through induction of TRAIL, a potent inducer of tumor cell death [47]. Shankaran et al. [48] showed the tumor-suppressor function of the immune system to be critically dependent on the actions of IFN-γ, which act, at least in part, by regulating tumor-cell immunogenicity. IFN-γ stimulates several antiproliferative and tumoricidal biochemical pathways in macrophages and in tumor cell lines, has a profound impact on solid tumor growth and metastasis, and seemingly plays an early role in protection from metastasis [49-55]. IFN-γ produced by IL-12-activated tumor-infiltrating CD8+ T cells directly induced apoptosis of mouse hepatocellular carcinoma cells [52,53]. The NLRs, a class of intracellular receptors that respond to pathogens or cellular stress, have recently been identified as critical regulators of immune responses [56,57]. While NLRC5 is constitutively and widely expressed, its levels can be dramatically induced by interferons during pathogen infections. Both in vitro and in vivo studies have demonstrated that NLRC5 is a specific and master regulator of major histocompatibility complex (MHC) class I genes as well as of related genes involved in MHC class I antigen presentation [56,57]. In this study, we demonstrated that TLR2 and TLR4 protein levels were significantly higher in the P-MAPA group in relation to the BCG group in the NMIBC animal model. Also, P-MAPA treatment led to increased TRIF and IRF-3 protein levels, indicating activation of the MyD88-independent pathway (Additional file 4: Figure S2b). The induction of the MyD88-independent pathway (non-canonical or TRIF-dependent pathway) by P-MAPA led to increased IFN-γ and iNOS (type 1 macrophages, M1) protein levels. In contrast to BCG treatment, P-MAPA immunotherapy led to a distinct, TLR2- and TLR4-mediated activation of the innate immune system, resulting in increased interferon signaling (Additional file 4: Figure S2b), which was more effective in the treatment of NMIBC. Also, as a result of the induction of the interferon signaling pathway (IFN-γ and IRF-3) by P-MAPA, the proliferation/apoptosis ratio was significantly lower in animals treated with P-MAPA, indicating predominance of the apoptotic process. Accordingly, P-MAPA immunotherapy increased NLRC5 protein levels, which were fundamental to the induction of the interferon signaling pathway (Additional file 4: Figure S2b). Thus, activation of the interferon signaling pathway was more effective in the induction of immunogenic cell death than the inflammatory cytokine signaling pathway.
The IFN-γ produced by tumor-infiltrating T cells might play two distinct roles in antitumor activity: activation of antitumor T cells and direct tumoricidal activity through the generation of inducible nitric oxide synthase (iNOS) [48,58]. NO is considered one of the main factors responsible for the cytotoxic activity of macrophages against tumor cells [50,59]. Previous data showing increased NO concentrations in the urinary bladder of patients treated with BCG [59-61] suggest that NO is a critical factor in the BCG-mediated antitumor effect [56]. NO can stimulate cell growth and cell differentiation when present at low concentrations, whereas high concentrations often result in cytotoxic effects [59]. Tate et al. [50] demonstrated that iNOS induction within renal carcinoma cells (CL-2 and CL-19) in response to IFN-γ caused a robust and sustained accumulation of endogenous NO that resulted in 80-85 % growth inhibition of the CL-2 and CL-19 cell lines. In patients with bladder cancer who had received BCG treatment, iNOS-like immunoreactivity was found not only in the urothelial cells but also in macrophages in the submucosa [56]. Koskela et al. [59] verified that endogenously formed NO was significantly increased in BCG-treated patients, who showed a ten-fold increase in iNOS mRNA expression compared to healthy controls. NO production was also increased in culture supernatants of macrophages stimulated by P-MAPA from both healthy dogs and dogs infected with visceral leishmaniasis [62]. Thus, it can be concluded that the activation of the interferon signaling pathway induced by P-MAPA led to an increase in iNOS protein levels in the NMIBC animal model, resulting in increased apoptosis and histopathological recovery (Additional file 4: Figure S2b). Furthermore, cell death may depend on NO-stimulated signaling pathways leading to gene expression, involving the tumor suppressor p53 [63-65]. Activation of p53 by NO has been observed in many cell types [66,67]. NO-induced p53 contributes to various cell type-specific biological effects of NO, such as induction of apoptosis, inhibition of proliferation and tumor suppression [66-68]. Besides that, p53 controls a remarkable number of physiologic functions, including energy metabolism, differentiation and reactive oxygen species production, and is stabilized and activated in response to diverse stress signals, such as DNA damage, hypoxia, oncogene activation, drugs and nucleotide depletion [64]. Cells possessing a fully functional p53 pathway can either arrest and repair the damage caused by these untimely stresses or undergo p53-dependent apoptosis. BAX is considered an important target gene required for p53-dependent apoptosis [64]. Induction of p53 by NO is preceded by a rapid decrease in Mdm2 protein, which may enable p53 levels to rise early after exposure to NO [67]. Wang et al. [67] showed that NO promoted p53 nuclear retention and inhibited Mdm2-mediated p53 nuclear export, indicating that this effect is mediated by ATM-dependent phosphorylation of p53 on serine 15. These findings imply that, by augmenting p53 nuclear retention, NO can sensitize tumor cells to p53-dependent apoptosis. Several studies suggest that antiangiogenic therapy is sensitive to p53 status in tumors, implicating a role for p53 in the regulation of angiogenesis [18,19,69]. A connection between p53 and tumor angiogenesis was revealed in 1994, when Dameron et al. [69] proposed that suppression of angiogenesis by thrombospondin-1 could represent a new mechanism of tumor suppression by p53.
Other evidence emerged that wild-type p53 could prevent incipient tumors from becoming angiogenic [70]. Teodoro et al. [19] demonstrated that p53-mediated tumor suppression is achieved in part by at least two potent angiogenesis inhibitors, endostatin and tumstatin. In addition, these authors showed that ectopic expression of α(II) collagen prolyl-4-hydroxylase in human tumor cells implanted into immunodeficient mice resulted in "near-complete" tumor suppression compared with mice implanted with tumor cells that did not express α(II) collagen prolyl-4-hydroxylase, and they associated these results with suppression of tumor angiogenesis by endostatin or tumstatin. Thus, the present study demonstrated an important antitumor effect of P-MAPA immunotherapy, based on an increase in endostatin protein levels and a decrease in VEGF protein levels in the NMIBC animal model. Therefore, the induction of the interferon signaling pathway and the increase in wild-type p53 protein levels by P-MAPA led to important antitumor effects, not only by suppressing abnormal cell proliferation but also by preventing the continuous expansion of the tumor mass through suppression of angiogenesis. Conclusions Taking into account the present data, the mechanism of action of P-MAPA was clearly distinct from that of BCG. These findings are relevant to the treatment of patients with NMIBC at high risk of progression who are refractory or resistant to intravesical therapy with BCG.
Semi-supervised active learning using convolutional auto-encoder and contrastive learning Active learning is a field of machine learning that seeks to find the most efficient labels to annotate with a given budget, particularly in cases where obtaining labeled data is expensive or infeasible. This is becoming increasingly important with the growing success of learning-based methods, which often require large amounts of labeled data. Computer vision is one area where active learning has shown promise in tasks such as image classification, semantic segmentation, and object detection. In this research, we propose a pool-based semi-supervised active learning method for image classification that takes advantage of both labeled and unlabeled data. Many active learning approaches do not utilize unlabeled data, but we believe that incorporating these data can improve performance. To address this issue, our method involves several steps. First, we cluster the latent space of a pre-trained convolutional autoencoder. Then, we use a proposed clustering contrastive loss to strengthen the latent space's clustering while using a small amount of labeled data. Finally, we query the samples with the highest uncertainty to annotate with an oracle. We repeat this process until the end of the given budget. Our method is effective when the number of annotated samples is small, and we have validated its effectiveness through experiments on benchmark datasets. Our empirical results demonstrate the power of our method for image classification tasks in terms of accuracy. Introduction In recent years, computer vision has made significant advancements, primarily driven by machine learning and, more specifically, deep learning. However, these methodologies are highly dependent on having a substantial number of labeled samples. Acquiring such a large volume of data poses a significant challenge for several reasons. Initially, the process of annotating images is time-intensive, ranging from a few seconds for simple image classification to several hours for more complex image segmentation tasks. This makes it impractical to annotate a large data set in a short time frame. Additionally, image annotation often requires specialized expertise, adding another layer of complexity. In some cases, annotations require professionals, which increases the cost and complexity of the annotation process. An effective strategy to address these issues involves employing an active learning methodology. Active Learning, often abbreviated as AL, entails the process of selecting and prioritizing data that require labeling to have the most significant impact on the training of a machine learning task. Through the utilization of AL, machine learning algorithms can enhance their accuracy using a reduced number of training labels, thereby economizing time and resources during model training. Settles (2009) provides a comprehensive overview of various AL techniques in machine learning. In essence, there are three primary scenarios where active learning can be beneficial for those seeking to maximize accuracy while minimizing the number of labeled instances, typically involving the submission of queries in the form of unlabeled data instances to be labeled by an oracle, such as a human annotator. These scenarios include membership query synthesis (Angluin, 1988), stream-based selective sampling (Atlas et al., 1989), and pool-based sampling. In this research, we will be focused on the third scenario, pool-based sampling (Lewis, 1995).
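Since the paper centers on the pool-based scenario, a minimal sketch of a generic pool-based active learning loop may help fix ideas. The helper names (model.fit, model.informativeness, oracle_label) are hypothetical placeholders of ours, not an API from the paper.

```python
# Minimal sketch of a generic pool-based active learning loop.
# model.fit, model.informativeness and oracle_label are hypothetical placeholders.
def pool_based_active_learning(P_l, P_u, budget, batch_size, model):
    """P_l: list of (x, y) labeled pairs; P_u: list of unlabeled samples x."""
    labeled_so_far = 0
    while labeled_so_far < budget and P_u:
        model.fit(P_l)                          # train on the current labeled pool
        scores = model.informativeness(P_u)     # e.g. predictive entropy per sample
        # pick the batch_size most informative unlabeled samples
        ranked = sorted(range(len(P_u)), key=lambda i: scores[i], reverse=True)
        query_idx = ranked[:batch_size]
        for i in sorted(query_idx, reverse=True):
            x = P_u.pop(i)
            P_l.append((x, oracle_label(x)))    # the human oracle annotates x
        labeled_so_far += len(query_idx)
    return model, P_l
```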
In numerous practical scenarios, it is often straightforward to gather a substantial amount of unlabeled data, which serves as a driving force behind the adoption of the pool-based sampling method. Let us consider a pool of unlabeled data P_u alongside a limited quantity of labeled data P_l. In pool-based sampling, in each query we sample a small amount of data from P_u, annotate it with a human oracle, and then add it to P_l. Assuming we have a good query strategy that selects the most relevant samples from P_u, P_l will be a good representative group of P_u. Employing a pool-based sampling active learning approach, where the model selects samples for annotation, can decrease the quantity of labeled data required to achieve a similar model accuracy. This represents a significant benefit of active learning for deep learning tasks, which has only recently started to be investigated (Gal et al., 2017; Sener and Savarese, 2017; Sinha et al., 2019). As previously mentioned, in numerous practical scenarios there is a significant volume of unlabeled data, which motivates our study. In this research, we present a novel approach that utilizes pool-based active learning to fully exploit all unlabeled data. The method we suggest begins by clustering the unlabeled data in the latent space. Then, it proceeds to choose the samples with the highest entropy based on their representation in the latent space and the clustering within that space. Our central concept involves clustering the unlabeled data from P_u, querying samples with the highest entropy for human annotation, and employing labeled data from P_l to refine the clustering via our suggested clustering contrastive learning. The above process iterates until either a satisfactory level of accuracy is achieved, the model converges, or the annotation budget is exhausted. In addition to addressing the challenges posed by limited labeled data, our research holds promise for real-world applications where unlabeled data is abundant. By leveraging a pool-based active learning approach, our method enables the effective utilization of unlabeled data in scenarios where acquiring labeled samples is impractical or costly, such as medical imaging diagnosis, satellite image analysis, and industrial inspection. This capability maximizes the efficiency and effectiveness of machine learning models in practical settings, facilitating improved accuracy and insights from limited labeled samples. Furthermore, our approach can identify and prioritize hard examples for labeling, ensuring that the annotated data provide the most informative training signal for the model. The contributions of the research are: • A new approach is proposed to integrate Deep Clustering and Deep Active Learning (DAL) in order to maximize the extraction of information from both labeled and unlabeled data. • A novel contrastive clustering loss (CCL) is proposed that has the potential to enhance the transition from unsupervised clustering to a semi-supervised framework. • A high level of accuracy in image classification is achieved with a reduced number of labeled samples. Previous work Deep clustering There has been significant research on deep clustering in recent years. Most deep clustering algorithms can be categorized into two groups. The first group includes two-stage clustering algorithms that first generate a data representation before applying clustering. These algorithms leverage existing unsupervised deep learning frameworks and techniques.
For instance, Tian et al. (2014) and Peng et al. (2016) utilize autoencoders to learn low-dimensional features of original data samples and subsequently apply conventional clustering algorithms like k-means to the learned representations. Mukherjee et al. (2019) introduce ClusterGAN, a generative adversarial network that clusters the latent space by sampling latent variables from a combination of one-hot encoded variables and continuous latent variables. The second group comprises approaches that simultaneously optimize feature learning and clustering. These algorithms aim to explicitly define a clustering loss, resembling the classification error in supervised deep learning. Yang et al. (2016) propose a recurrent framework that integrates feature learning and clustering into a unified model with a weighted triplet loss, optimizing it end-to-end. Xie et al. (2016) suggest a clustering loss (DEC) that operates on the latent space of an autoencoder, enabling the simultaneous acquisition of feature representations and cluster assignments. Building upon this, Guo et al. (2017) enhance the method with DCEC (Deep Clustering with Convolutional Autoencoders), proposing convolutional autoencoders (CAE), which surpasses DEC while ensuring the preservation of local structure. This study directly adopts the clustering loss and clustering layer from DCEC. We briefly review their definitions. The trainable parameters of the clustering layer are the cluster centers μ_j, j = 1, ..., K. The intuition behind the mathematical operation of that layer is that it maps each embedded point z_i in the latent space into a soft label q_i by the Student's t-distribution (Van der Maaten and Hinton, 2008):

q_ij = (1 + ||z_i − μ_j||²)^(−1) / Σ_j′ (1 + ||z_i − μ_j′||²)^(−1)    (1)

where q_ij is the j-th entry of q_i, representing the probability of z_i belonging to cluster j. The clustering loss is defined as:

L_clu = KL(P ‖ Q) = Σ_i Σ_j p_ij log(p_ij / q_ij)    (2)

where P is the target distribution, defined as:

p_ij = (q_ij² / Σ_i q_ij) / Σ_j′ (q_ij′² / Σ_i q_ij′)    (3)
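To make the adopted DCEC machinery concrete, here is a small NumPy sketch of the soft assignment (Eq. 1), the target distribution (Eq. 3) and the KL clustering loss (Eq. 2). Variable names are ours; the formulas follow the standard DEC definitions reviewed above.

```python
import numpy as np

def soft_assign(Z, mu):
    """Eq. 1: Student's-t soft assignment of latent vectors Z (n, d)
    to cluster centers mu (K, d)."""
    d2 = ((Z[:, None, :] - mu[None, :, :]) ** 2).sum(-1)  # squared distances (n, K)
    q = 1.0 / (1.0 + d2)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    """Eq. 3: sharpened target distribution P derived from Q."""
    weight = q ** 2 / q.sum(axis=0)          # q_ij^2 / f_j
    return weight / weight.sum(axis=1, keepdims=True)

def clustering_loss(p, q):
    """Eq. 2: KL(P || Q)."""
    return float((p * np.log(p / q)).sum())

# Tiny usage example with random data (illustration only)
rng = np.random.default_rng(0)
Z, mu = rng.normal(size=(6, 10)), rng.normal(size=(3, 10))
q = soft_assign(Z, mu)
p = target_distribution(q)
print(clustering_loss(p, q))
```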
Active learning Active learning is a subfield of machine learning empowering algorithms to select and prioritize the most informative data points for labeling, aiming to enhance model performance using less training data. Active learning scenarios commonly occur in three main contexts: 1. Membership query synthesis: in this scenario (Angluin, 1988), the learner synthesizes new instances to be labeled by an oracle, aiming to generate maximally informative instances, particularly beneficial when labeled data is scarce or expensive to obtain. 2. Stream-based selective sampling: this scenario (Atlas et al., 1989) involves a continuous stream of unlabeled instances, with the learner making real-time decisions on which instances to label based on the current model state and incoming data. Such scenarios are common in sequential data streams like online learning or sensor data. 3. Pool-based sampling: here (Lewis and Gale, 1994), the learner is presented with a fixed pool of unlabeled instances and selects a subset for labeling, aiming to identify the most informative instances. This approach involves evaluating the informativeness of unlabeled samples, often utilizing query strategies like uncertainty sampling (Lewis and Gale, 1994), which Liu and Li (2023) have recently examined in depth, or query-by-committee (Seung et al., 1992). Active learning plays a crucial role in determining which data should be labeled to maximize the effectiveness of training supervised models. Traditional active learning methods are comprehensively reviewed by Settles (2009), while Ren et al. (2021) offer insights into the more contemporary Deep Active Learning (DAL) approach, integrating active learning with deep learning methodologies. Notable active learning methodologies are uncertainty sampling (Lewis and Gale, 1994) and Variational Adversarial Active Learning (VAAL) (Sinha et al., 2019). VAAL integrates variational inference and adversarial training, leveraging a generator network to produce informative data points and a discriminator network to differentiate between real and generated instances, aiding in sample selection. Additionally, LADA (Kim et al., 2021) incorporates data augmentation into the acquisition step. Moreover, approaches like the Core-Set Approach (Sener and Savarese, 2017) and Bayesian Active Learning by Disagreement (BALD) (Houlsby et al., 2011) offer strategies for selecting informative instances, with Core-Set identifying a compact, diverse subset of unlabeled data, and BALD leveraging Bayesian inference for strategic instance selection. These methodologies collectively contribute to enhancing model training efficiency and performance in active learning settings. Semi-supervised learning Semi-supervised learning (SSL) is a specialized form of supervised learning that involves training on a small set of labeled data along with a large set of unlabeled data. Positioned between supervised and unsupervised learning, SSL is commonly used in scenarios where the availability of labeled data is limited due to constraints such as budgetary restrictions or data ambiguity, where the class of a sample is uncertain. Semi-supervised algorithms are designed to address such challenges.

Algorithm 1 (Contrastive active learning), excerpt: L_rec is computed using Eq. 5; L_clu is computed using Eq. 2; L_ccl is computed using Eq. 7; if updateCentroids is True, then P ← updateP(z_ul), computed using Eq. 1.

In this study, we propose an SSL approach for the classification of image data, aiming to leverage the benefits of both active learning (AL) and SSL. To achieve this, we suggest a clustering contrastive loss (CCL) in conjunction with unsupervised training. Entropy Entropy (Shannon, 1948) is an information-theoretic measure of uncertainty. It quantifies the amount of information needed to encode a distribution. In active learning, entropy is widely used to select the most uncertain or ambiguous samples for annotation. For a soft cluster assignment q_i, the entropy can be written as:

H(q_i) = −Σ_j q_ij log q_ij    (4)

Method This study proposes a novel active learning approach based on pool-based sampling. It involves training a convolutional autoencoder (CAE) (Masci et al., 2011) to learn a low-dimensional latent space for both labeled and unlabeled samples. The latent space is then clustered using a clustering layer. After each iteration of the active learning process, a subset of data points associated with the latent space vectors is selected for annotation. To leverage information from the labeled data, the study introduces the contrastive clustering loss (CCL), which is a modified version of the contrastive loss (Chopra et al., 2005). The CCL operates on the latent space vectors, pulling samples of the same class toward their cluster center while pushing them away from the other cluster centers.
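Given the soft assignments of Eq. 1, the uncertainty query of Eq. 4 reduces to ranking unlabeled samples by assignment entropy. The sketch below is our own illustrative code, reusing the soft_assign function from the earlier snippet, and selects the n most ambiguous samples.

```python
import numpy as np

def query_by_entropy(Z_unlabeled, mu, n_query):
    """Rank unlabeled latent vectors by assignment entropy (Eq. 4)
    and return the indices of the n_query most uncertain samples."""
    q = soft_assign(Z_unlabeled, mu)          # (n, K) soft assignments, Eq. 1
    eps = 1e-12                               # guard against log(0)
    H = -(q * np.log(q + eps)).sum(axis=1)    # per-sample entropy, Eq. 4
    return np.argsort(H)[-n_query:][::-1]     # highest-entropy indices first
```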
Problem definition and notation The main focus of this study is a semi-supervised active learning approach designed for image classification. Assuming there is a large set of unlabeled images P_u and a small set of labeled images P_l, along with a predetermined annotation budget, the goal is to select the most informative samples from the unlabeled set P_u to enhance the classification accuracy. These selected samples will be labeled by a human annotator and incorporated into the labeled set P_l. The initial step involves training a convolutional autoencoder (CAE) to learn a condensed representation of the images, referred to as latent space features. Each image i is transformed by the CAE into a feature vector z_i in the latent space. Subsequently, all latent space features z_i, ∀i ∈ P_l ∪ P_u, are clustered, with μ_j denoting the centroid of the j-th cluster. Finally, the proposed cluster contrastive loss L_ccl (see Eq. 7) is applied to the labeled samples z_l, ∀l ∈ P_l. This loss function aims to attract each feature vector z_i toward its cluster center μ_j while pushing it away from μ_n for all n ≠ j. Suggested method The primary objective of this study is image classification, aiming to categorize images into their respective classes with optimal accuracy by leveraging labeled images from the restricted labeled data pool P_l. To achieve this, we introduce a pool-based active learning strategy that integrates contrastive learning and clustering, mutually enhancing their performance in every training cycle. Our approach follows a human-in-the-loop methodology, in which an active learning loop comprises model training, image querying, and annotation by an oracle. This iterative process continues until the budget is fully utilized. The model consists of a CAE (Masci et al., 2011) and a clustering layer (Xie et al., 2016). Samples from P_l and P_u are fed into the model based on the active learning training stage. During each iteration of the active learning process, samples from P_u are chosen for labeling. The proposed module is depicted in Figure 1. Prior to commencing the active learning iterations, certain initial steps are carried out. Initially, our CAE is pre-trained by reconstructing images from P_u and P_l using the MSE loss (Eq. 5). This process allows the CAE to acquire knowledge of lower-dimensional features within the dataset. Once the network is trained, the resulting latent space provides a feature z_i, ∀i ∈ P_u ∪ P_l. Subsequently, the cluster centroids in the clustering layer are initialized with the average values of the latent-space vectors of each class in our labeled pool P_l, as depicted in Eq. 6. Next, we incorporate clustering into the training of the CAE by clustering the acquired latent space with the utilization of a clustering layer (Guo et al., 2017) and employing a Kullback-Leibler divergence loss (Csiszár, 1975), as shown in Eq. 2. The primary objective of this stage is to organize the latent space into clusters, ensuring that similar image pairs produce proximate feature vectors within the latent space. In the final stage, we incorporate the image labels from P_l. To utilize these labels effectively, we employ the suggested cluster contrastive loss L_ccl (Eq. 7).
This loss is applied on all vectors in the latent space derived from P_l, meaning that solely annotated images are taken into account by this loss. The CCL works by either pulling the feature vectors z_i in the latent space toward their respective cluster center μ_i or pushing them away from other cluster centers μ_j where j ≠ i. This method allows us to enhance the purity of clusters while using a limited number of labeled images from P_l; during this stage we continue to make use of the previous clustering stage. Finally, we add all those losses and update the parameters of the model. The process is reiterated until reaching convergence or utilizing the entire annotation budget. At the end of every active learning iteration, we perform query sampling to choose the n images whose features exhibit the highest entropy over the cluster assignment. These features are the most ambiguous in terms of their cluster assignment, and by labeling them, we gain valuable insights that the model failed to generalize. Algorithm 1 presents generic pseudo-code for this approach; Table 1 elucidates the symbols used in the algorithm, providing clarity on their respective meanings and roles within the context of the algorithm.

FIGURE 2 (caption): An intuitive explanation of the contrastive clustering loss: the black dots correspond to samples assigned to a cluster, the blue dot symbolizes the cluster center, and the green dot represents a sample from a different cluster. This loss function aims to move the black dots closer to the blue dot while pushing the green dot farther away from the blue dot.

Cluster contrastive loss The cluster contrastive loss (CCL) is a revised variant of the supervised contrastive loss introduced in Khosla et al. (2020). To enhance the purity of the clusters, the proposed approach incorporates the labeled images from P_l into the clustering procedure. Consequently, this results in the adoption of the proposed CCL. In the expression for the CCL (Eq. 7), c ∈ C is the class index, I_c is the set of all sample indexes in class c, I_c′ is the set of all sample indexes in all classes besides class c, z_i is the i-th sample in the latent space, μ is the center of a cluster, and τ ∈ R+ is a scalar temperature parameter. An intuition for the loss is shown in Figure 2. This loss involves both pulling samples toward their cluster center and pushing them away from other, unmatched cluster centers simultaneously. It specifically affects the labeled data points. The CCL serves as a complementary approach to the unsupervised methods we currently employ, and empirical experiments indicate their mutual benefit. Figure 2 provides a visual representation of the CCL as defined in Eq. 7.
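The exact expression of Eq. 7 did not survive extraction. The PyTorch sketch below implements one centroid-based form that is consistent with the description above (pull z_i toward μ_c, push it away from the other centers, temperature τ); it is our reconstruction under that assumption, not necessarily the authors' exact formula.

```python
import torch
import torch.nn.functional as F

def cluster_contrastive_loss(z, labels, mu, tau=0.1):
    """Hedged reconstruction of the CCL (Eq. 7): a softmax over
    sample-to-centroid similarities, pulling each labeled latent vector
    z_i toward its class centroid mu_c and pushing it from the others.
    z: (n, d) labeled latent vectors; labels: (n,) class indices;
    mu: (K, d) cluster centers; tau: temperature."""
    z = F.normalize(z, dim=1)                  # cosine-style similarity
    mu = F.normalize(mu, dim=1)
    logits = z @ mu.t() / tau                  # (n, K) similarities to all centers
    return F.cross_entropy(logits, labels)     # -log softmax at the true center

# Usage sketch with random tensors (illustration only)
z = torch.randn(8, 10)
labels = torch.randint(0, 3, (8,))
mu = torch.randn(3, 10)
print(cluster_contrastive_loss(z, labels, mu))
```

Written this way, the loss is simply a cross-entropy over sample-to-centroid similarities, which makes the pull/push behaviour described in the text explicit.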
The need for the contrastive clustering loss During the training of the CAE, we are provided with representation vectors in the latent space. In order to group the latent space into clusters corresponding to each class, as elaborated in Section 2.1, the clustering layer is utilized. This layer aims to streamline the process of image classification. Nevertheless, the clustering mechanism is proficient at grouping vectors with high certainty, which may result in certain images not being grouped together, particularly those from the same class that map to distant vectors in the latent space. Therefore, the integration of the suggested contrastive clustering loss becomes essential. The suggested CCL works on adjusting vectors that were not properly aligned by the clustering process. Through this loss function, we can enhance the separation of classes in the latent space, even when dealing with a limited number of labeled images or when images are challenging to cluster due to low confidence in the P-distribution of the clustering process. Pre-training During the initial phase, we train the convolutional autoencoder. We use all the images from the unlabeled data pool P_u and the labeled data pool P_l. Each image x_i ∼ P_l ∪ P_u is passed through the encoder, which provides a lower-dimensional latent vector z_i = σ(x_i * W), where W denotes the weights of the encoder layers, σ is a nonlinear activation function, and * is a convolution operation. The latent vector z_i is passed through the decoder, which produces x̂_i, a reconstruction of the original image x_i: x̂_i = σ(z_i * U), where U denotes the weights of the decoder. x̂_i and x_i enter the MSE loss (Eq. 5), which is high when x_i looks different from x̂_i and low when they are similar. At the end of this step, the CAE has trained weights W and U. Initialization and update of centroids Once the CAE is pre-trained, the centroids in the clustering layer are initialized using the average value of each class projection from P_l in the latent space. Subsequently, every 80 iterations, the target distribution P is updated following Eq. 3. As detailed in Section 2.1, the centroids represent the weights of the clustering layer, and therefore they are adjusted during each training iteration. Query samples In this stage, our objective is to acquire image annotations by engaging a human annotator in the active learning procedure. At this point, we have already acquired a clustered latent space generated by the model itself. Any vectors within the latent space that are not clustered or are distant from the cluster center are identified as hard examples, representing images that require annotation. We select samples linked to vectors in the latent space that do not clearly belong to any cluster and annotate them based on the uncertainty criterion detailed in Eq. 4. More specifically, we target the vectors that exhibit the highest entropy in the cluster distribution. A visual representation of this approach is shown in Figure 3. By focusing on a small number of samples associated with feature vectors located far from the cluster center, we gain insight into these samples and the clusters they are associated with, thereby enhancing the overall clustering process.
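A small sketch of the centroid initialization described above (Eq. 6): each center starts as the mean of the labeled latent vectors of its class. The function name and signature are ours.

```python
import numpy as np

def init_centroids(Z_labeled, labels, n_classes):
    """Eq. 6: initialize each cluster center as the mean latent vector
    of the labeled samples belonging to the corresponding class."""
    return np.stack([Z_labeled[labels == c].mean(axis=0)
                     for c in range(n_classes)])
```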
Combination of contrastive learning and clustering When the suggested clustering method is applied to the latent space, there may be instances where some feature vectors are not accurately clustered. This situation can arise when feature vectors within the latent space that should belong to the same cluster are spatially distant from each other. As a result, the clustering layer may encounter challenges in grouping these feature vectors effectively. To address this issue, we introduce our proposed CCL, which works to minimize the distance between distant feature vectors that belong to the same cluster while maximizing the separation between those that do not. Furthermore, we incorporate a query mechanism to select challenging examples (i.e., samples that are significantly distant from their corresponding cluster center) for manual annotation. By integrating these strategies and progressively bringing the feature vectors closer together in a semi-supervised fashion, followed by clustering using the clustering layer, we improve the purity of the clustering outcomes. Implementation details In this work, we used a convolutional autoencoder for our model. The encoder consists of 3 convolutional layers, a batch normalization layer, and a linear embedding layer with a size of 10. The decoder consists of a linear de-embedding layer, 3 deconvolutional layers, and a batch normalization layer. The clustering layer weights are initialized with the mean of the latent space clusters using the starting labeled images in P_l, and are then updated with the KL loss using the Q and P distributions as described earlier. The P-distribution, or target distribution, is re-initialized every 80 steps. Each benchmark dataset is split into a 20% validation set and an 80% training set, which is further divided into two data pools: a labeled data pool P_l and an unlabeled data pool P_u. First, we pre-trained the model for 50 epochs. Then each active learning training iteration was set to 10 epochs, for a duration of 20 active learning loops overall. In each active learning loop, we query 250 image samples using the uncertainty strategy for annotation. Datasets We have evaluated our method on image classification tasks. We have used the MNIST (LeCun, 1998), FashionMNIST (Xiao et al., 2017), and USPS (Hull, 1994) datasets. Both the MNIST and the FashionMNIST datasets have 60K grayscale images of size 28x28. Examples from the MNIST and FashionMNIST datasets can be viewed in Figure 4; USPS has 9298 grayscale images of size 16x16. An example of the USPS dataset can be viewed in Figure 5. Performance measurement We evaluate the performance of our method on the image classification task by measuring the accuracy over different amounts of labeled images, from 500 to 5k images, rising by 250 images from query to query. The results of all our experiments are averaged over 3 runs. Experiments details We begin our experiments with an initial labeled pool of size 250, and in each iteration of the training loop we provide another 250 images that are annotated by the human oracle and added to the initial labeled pool P_l. Training is repeated on the new training set with the new labeled images. We assume that the dataset is balanced and the oracle annotations are ideal.
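The implementation details above pin down only the layer counts and the embedding size; the sketch below fills in the remaining choices (channel widths, kernel sizes, a 28x28 input) as assumptions of ours.

```python
import torch
import torch.nn as nn

class CAE(nn.Module):
    """Sketch of the paper's CAE: 3 conv layers + a batch-norm layer + a
    linear embedding of size 10; the decoder mirrors the encoder. Channel
    widths, kernel sizes and the 28x28 input size are our assumptions."""
    def __init__(self, latent_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),    # 28 -> 14
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # 14 -> 7
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),  # 7 -> 4
            nn.BatchNorm2d(128), nn.Flatten(),
            nn.Linear(128 * 4 * 4, latent_dim),
        )
        self.decoder_fc = nn.Linear(latent_dim, 128 * 4 * 4)  # linear de-embedding
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1), nn.ReLU(),  # 4 -> 7
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1,
                               output_padding=1),
            nn.BatchNorm2d(32), nn.ReLU(),                                   # 7 -> 14
            nn.ConvTranspose2d(32, 1, 3, stride=2, padding=1,
                               output_padding=1), nn.Sigmoid(),              # 14 -> 28
        )

    def forward(self, x):
        z = self.encoder(x)
        h = self.decoder_fc(z).view(-1, 128, 4, 4)
        return z, self.decoder(h)

# Pre-training step with the MSE reconstruction loss (Eq. 5)
model, loss_fn = CAE(), nn.MSELoss()
x = torch.rand(4, 1, 28, 28)
z, x_hat = model(x)
loss = loss_fn(x_hat, x)
```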
Effectiveness of the CCL In Table 2, we present an ablation study comparing our proposed method with the use of clustering alone. The study evaluates the performance of both approaches on the MNIST and USPS datasets. The results demonstrate that integrating the CCL with clustering, using only 3% of labeled data, significantly improves model performance. The CCL operates by encouraging the model to learn discriminative representations within clusters while simultaneously enforcing compactness around cluster centroids. By incorporating this loss function into our framework, we guide the clustering process to yield clusters that not only capture inherent data structures but also ensure inter-class separability. This results in more coherent and well-separated clusters, facilitating better decision boundaries and ultimately leading to improved classification accuracy. Additionally, Figure 8 visually illustrates the difference between using clustering alone and incorporating the CCL into the clustering process. Comparing with other methods We conducted a comprehensive evaluation of our proposed method across multiple datasets, including MNIST, FashionMNIST, and USPS, as detailed in Tables 3-5. Our results showcase significant performance improvements over baseline methods, particularly evident in scenarios with limited labeled data. When compared to state-of-the-art techniques such as the Core-Set Approach (Sener and Savarese, 2017), Variational Adversarial Active Learning (VAAL) (Sinha et al., 2019), and Bayesian Active Learning by Disagreement (BALD) (Houlsby et al., 2011), our approach consistently demonstrates competitive performance. Figures 9-11 show our method compared to the others. Notably, leveraging pre-trained clustering models contributes to achieving relatively high accuracy, particularly in scenarios with a scarcity of labeled samples. Experiment analysis To comprehensively validate the efficacy of our approach, we conducted an in-depth analysis of clustering quality throughout the training process. We monitored the evolution of clustering performance and visualized the t-SNE projections of learned latent space representations, as depicted in Figures 6, 7 and 12. These visualizations offer insights into the structure of the learned representations, revealing distinct clusters corresponding to each class. The observed trends in clustering align well with the accuracy improvements reported in Tables 3-5, corroborating the effectiveness of our method. In addition to accuracy comparisons, it is worth delving deeper into the performance metrics of our approach compared to baseline methods. For instance, on the MNIST dataset, our method achieves an accuracy of 91% with only 3% labeled data, outperforming the Core-Set Approach, which attains 80.5% accuracy. This notable performance gain underscores the superiority of our method in leveraging limited labeled data effectively. Discussion The integration of convolutional autoencoders, clustering, and a novel clustering contrastive loss in our semi-supervised active learning approach presents a unique and promising strategy for leveraging both labeled and unlabeled data in image classification tasks. By combining clustering with active learning, our method offers a distinctive approach that distinguishes it from previous methodologies.
A significant strength of our approach lies in its ability to extract valuable insights from unlabeled data by organizing it into clusters, thereby guiding the query selection process in active learning. However, the effectiveness of our method may depend on the quality of clustering initialization, which could potentially limit performance, particularly in scenarios involving complex, high-dimensional data. Exploring the applicability of our approach beyond image classification domains warrants further investigation. Despite these potential limitations, our research represents a notable advancement in the realm of semi-supervised active learning. By integrating deep clustering, active learning, and contrastive learning principles, we address challenges associated with data scarcity, thereby enhancing model performance in resource-constrained settings. Moving forward, future research endeavors could explore the development of more robust clustering techniques, alternative representation learning methods, and synergistic combinations with other active learning strategies to further enhance performance and generalization capabilities. Theoretically, the clustered representations derived by our approach hold promise for facilitating various downstream tasks, including data augmentation, domain adaptation, and the incorporation of weak or noisy labels. Such capabilities could prove invaluable in addressing the challenges posed by limited annotation scenarios. While our work contributes to the field, it also underscores the inherent challenges and opportunities associated with semi-supervised learning in real-world applications, paving the way for continued advancements and innovation in this domain. It is essential to acknowledge the use of a smaller model architecture in our experiments. The complexity introduced by clustering necessitated the use of a smaller model to maintain tractability and computational efficiency. While this choice may have influenced our absolute performance metrics, it enabled us to explore the feasibility and efficacy of our approach within practical constraints. It is plausible that in subsequent studies, researchers may employ larger, more complex models to further improve performance. Conclusions and future work In this study, we have introduced a novel approach to image classification through a pool-based semi-supervised active learning technique. By integrating deep clustering and deep active learning, we aim to enhance classification accuracy using fewer labeled images. Our method involves clustering feature vectors in the latent space that correspond to images from P_l and P_u, thereby obtaining a more informative representation of the latent space to support the active learning procedure. We have also incorporated a clustering contrastive loss to enhance the clustering of the latent space even with a limited number of labeled images. Cases where feature vectors in the latent space are not well grouped together or are far from their respective cluster centers are recognized as hard examples and are then queried for annotation by a human oracle.
Our empirical experiments demonstrated that our method achieves high classification accuracy even with a small number of annotations. The iterative combination of clustering with the suggested contrastive learning and query method leads to a more separated latent space, which in turn facilitates the classification process. Thanks to the clustering step, our method achieves high accuracy from the beginning. However, the clustering step may be a drawback for complicated datasets, as it can be challenging to cluster them effectively. We believe that future work can improve the clustering process to provide better clustering initialization even for complex datasets. We used a convolutional autoencoder (CAE) to map samples to the latent space, but future work could explore more robust methods like a variational autoencoder, which creates smoother and more connected latent spaces and would help to improve clustering. Furthermore, our method is currently designed for image classification tasks, but it could be extended to other computer vision tasks such as semantic segmentation and object detection by attaching a network head suitable for the requested task.

Figure and table captions:
FIGURE: Visual representation of the proposed methodology. Images from P_l and P_u are passed through the CAE and provide feature vectors in the latent space; the feature vectors are clustered by the clustering layer and the contrastive clustering loss; then the n feature vectors from the latent space with the highest entropy are queried and annotated by a human oracle. This process is repeated until the end of the annotation budget or model convergence.
FIGURE: t-SNE visualization of the query method; the red circles represent samples with high entropy.
FIGURE: Visualization of the USPS dataset.
FIGURE: Visualization of the MNIST and FashionMNIST datasets; FashionMNIST on the left and MNIST on the right.
FIGURE: t-SNE visualization of the clustered MNIST latent space after convergence of our method with % of annotated samples.
FIGURE: On the left, t-SNE visualization after clustering; on the right, t-SNE visualization after applying the CCL in conjunction with clustering.
FIGURE: Accuracy of our method compared to other state-of-the-art methods as a function of the number of labeled images for the MNIST dataset.
FIGURE: Accuracy of our method compared to other state-of-the-art methods as a function of the number of labeled images for the FashionMNIST dataset.
FIGURE: Accuracy of our method compared to other state-of-the-art methods as a function of the number of labeled images for the USPS dataset.
FIGURE: t-SNE visualization of the clustered USPS latent space after convergence of our method with % of annotated samples.
TABLE: Algorithm symbols and their explanations.
TABLE: MNIST accuracy results for entropy sampling (Wang and Shang), BALD (Gal et al.), VAAL (Sinha et al.), Core-set (Sener and Savarese) and our method with different percentages of the data labeled.
Spatial distributions of railway-generated ground vibrations at and around sleeper passage frequencies In this paper, spatial distributions of ground vibrations generated by railway trains travelling at conventional speeds on straight tracks are investigated theoretically. The main attention is paid to calculations of spatial distributions of ground vibrations generated by a single axle load at and around sleeper passage frequencies, which are defined by train speeds and sleeper periodicity. It is demonstrated that, at sleeper passage frequencies, generated ground vibrations represent plane waves propagating symmetrically away from the track in the directions normal to it. At frequency components that are slightly above or below sleeper passage frequencies, generated ground vibrations still remain plane waves, but they now propagate at certain angles with respect to the normal directions to the track. Introduction Over the last decades, high-speed railways underwent rapid development throughout the world. As many other means of transportation, high-speed railways encounter a number of environmental problems. In particular, ground vibrations, mostly Rayleigh surface waves, generated by high-speed trains constitute one of the major environmental problems that must be mitigated to allow high-speed trains to be used in densely populated areas (see e.g. the recent book [1]). The intensity of railway-generated ground vibrations generally becomes larger at higher train speeds. The increase in amplitudes of generated ground vibrations can be especially large when train speeds approach the velocity of Rayleigh surface waves in the supporting ground. As was theoretically predicted by the present author [2-5], if a train speed v exceeds the Rayleigh wave velocity c_R in the supporting soil, a ground vibration boom occurs. This phenomenon is similar to a sonic boom from supersonic aircraft, and it is associated with a very large increase in generated ground vibrations, as compared to the case of conventional trains. The existence of ground vibration boom at trans-Rayleigh train speeds has later been confirmed experimentally [6,7]. The increased attention to the problems of ground vibrations from high-speed trains is reflected in a growing number of theoretical and experimental investigations in this area (see e.g. [8-27]). In addition to ground vibration problems typical for trains approaching or exceeding the Rayleigh wave velocity in the ground, there are also many problems associated with trains travelling at lower (conventional) speeds. In particular, for the important case of generation of ground vibrations at the so-called sleeper passage frequencies, very little or no research into the structure and directions of generated waves has been carried out so far. The aim of the present paper is to investigate spatial distributions of ground vibrations generated at and around sleeper passage frequencies by high-speed trains travelling at conventional speeds, i.e. at speeds not exceeding the Rayleigh wave velocity in the supporting ground. It will be demonstrated that spatial distributions of generated Rayleigh waves in this case can change significantly, depending on train speeds, frequency components, and the distance between adjacent sleepers.
Theoretical background The main mechanisms of railway-generated ground vibrations are the wheel-axle pressure onto the track, the effects of joints in unwelded rails, the dynamically induced forces of carriage and wheel-axle vibrations excited by random roughness of wheels and rails, and variations in sleeper-ground contact from sleeper to sleeper. In this paper we limit our consideration to only the first of these generation mechanisms, which is always present, even for ideally flat rails and perfectly round wheels: the quasi-static pressure of wheel axles acting onto the track. This universal generation mechanism is responsible for the above-mentioned railway-generated ground vibration boom, and it is also responsible for ground vibrations generated at the sleeper passage frequencies considered in this paper. It should be noted that at other frequencies the unevenness of real wheels and rails usually plays the dominant role in the generation of ground vibrations (see e.g. [14,15,[18][19][20]). The quasi-static pressure generation mechanism results from the load forces applied to the railway track by each wheel axle. These load forces cause downward deflections of the track that move together with the loads, thus producing a wave-like motion along the track at the speed of the train, which results in a distribution of each axle load over all the rail sleepers that are within the track deflection distance. Each sleeper, in turn, acts as a vertical force applied to the ground during the time necessary for a deflection curve to pass through the sleeper. These vertical forces can be approximately considered as point forces applied to the ground's surface, assuming that the shortest wavelengths of generated ground vibrations are still essentially larger than the sleeper dimensions. To determine the track deflection curve, and thus the time dependence of the forces applied from each sleeper to the ground, the system of track and ground is modelled as an Euler-Bernoulli beam resting on a Winkler foundation. According to the earlier developed general theory [2][3][4][5][9][10][11], in order to calculate ground vibrations generated by a train due to the quasi-static pressure mechanism, one needs to take into account the superposition of waves generated by each elementary source of ground vibrations (sleeper) activated by the wheel axles of all carriages, with the time and space differences between sources (sleepers) being taken into account. In doing so, we take into account only the contribution of generated Rayleigh surface waves, because they make the main contribution to the ground vibrations causing environmental problems. Being interested in the fundamental features of railway-generated Rayleigh waves, in this paper, for the sake of simplicity, we will consider ground vibrations generated by a single axle load only. In this case, we can use the known expression, Equation (1), for the vertical vibration velocity vz of Rayleigh waves generated on the ground surface (z = 0) at the point of observation with coordinates x, y by a single axle load moving along the straight track (located on the x-axis) at constant speed v (see e.g. [9][10][11]). In this expression, ρn is the distance from the sleeper with number n to the point of observation, ω is the circular frequency, v is the train speed, d is the sleeper periodicity, cR is the Rayleigh wave velocity, and γ = 0.001-0.1 is a non-dimensional loss factor describing the attenuation of Rayleigh waves in soil.
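Schematically, Equation (1) has the following structure (a sketch consistent with the quantities just defined; the exact amplitude prefactor, which depends on the elastic parameters of the ground, is given in [9][10][11] and is omitted here):

$$ v_z(x, y, \omega) \;\propto\; P(\omega)\,\sum_{n}\frac{1}{\sqrt{\rho_n}}\,\exp\!\left(\frac{i\omega n d}{v}\right)\exp\!\left(\frac{i\omega\rho_n}{c_R}\right)\exp\!\left(-\frac{\gamma\omega\rho_n}{c_R}\right) $$

Here the factor exp(iωnd/v) accounts for the delay of the load arriving at sleeper n, exp(iωρn/cR) for the propagation of the Rayleigh wave from that sleeper to the observation point, 1/√ρn for the cylindrical spreading of surface waves, and the last exponential for soil attenuation via γ.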
The function P(ω) in Equation (1) represents the Fourier spectrum of the force acting from each sleeper onto the ground. The expression for P(ω), Equation (2), is given in [5,10,11]; in it, T is the axle load, cmin is the minimal phase velocity of track flexural waves propagating in the track/ground system (this velocity is related to cR via the parameters of the Winkler foundation expressed in terms of the ground elastic parameters, and it is usually larger than cR by 10-20%), β is a parameter dependent on the elastic properties of track and ground [10] and measured in m-1, and η is a non-dimensional track damping parameter. For relatively low train speeds, i.e., for v < cR, the 'dynamic' solution (2) for the force spectrum P(ω) goes over to the quasi-static one [10,11]. When train speeds increase and approach or exceed the minimal track wave velocity cmin, the spectra P(ω) become broader and larger in amplitude, and a second peak appears at higher frequencies. The analysis of Equation (1) shows that maximum radiation of ground vibrations takes place if the train speed v is larger than the Rayleigh wave velocity cR [2][3][4][5][9][10][11] (such trains are often called 'trans-Rayleigh trains'). Under this condition, a ground vibration boom takes place, i.e., ground vibrations are generated as quasi-plane Rayleigh surface waves propagating symmetrically at angles Θ = cos-1(cR/v) with respect to the track, and with amplitudes much larger than in the case of conventional trains. This phenomenon is similar to the well-known phenomenon of sonic boom from supersonic aircraft. It should be noted that for trans-Rayleigh trains these symmetrically propagating Rayleigh surface waves are generated equally well on tracks with and without railway sleepers, whereas for conventional (sub-Rayleigh) trains the presence of sleepers is paramount for the generation of Rayleigh waves. Without them, no propagating waves are generated by a force moving at a constant sub-Rayleigh speed within the framework of the quasi-static pressure generation mechanism; only quasi-static localised displacements are present in this case, moving along with the force. However, if the same force is moving along a railway track supported by discrete sleepers, Rayleigh waves are generated even at sub-Rayleigh load speeds. For a single moving load, efficient generation takes place mainly at and around the so-called sleeper passage frequencies fsp(m) = mv/d, where v is the load speed, d is the sleeper periodicity, and m = 1, 2, 3, ... . In this paper, we will be interested in spatial distributions of ground vibrations (Rayleigh surface waves) generated by a single moving load at and around sleeper passage frequencies. Numerical calculations and discussion In what follows, we describe the results of calculations of spatial distributions of railway-generated ground vibration fields vz(x, y, ω) (i.e. wave snapshots shown in arbitrary linear units) over a certain surface area with a railway track in the middle of it (located along the x-axis) for some interesting situations corresponding to frequency components at and around sleeper passage frequencies. Calculations have been carried out according to Equations (1)-(4) over a surface area of 40 × 40 m to obtain spatial distributions of ground vibrations (Rayleigh waves) generated at different frequency components f by a single axle load travelling along a straight railway track at speeds v.
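The radiation directions reported below can be anticipated from the phase structure of the sleeper sum in Equation (1). Treating each sleeper as a source fired with delay nd/v and observed in the far field at an angle Θ to the track, so that ρn ≈ ρ0 - nd cos Θ, adjacent sleepers radiate in phase when

$$ \frac{\omega d}{v}-\frac{\omega d\cos\Theta}{c_R}=2\pi m \quad\Longrightarrow\quad \cos\Theta=\frac{c_R}{v}\left(1-\frac{f_{sp}^{(m)}}{f}\right),\qquad f_{sp}^{(m)}=\frac{mv}{d}. $$

This is a sketch under the far-field assumption, but it is consistent with the results below: for m = 0 it reduces to the Mach angle cos Θ = cR/v of the ground vibration boom (possible only when v > cR); at f = fsp(m) it gives Θ = 90°, i.e. radiation normal to the track; and when |cos Θ| > 1 no plane-wave direction exists, so constructive interference is lost.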
The summation in Equation (1) has been taken over 120 sleepers in all cases for each point of observation, and the magnitude of the axle load has been considered to be the same in all cases. The results will be presented as greyscale contour plots. It will be assumed that the velocity of Rayleigh waves in the ground is cR = 80 m/s in all cases. Other relevant parameters are as follows: d = 0.7 m, β = 1.28 m-1, which is typical for British railway tracks, and γ = 0.001. It is instructive to start with the distinctive case of ground vibrations generated by a single axle load travelling at a trans-Rayleigh speed v > cR, which is associated with the generation of ground vibration boom by high-speed trains. Because the amplitudes of generated ground vibrations are the largest in this case, the obtained results can be used as a useful reference for comparison with ground vibrations generated at lower (conventional) train speeds. As can be seen from Fig. 1, the spatial distribution of ground vibrations (Rayleigh waves) shows a typical picture of plane waves generated symmetrically with respect to the track at Mach angles Θ = cos-1(cR/v). It should be noted that the Mach angles do not depend on frequency in the case under consideration. Therefore, for the same trans-Rayleigh speed, ground vibrations will be generated at the same Mach angles for any frequency components. The amplitudes of generated ground vibrations (in arbitrary linear units) can be estimated from the surface plot corresponding to Fig. 1 (not shown here for shortness). The vertical scale in this surface plot gives the range of vertical components of ground vibration velocities between -1.551 and 1.819, which can be compared with the amplitudes of ground vibrations generated at sub-Rayleigh speeds calculated below. As can be seen from Fig. 2, the spatial distribution of ground vibrations generated by a single load at the first sleeper passage frequency (fsp(1) = v/d = 20 Hz for the parameters considered here) represents plane waves propagating symmetrically away from the track in the directions normal to it. This is due to the fact that at sleeper passage frequencies all sleepers radiate in phase. The amplitudes of the generated ground vibrations here (in arbitrary linear units) can also be estimated from the surface plot corresponding to Fig. 2 (not shown here). The vertical scale in the surface plot in this case gives the range of vertical components of vibration velocities between -0.031 and 0.009, which demonstrates that the amplitudes of the generated Rayleigh waves in this case are roughly 85 times smaller than in the case of the ground vibration boom generated by a single load illustrated in Fig. 1. The distribution of ground vibrations generated by the same single axle load at the frequency component f = 21 Hz, which is slightly above the first sleeper passage frequency for this case, is shown in Fig. 3. One can see that the wave fronts in this case symmetrically change their directions of propagation from the directions normal to the track, as at the first sleeper passage frequency (see Fig. 2), to directions that are slightly deflected from the normals. The reason for this deflection is the phase differences between vibrating sleepers at this frequency. The amplitudes of the generated ground vibrations in this case are roughly the same as in Fig. 2. For the even higher frequency component f = 23 Hz, the ground vibrations generated by the same single axle load undergo further evolution, as shown in Fig. 4.
It can be seen that the directions of propagation of the wave fronts in this case deflect symmetrically even further from the directions normal to the track, and the amplitudes of the generated ground vibrations are roughly the same as in Figs. 2 and 3. It should be noted that the spatial distributions of ground vibrations in Figs. 3 and 4 look similar to that shown in Fig. 1 for the case of ground vibration boom. One should remember, though, that the associated physical mechanisms are entirely different. The Rayleigh wave radiation by a load travelling at sub-Rayleigh speeds illustrated in Figs. 3 and 4 is entirely due to the presence of sleepers. Therefore, the radiation angles of the generated waves depend here on frequency and on sleeper periodicity, as follows from the comparison of Figs. 3 and 4. Figure 6 shows the spatial distribution of ground vibrations generated by the same axle load at the lower frequency component f = 18 Hz. It can be seen that the ground vibrations in this case represent slightly distorted plane waves that propagate at larger angles with respect to the normal directions to the track. For the even lower frequency component f = 17 Hz, some changes in the spatial distribution of the generated ground vibrations become obvious, as can be seen from Fig. 7. Namely, the generated Rayleigh waves cease to be plane and become cylindrical, which indicates the end of constructive interference between waves generated by different sleepers for frequency components that are far enough from the sleeper passage frequencies. The amplitudes of the generated ground vibrations are further reduced in this case, as can be expected from consideration of the frequency spectra of generated ground vibrations at the sides of the frequency peaks associated with sleeper passage frequencies. Thus, summarising the above, it can be concluded that ground vibrations at and around the first sleeper passage frequency are generated as plane waves that steer around the normal directions to the track, either towards the direction of the load motion (for higher frequencies) or in the opposite direction (for lower frequencies). Such behaviour is very similar to the behaviour of the electromagnetic 'travelling-wave antennas' used in microwave technology. So far we have discussed ground vibrations generated at and around the first sleeper passage frequency fsp(1) = v/d = 20 Hz. It can be shown that for the second sleeper passage frequency fsp(2) = 2(v/d), which is equal to 40 Hz in the case under consideration, the situation is very similar. This is illustrated by Fig. 8, showing the spatial distribution of the ground vibrations generated by the same moving load at the frequency component f = 40 Hz. Similar behaviour can also be observed for the third and higher order sleeper passage frequencies. However, because of the limited widths of railway-generated ground vibration spectra (usually in the range of 0-100 Hz), only a small number of the lowest order sleeper passage frequencies can be important. Conclusions It has been demonstrated in this paper that, at sleeper passage frequencies, railway-generated ground vibrations represent plane waves propagating symmetrically away from the track in the directions normal to it. At frequencies slightly above or slightly below sleeper passage frequencies, generated ground vibrations still remain plane waves, but their radiation takes place at certain angles with respect to the normal directions to the track.
These radiation angles depend on the train speed, on the frequency, and on the sleeper periodicity. For frequencies slightly above sleeper passage frequencies, the above-mentioned radiation angles are counted towards the direction of the train (axle load) motion, whereas for frequencies slightly below sleeper passage frequencies, the radiation angles are counted towards the opposite direction. The above-mentioned properties of railway-generated ground vibrations at and around sleeper passage frequencies could be used for their more efficient observation and monitoring.
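The angular behaviour summarised in these conclusions follows from the in-phase condition sketched earlier and can be checked numerically. The following is a minimal sketch, not code from the paper; the load speed v = 14 m/s is an inferred value, following from fsp(2) = 2v/d = 40 Hz with d = 0.7 m.

```python
import numpy as np

c_R = 80.0  # Rayleigh wave velocity in the ground, m/s
d = 0.7     # sleeper periodicity, m
v = 14.0    # load speed implied by f_sp(2) = 2*v/d = 40 Hz (inferred)

def radiation_angle_deg(f, m=1):
    """Plane-wave radiation angle (degrees from the track axis) for the
    m-th order of constructive interference; returns None when |cos| > 1,
    i.e. when no plane-wave direction exists."""
    f_sp = m * v / d                      # sleeper passage frequency, Hz
    c = (c_R / v) * (1.0 - f_sp / f)      # in-phase condition for cos(angle)
    return None if abs(c) > 1.0 else float(np.degrees(np.arccos(c)))

for f in (20.0, 21.0, 23.0, 18.0, 17.0, 40.0):
    print(f"{f:5.1f} Hz -> {radiation_angle_deg(f, m=1 if f < 30 else 2)}")
# 20 Hz -> 90 deg: wave fronts normal to the track (Fig. 2)
# 21 and 23 Hz -> angles tilted towards the load motion (Figs. 3 and 4)
# 18 Hz -> tilted away from the load motion (Fig. 6)
# 17 Hz -> None: constructive interference lost, cylindrical waves (Fig. 7)
# 40 Hz -> 90 deg for m = 2: normal to the track again (Fig. 8)
```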
3,906
2022-02-02T00:00:00.000
[ "Engineering" ]
Quinone and Hydroquinone Metabolites from the Ascidians of the Genus Aplidium Ascidians of the genus Aplidium are recognized as an important source of chemical diversity and bioactive natural products. Among the compounds produced by this genus are non-nitrogenous metabolites, mainly prenylated quinones and hydroquinones. This review discusses the isolation, structural elucidation, and biological activities of quinones, hydroquinones, rossinones, longithorones, longithorols, floresolides, scabellones, conicaquinones, aplidinones, thiaplidiaquinones, and conithiaquinones. A compilation of the 13C-NMR spectral data of these compounds is also presented. Aplidium are clearly prolific producers of bioactive prenylated quinones and hydroquinones in the marine environment, and many other Aplidium species need to be investigated. Quinones Quinones can be derived by the oxidation of appropriate phenolic compounds, with 1,2-dihydroxybenzenes and 1,4-dihydroxybenzenes yielding ortho-quinones and para-quinones, respectively. Quinones can therefore be formed from phenolic compounds by either the acetate or the shikimate pathway, affording a catechol or quinol system. A range of quinone derivatives and related structures that contain a terpenoid fragment or a shikimate-derived portion are also widespread. For example, ubiquinones (coenzyme Q) have important biochemical functions in electron transport systems for respiration [13]. Figure 1 shows the quinones that occur in Aplidium. The simple linear prenylquinone (1) was isolated from Aplidium californicum [14]. Verapliquinones A-D (2-5) were isolated from an unidentified Aplidium sp. (Ascidiacea) collected off the Breton coast. Verapliquinones A/B and C/D were characterized as mixtures of E,Z-isomers by NMR spectroscopy. The NMR data revealed that verapliquinones B and D had a neryl group at C-2 instead of a geranyl group [15]. Davis and co-workers [16] reported a simple and versatile route to 1,4-benzoquinones based on the Claisen rearrangement and applied it to the synthesis of verapliquinones A and B, which had not previously been synthesized [16]. Chan and co-workers (2011) also reported verapliquinone A from A. scabellum [17]. Glabruquinone A (6) (or desmethylubiquinone Q2) and glabruquinone B (7) were isolated from Aplidium glabrum and synthesized. The main difference between 6 and 7 is that compound 6 contains a geranyl side chain instead of the neryl chain in 7 [18]. Glabruquinone A (6) displayed cancer-preventive activity in the anchorage-independent transformation assay against mouse JB6 P+ Cl41 cells transformed with epidermal growth factor, with an inhibition of the number of colonies (INCC50) value of 7.3 µM. The INCC50 values for 6 were 12.7, 17.5, and 50.5 µM against HCT-116, MEL-28, and HT-460 human tumor cells, respectively. Compound 6, at 10 µM, increased the UVB-induced p53 transcriptional activity of JB6 P+ Cl41 cells [18,19]. Glabruquinone A was also evaluated in vivo in mice inoculated with Ehrlich carcinoma tumors and found to inhibit tumor growth. Compound 6 inhibited the phenotype expression of HT-460, HCT-116, and SK-MEL-28 human tumor cells and induced apoptosis of these cell lines, as well as that of HL-60 and THP-1 tumor cells [20]. Hydroquinones Prenylhydroquinone (9), Figure 2, isolated from A. californicum, exhibited activity in vivo against P388 lymphocytic leukemia.
The potential cancer-protective properties of prenylhydroquinone (9) were also evaluated by employing a modified Ames assay for mutagenicity against Salmonella typhimurium; when prenylhydroquinone was added to the experiments, the mutagenic effects of the carcinogens were drastically reduced [14]. Compound 9 was able to form stable semiquinone radicals according to Cotelle and collaborators, and in the presence of glutathione, 9 was involved in a redox cycle with the consumption of oxygen. This process triggered the formation of free radicals and decreased the glutathione content, which is considered to be one of the major defenses against oxidative damage. Although not fully elucidated, the antitumor properties of 9 can be correlated with its redox properties and reactivity toward glutathione [22]. Prenylhydroquinone also inhibits superoxide anion production in rat alveolar macrophages and in the xanthine/xanthine oxidase system. The antioxidant activity of 9 may be attributed to a direct reaction with the superoxide anion rather than to an enzymatic inhibition or a membrane signal transfer [23]. Prenylhydroquinone (9) and geranylhydroquinone (10), isolated from Aplidium sp., have exhibited antiproliferative activity (IC50: 41 and 9.5 μM, respectively) in a P388 murine leukemia cell line. Geranylhydroquinone (10) at doses up to 30 μM was inactive against the solid tumor cell lines A375 (human melanoma), A549 (human lung), HepG2 (human hepatic), and HT-29 (human colon), as well as normal human liver cells (WRL-68) [24]. Both prenylhydroquinone (9) and geranylhydroquinone (10) exhibited anti-inflammatory activity in an in vitro assay with activated human peripheral blood neutrophils by inhibiting superoxide production. Compound 10 was also tested in the DPPH radical scavenging assay and was considered inactive [24]. Geranylhydroquinone exhibited cytotoxicity against leukemia, Rous sarcoma, and mammary carcinoma cell lines in vivo [33]. Cytotoxic activity was also observed for 10 against P-388 leukemia (IC50 0.034 µg/mL) and KB human epidermoid carcinoma cells (IC50 4.3 µg/mL). Geranylhydroquinone (10) has also demonstrated antibacterial activity, with a minimum inhibitory concentration (MIC) of 64 µg/mL against Staphylococcus aureus and S. faecalis, and an MIC of 128 µg/mL against Serratia marcescens. The minimum bactericidal concentrations (MBCs) were determined to be 2, 1, and >4 µg/mL, respectively [27]. Additionally, compound 10 was more potent than two standard antioxidants in terms of its inhibitory effects on lipid peroxide formation in rat liver microsomes and on soybean 15-lipoxygenase [34]. Geranylhydroquinone (10) and hydroxydiprenylhydroquinone (11) displayed significant cytotoxicity against four tumor cell lines, in particular against the P-388 mouse lymphoma suspension culture (IC50 0.81 and 4.5 µM, respectively), indicating that the hydroxylation of the prenyl chain in 11 may result in a marginal decrease in cytotoxicity [26]. Methoxyconidiol (16) was isolated from A. aff. densum [35] and its effects were tested on sea urchin embryos during cell division; compound 16 disturbed the mitotic spindle assembly, leading to cell cycle arrest during the metaphase/anaphase transition [36]. The antibacterial activity of 16 against Escherichia coli and Micrococcus luteus was also evaluated by the microtiter broth dilution method; it had no antibacterial effect [37].
Methoxyconidiol inhibited egg division, with IC50 values of 0.80 μM for Paracentrotus lividus eggs and 4.30 μM for Sphaerechinus granularis eggs. When tested against the human carcinoma cell lines MCF7 (breast), PA1 (ovary), PC3 (prostate), and CEM-WT (acute lymphoblastic leukemia), as well as L-929 murine immortalized cells and normal human fibroblasts, 16 was non-toxic to all these cell lines, with an IC50 > 100 µM [35,37]. Methoxyconidiol exerted antimitotic action during the first division of sea urchin embryos, and its mechanism of action may be mediated by the disruption of microtubule dynamics [29]. Hence, Simon-Levert et al. 2010 [37] concluded that methoxyconidiol was ineffective against human cancer cells but effective in sea urchin cells. This finding could be explained by a difference between these two types of cells in membrane permeability and/or intracellular transport of 16. The compound 2-geranyl-6-methoxy-1,4-hydroquinone-4-sulfate (17) was isolated from A. scabellum and inhibited superoxide production by PMA-stimulated human neutrophils in vitro, with an IC50 of 21 μM. To determine the effect of the different treatments on cell survival, drug-treated neutrophils were stained with fluorescent markers for necrosis (propidium iodide) and apoptosis (Annexin V-FITC) and analyzed by flow cytometry. Treatment with 17 had no effect on neutrophil viability, but the results indicated that 17 did inhibit neutrophil superoxide production [17]. Rossinones Rossinone B (20) and its derivatives (Figure 3) share a linearly fused 6,6,5-ring core, an extremely rare skeleton known from only three plant-derived natural products. Rossinone B was isolated for the first time from an unidentified Antarctic species of Aplidium and then from A. fuegiense [24,38]. The isolation of 20 therefore extended the evolutionary range of the requisite biosynthetic terpene cyclase(s) from the plant kingdom to Animalia [24]. To date, five rossinones (18-22) have been discovered. Rossinones A (18) and B (20) exhibited anti-inflammatory activity in an in vitro assay with activated human peripheral blood neutrophils by inhibiting superoxide production. However, in the DPPH radical scavenging assay, 18 and 20 were found to be inactive (at doses up to 30 μM), indicating that these rossinones are considerably less effective as superoxide scavengers than as suppressors of superoxide production by neutrophils. Rossinones A and B have exhibited selective antiviral activity against the DNA virus HSV-1, versus the RNA virus PV-1, with both compounds exhibiting antiviral activity at 2 μg/disk. Both compounds also exhibited antimicrobial activities against Bacillus subtilis and the fungus Trichophyton mentagrophytes [24]. Longithorones, Longithorols and Floresolides Longithorones (Figure 4) and longithorols (Figure 5) are unique farnesylated quinones/hydroquinones isolated from A. longithorax (Monniot). Their complex structures are characterized by the presence of a metacyclophane and/or paracyclophane system built from a farnesyl quinone or hydroquinone, formally by the rarely encountered cyclization of farnesyl quinones/hydroquinones [12]. One special characteristic of these compounds is their atropisomerism, which is caused by the restricted rotation of their macrocyclic rings.
Eleven compounds have been isolated (longithorones A-K), including monomeric prenylated quinones (24-26), dimeric prenylated quinones (23 and 27-31), and the cyclofarnesylated quinones longithorones J and K (32 and 33) [40,41]. The dimeric longithorones have been proposed to originate from both intra- and intermolecular Diels-Alder reactions. Fusion of the two farnesyl-quinone units can be envisioned as arising via a Diels-Alder cycloaddition of suitably unsaturated precursors, whereas rings B and C could arise by a transannular Diels-Alder reaction. The co-isolation of the monomers provides some support for this proposal [12]. Longithorone A (23) displayed cytotoxicity against P388 murine leukemia cells with an IC50 of ~10 µg/mL [42]. In addition, longithorone J (32) was tested for cytotoxicity against the cell lines SHSY5Y (human neuroblastoma), HEK293T (SV40 T-antigen-transformed human embryonal kidney cells) and A549 (human non-small cell lung carcinoma). Compound 32 did not exhibit any cytotoxicity in the A549 cell assay when tested at 2 and 20 µg/mL. However, 32 displayed minimal activity at 20 µg/mL against the SHSY5Y and HEK293T cells, with cell deaths of 28% and 16%, respectively [43]. Zacarian and collaborators synthesized the highly rigid macrocyclic carbon skeleton of longithorone C (25) by exploiting quadrupolar interactions as a synthetic strategy [44]. Although cyclophanes have been extensively synthesized and evaluated because of their unique physical and chemical properties, examples of the isolation and total synthesis of cyclophane-containing natural products are rare and challenging. This has attracted the interest of synthetic chemists in search of new and efficient strategies for syntheses with a reduced number of steps [45,46]. Longithorols A (34) and B (35) are prenylated paracyclophane and metacyclophane hydroquinones, and longithorols C (36) and D (37) are para-substituted cyclofarnesylated hydroquinones. The hydroquinones (longithorols A-D, 34-37) were also isolated from A. longithorax; moreover, 34 and 35 were isolated as their pentaacetate forms because of their instability [47,48]. Floresolides are monomeric cyclofarnesylated hydroquinones with an endocyclic ε-lactone. They are members of the longithorone/longithorol class of meroterpenes. Floresolides A-C (38-40) (Figure 6) have been isolated from an Aplidium sp. collected in Indonesia. All of these floresolides exhibited moderate cytotoxicity against KB tumor cells [49]. The hydroquinone lactone core of floresolide B (40) was synthesized by Briggs and Dudley employing a ring-closing metathesis approach [50]. The total synthesis of racemic floresolide B was reported by Nicolaou and Xu, who used an olefin metathesis-based strategy for the formation of the macrocyclic lactone portion [51,52]. Scabellones Chan et al., 2011 [17], described the isolation of the pseudodimeric meroterpenoids scabellones A-D (41-44) (Figure 7). Scabellone B (42) was able to inhibit superoxide production by PMA-stimulated human neutrophils in vitro, with an IC50 of 125 μM. In contrast, 42 had no effect on neutrophil viability. Scabellone B (42) was also evaluated against the neglected-disease parasite targets Trypanosoma brucei rhodesiense, T. cruzi, Leishmania donovani, and Plasmodium falciparum. This compound exhibited selectivity towards only P.
falciparum (a K1 chloroquine-resistant strain), with an IC50 of 4.8 μM, and demonstrated poor cytotoxicity (in an L6 rat myoblast cell line, IC50: 65 μM). The core benzo[c]chromene-7,10-dione scaffold of scabellones A-D is rare among natural products and has previously been associated with antiproliferative or apoptosis-inducing biological properties. Conicaquinones, Aplidinones, Thiaplidiaquinones and Conithiaquinones Two cytotoxic terpene quinones, the isomeric prenylated quinones conicaquinones A and B (45, 46) (Figure 8), which have an unusual 1,1-dioxo-1,4-thiazine ring added to their quinone moiety, have been isolated from the Mediterranean ascidian A. conicum. Both compounds were evaluated in vitro on rat glioma (C6) and rat basophilic leukemia (RBL-2H3) cell lines and demonstrated selectivity against rat glioma cells [53]. A follow-up study on this species, conducted by the same research group, resulted in the isolation of three new geranylated quinones, aplidinones A-C (47-49) [54], and two new prenylated benzoquinones, designated thiaplidiaquinones A and B (50, 51), with an unprecedented tetracyclic skeleton in which a chromenol unit is attached to a p-benzoquinone ring condensed to a 1,4-thiazine-dioxide ring. Thiaplidiaquinones A and B were investigated for their antitumor activity. Both compounds were able to induce apoptosis in Jurkat cell lines derived from a human T lymphoma through the overproduction of reactive oxygen species (ROS), which mediated the collapse of the mitochondrial potential (ΔΨm). Thiaplidiaquinones 50 and 51 exhibited cytotoxic activity against the human leukemia T-cell line Jurkat, with an IC50 of approximately 3 μM [55]. The total synthesis of thiaplidiaquinone A (50) was described by Carbone [56], while the biomimetic synthesis of thiaplidiaquinones A and B was developed by Copp around the same time [57]. The lack of a 13C-NMR resonance for the group of unsaturated carbon atoms, which was replaced by a deshielded value (δC 70.9), as in 4 [15], suggests the presence of a hydroxyl group. This was also observed for compounds 11 and 12. Compounds 15 and 16 presented a substituted hydroquinone nucleus, together with a 3,4-disubstituted 1-methylcyclohexene ring and a 1-hydroxyisopropyl unit, which was shown to be attached to C-6' of the cyclohexene ring by the ROESY spectrum and an NOE experiment, respectively. The difference between 15 and 16 was the presence of a methoxy group in 16 [31,35]. The 1H-NMR spectrum of compound 17 showed resonances attributable to two aromatic protons at δH 6.75 (1H, d, J = 2.1 Hz, H-5) and δH 6.54 (1H, d, J = 2.1 Hz, H-3), two olefinic protons at δH 5.29 (1H, t, J = 6.6 Hz, H-2') and δH 5.09 (1H, t, J = 6.6 Hz, H-6'), three moderately deshielded methylene signals (δH 3.27, 2.08, and 2.00), one methoxyl signal at δH 3.83, and three allylic methyl singlets at δH 1.69, 1.65, and 1.58. Direct comparison of the 1H and 13C chemical shifts of the aromatic ring signals of 17 with those of a synthetic derivative of 17 (a compound with a hydroxyl group at C-4 instead of the sulfate) allowed Chan and co-workers to identify an upfield shift of C-4 and downfield shifts of C-3 and C-5 in the 13C-NMR spectrum, and downfield shifts of H-3 and H-5 in the 1H-NMR spectrum, consistent with the placement of the sulfate group at C-4 [17].
The 13C-NMR data for rossinone A (18), Table 3, allowed the identification of a triprenylated (farnesyl) hydroquinone bearing substitution in the terminal prenyl unit. An α-hydroxy ketone group (δC 201.8) and a carbinol resonance at δC 69.8 in the side chain of 18 were established [23]. Observed differences in chemical shifts could be explained by the diverse steric influence of the hydroxyl group in these two compounds; the structure was confirmed by NOE experiments [38]. The 1H-NMR spectrum of 5,6-epoxy-rossinone B (22) revealed the lack of the ring B double bond present in 20, which is replaced by an epoxide ring in 22. H-6 was attributed to a singlet at δH 3.80. In the 13C-NMR spectrum, the values at δC 61.4 and δC 55.7 were assigned to C-6 and C-5, respectively. Additionally, due to the absence of the conjugated double bond, the C-4 quinone carbonyl in 22 was shifted downfield (δC 193.9) with respect to 20 (δC 185.0) [38]. The complex and unprecedented structure of longithorone A (23) was determined by X-ray crystal diffraction, while the enantioselective biomimetic synthesis of longithorone A was accomplished by Layton and coworkers [59]. The 1H and 13C-NMR data of 23 are included in the compiled tables. Longithorones E (27) and F (28) are atropisomers with respect to the para-disubstituted benzoquinone ring [40]. In turn, longithorone F (28) differs from longithorone G (29) in its C-2', C-3', and C-10' stereochemistry. The NOESY data for 29 revealed correlations between the aldehyde hydrogen H-13 and H-5 and H-2', suggesting that these protons are all on the same face of the molecule. These correlations were not, however, observed in the NOE experiment for 28. The farnesyl chains in longithorones J (32) and K (33) were characterized by three shielded olefinic methyl resonances, two of which (δC 14.9 and δC 16.9) were assigned an E-geometry and the other (δC 22.8) a Z-geometry. The 1H and 13C-NMR spectra of longithorone J include an oxymethine proton at δH 4.84 (δC 71.5), in contrast to the presence of two ketone resonances at δC 200.7 and δC 196.8 in longithorone K (33). The absolute stereochemistry of longithorone J (32) has been determined by the advanced Mosher method, while that of longithorone K (33) was suggested by comparison with 32 and on biosynthetic grounds [43]. A comparison of the 1H and 13C-NMR data for longithorols A-D (34-37) (Table 5) with those of the longithorones reveals that, in the longithorols, signals for two substituted 1,4-hydroquinones (C-16 to C-21 and C-16' to C-21') are present instead of signals for two substituted 1,4-quinones. In addition, an acetoxymethine group can be recognized by the signals at δH 6.61 (d, J = 10.0 Hz) and δC 67.1 (for example, in 34), which replace the methylene group corresponding to C-1 in the longithorones [47]. The absolute stereochemistry of longithorol C (36) has been determined by the advanced Mosher method [48]. The floresolide skeleton can be recognized by the presence of an endocyclic ε-lactone, with an α,β-epoxy group (δC 63.7 and 59.1 in 38; δC 62.5 and 58.3 in 39) or a double bond at the C-2/C-3 position (δC 137.8 and 136.1 in 40) (Table 5). The terminal methylene group at the C-11 and C-12 positions is also typical of this structural family. Floresolide 39 has a primary alcohol and a fully substituted benzene ring, including two bromine atoms, in its structure. The absolute stereostructure of compound 40 was confirmed by X-ray crystal diffraction [49].
Conclusions The studies presented in this review reveal the importance of the prenylated quinone and hydroquinone metabolites, which have important biological activities and occur frequently in ascidians of the genus Aplidium. These non-nitrogenous metabolites, mainly prenyl quinones or hydroquinones, which can be either linear or cyclic compounds, such as the rossinones, longithorones, longithorols, floresolides, scabellones, conicaquinones, aplidinones, thiaplidiaquinones, and conithiaquinones, are examples of meroterpenes. The evaluated compounds mainly presented cytotoxic, anti-inflammatory, and antimicrobial activities. Furthermore, ascidians (tunicates) are a promising source of new bioactive compounds from marine environments [61][62][63]. The Aplidium genus is able to produce meroterpenes with a large range of structural variety, of which the longithorone series is notable for containing the most complex structures, with metacyclophane and paracyclophane scaffolds. The complex and elaborate structures of some of these meroterpenoids have led several research groups to engage in the total synthesis of specific compounds for their structural confirmation and to clarify biosynthetic pathways. The 13C-NMR data of a given compound are an important tool for natural products research because it is sometimes possible to propose the structure of a novel natural compound by performing comparisons with data for known compounds. For example, the structures of the rossinones were deduced by comparing their 2D NMR spectra with literature data reported for rossinone B. Therefore, considering the enormous diversity of their chemical structures and the biological potential of the prenyl quinones and hydroquinones found in Aplidium, it is important to conduct further studies of this genus using a multidisciplinary approach.
4,711
2014-06-01T00:00:00.000
[ "Biology", "Chemistry" ]
Drug-transporter mediated interactions between anthelminthic and antiretroviral drugs across the Caco-2 cell monolayers Drug interactions between antiretroviral drugs (ARVs) and the anthelminthic drugs ivermectin (IVM) and praziquantel (PZQ) were assessed by investigating their permeation through Caco-2 cell monolayers in a transwell. The impact of the anthelminthics on the transport of ARVs was determined by assessing the apical to basolateral (AP → BL) [passive] and basolateral to apical (BL → AP) [efflux] directions alone and in the presence of an anthelminthic. The reverse was conducted for the assessment of the influence of ARVs on the anthelminthics. Samples from the AP and BL compartments were taken at 60, 120, 180 and 240 min and quantified either by HPLC or by radiolabeled assay using a liquid scintillation counter for the respective drugs. Transepithelial electrical resistance (TEER) was used to assess the integrity of the monolayers. The amount of compound transported per second (apparent permeability, Papp) was calculated for both AP to BL (PappAtoB) and BL to AP (PappBtoA) movements. Samples collected after 60 min were used to determine the efflux ratio (ER), the quotient of secretory permeability and absorptive permeability (PappBL-AP/PappAP-BL). The reverse (PappAP-BL/PappBL-AP) constituted the uptake ratio. The impact of SQV, EFV and NVP on the transport of both IVM and PZQ was investigated. The effect of LPV on the transport of IVM was also determined. The influence of IVM on the transport of SQV, NVP, LPV and EFV, as well as the effect of PZQ on the transport of SQV, was also investigated, and a two-tailed p value of <0.05 was considered significant. IVM significantly inhibited the efflux transport (BL → AP movement) of LPV (ER: 6.7 vs. 0.8, p = 0.0038) and SQV (ER: 3.1 vs. 1.2, p = 0.00328), and increased the efflux transport of EFV (ER: 0.7 vs. 0.9, p = 0.031), suggesting the possibility of drug-transporter mediated interactions between the two drugs. NVP increased the efflux transport of IVM (ER: 0.8 vs. 1.8, p = 0.0094). The study provides in vitro evidence of potential interactions between IVM, an anthelminthic drug, and the antiretroviral drugs LPV, SQV, NVP and EFV. Further investigations should be conducted to establish the possibility of in vivo interactions. Background Sub-Saharan Africa still leads in the prevalence of human immunodeficiency virus infection and acquired immune deficiency syndrome (HIV/AIDS), malaria, tuberculosis and helminthic infections. HIV-positive patients are thus likely to be co-infected with any of these diseases, and the co-administration of anthelminthics and ARVs is not uncommon [1]. This can give rise to drug-drug interactions (DDIs), which are likely to alter the therapeutic outcome of each of the drugs. These interactions may result in an increase or decrease in the plasma concentrations of the drugs, thereby increasing the risk of toxicity or the development of resistance, amongst other adverse effects. This may in some instances require dosage adjustments [2][3][4]. Knowledge of any potential interactions between these drugs is therefore important in the optimization of HIV therapy [5,6]. The potential mechanisms of the drug interactions include modulation by drug transporters (both efflux and influx) and inhibition or induction of drug-metabolizing enzymes [6][7][8][9][10][11][12][13][14]. Several drugs are substrates and/or inhibitors of these efflux transporters and metabolic enzymes, especially CYP3A4 [15,16].
Among the ARVs, protease inhibitors (PIs) are known to be substrates of P-gp, ABCC1 and ABCC2 [3,12,17,18]. Saquinavir (SQV) and lopinavir (LPV) are substrates and inhibitors of drug transporters [19,20]. Nucleoside reverse transcriptase inhibitors (NRTIs) and non-nucleoside reverse transcriptase inhibitors (NNRTIs) have also been characterized as substrates for drug transporters [21][22][23][24]. Newer NNRTIs such as etravirine have been reported to induce ABC transporters, especially BCRP/ABCG2, and therefore have the potential for drug interactions with co-administered drugs that are substrates of these transporters [25]. Rilpivirine has also been reported to induce and inhibit several relevant drug-metabolizing enzymes and drug transporters, albeit with less potential for drug interactions owing to low plasma concentrations [26]. PZQ and IVM are some of the most widely used anthelminthic drugs. PZQ is mainly used to treat schistosomiasis, whereas IVM is used in the treatment of lymphatic filariasis [27,28]. Both diseases are endemic in developing countries, with schistosomiasis afflicting over 200 million people [29,30] and lymphatic filariasis having a global prevalence of over 120 million people, with an estimated 1.3 billion at risk [31,32]. IVM has been characterized as a substrate and inhibitor of P-gp [33][34][35]. IVM interacts with P-gp modulators [36,37], and the inhibition of P-gp has been described as a potential strategy to counter the emerging resistance to IVM [38]. PZQ has not been conclusively characterized with regard to drug-transporter specificity [39,40], but from the available data it is known to be metabolized by cytochrome P450 (CYP) isoenzymes, mainly CYP2B1, CYP3A4, CYP2C9 and CYP2C19. It therefore has the potential to interact with drugs that are inhibitors or inducers of these enzymes [41,42]. Enzyme inducers such as carbamazepine, phenytoin and rifampicin reduce PZQ plasma levels, while ketoconazole, an enzyme inhibitor, significantly increases its concentration [41,43]. Previous research on the transport of antiparasitic drugs across Caco-2 cell monolayers (CCM) has also reported that PZQ is an inhibitor of P-gp without being a substrate [40]. However, it is not clear how the investigators concluded that this was specifically mediated by P-gp using the CCM model, since CCM express several other transporters [44]. In research conducted earlier in our laboratories, we demonstrated that PZQ is neither a substrate nor an inhibitor of P-gp in CEM T-lymphoblastoid cells [45]. The majority of drugs in use are orally administered, and their absorption from the gastrointestinal tract is pivotal for their therapeutic success [46]. The ability of a drug to cross the intestinal wall in order to reach the portal circulation is to a large extent dependent on its permeability coefficient [47]. The Caco-2 cell model provides a simple and reliable method to assay the in vitro permeability of drugs [48][49][50]. The permeability of drugs through the CCM correlates well with in vivo absorption in humans, thus making the CCM an invaluable analytical tool in the screening of orally administered drugs [51][52][53][54]. CCM are derived from a human colonic adenocarcinoma and have morphological as well as functional similarities to intestinal (absorptive) enterocytes [44,[55][56][57].
They have adherent properties and therefore form a monolayer with tight junctions, which prevent paracellular diffusion so that drugs or other solutes can only pass through the cell, as illustrated in the cartoon (Fig. 1). This additionally results in the development of cell polarity: the efflux transporter P-glycoprotein (P-gp) has been shown to be localized on the apical brush border, approximately 20 microns above the base of the cells [58], while certain efflux transporters such as the multidrug resistance-associated proteins (MRPs) are expressed on the basolateral side of the monolayer [59,60]. Apart from P-gp and MRPs, CCM express a wide array of transporters (efflux and influx) as well as metabolic enzymes, thus making them suitable for the study of drug-drug interactions based on the permeability of drugs through the monolayers [20,44,55,56,61,62]. The permeation of drugs through the monolayers allows the study of the major absorptive mechanisms for drugs, such as passive transcellular transport and carrier-mediated influx as well as efflux mechanisms [63]. In this system, the passage of drugs from the apical (AP) to the basolateral (BL) compartment is attributable to passive diffusion and occurs at a lower rate, whereas the BL to AP passage occurs by active transport, presumed to be mediated by transporters [20,44,64]. Carrier-mediated transport is a saturable process, which raises the possibility that, when two drugs are co-administered, they may compete for a transporter (influx or efflux), leading to drug interactions and to lower or higher exposure than when either drug is dosed alone [20,65]. The cells are grown on a porous membrane and form differentiated monolayers after about 20 days. The monolayers are polarized, and their evaluation can be performed by measuring the transepithelial electrical resistance (TEER) using a volt-ohm meter equipped with electrodes placed in the upper and lower chambers of the insert. TEER increases with time in culture, reaching a maximum in about 10-15 days [66][67][68], and depends on the number of cells seeded and on the surface area of the filter. TEER values range from 150 to 1600 ohm.cm2, as compared to the human ileum, which is about 50 ohm.cm2 [69]. To date there is scant information in the literature regarding interactions between ARVs and anthelminthics, despite the likelihood of their co-administration in tropical regions owing to their geographic overlaps. The aim of the study was therefore to assess the potential DDIs between ARVs and the anthelminthic drugs PZQ and IVM. In this study, CCM were used to evaluate potential interactions between ARVs and anthelminthics. The impact of the anthelminthics on ARV transport was determined by assessing the AP → BL and BL → AP directions alone and in the presence of an anthelminthic, and the reverse was conducted for the assessment of the influence of ARVs on the anthelminthics. Quantification was performed using an HPLC method described earlier [45] or a radiolabeled assay using a liquid scintillation counter. PZQ and IVM were used as prototypes of the anthelminthics; the PIs saquinavir (SQV) and lopinavir (LPV) and the NNRTIs efavirenz (EFV) and nevirapine (NVP) served as prototypes of the ARVs. The impact of SQV, EFV and NVP on the transport of PZQ, as well as the impact of PZQ on the transport of SQV, was determined. The influence of SQV, NVP, LPV and EFV on the transport of IVM, and of IVM on the transport of SQV, NVP, LPV and EFV, was also investigated.
Equipment The HPLC setup consisted of a Dionex (Dionex Softron GmbH, Germany) HPLC system with a P680 pump, an ASI-100 automated sample injector and a UVD 1704 detector. A 250 μl injector with a 20 μl loop was used. Reversed-phase liquid chromatography was carried out using a Hypurity™ C18 analytical column, 5 μm x 4.6 mm (Thermo Electron Corporation, Runcorn, UK; 22105-154630). A column guard (Thermo Electron 60140-412) was used to protect the analytical column. The ultraviolet detector was set to monitor the 215 nm wavelength. A Packard Tri-Carb Liquid Scintillation Analyzer model 1900 TR (Packard Instrument Co.) was used for radioactivity counting. A Millicell Electrical Resistance System (Fisher Scientific, Leicestershire, UK) was used for measuring the transepithelial electrical resistance (TEER). Transwells (six-well transwell polycarbonate tissue-culture-treated plates, 4.67 cm2, 24 mm diameter, 0.4 μm pore size) were purchased from Corning Life Sciences (Costar, High Wycombe, Bucks, UK). Materials The human colon adenocarcinoma cell line Caco-2 was purchased from the European Collection of Cell Cultures (ECACC No. 286010202), and the cells were counted using a NucleoCounter (ChemoMetec, Denmark) cell counter. SQV was provided by Roche Discovery (Groningen, Netherlands). All the other chemicals used were of analytical or HPLC grade. Deionised water, used to prepare the solutions and the mobile phase, was purified in an Elga DV 25 Purelab Option system (Elga, High Wycombe, Bucks, UK). Caco-2 cell lines Cell culture Caco-2 cells were cultured in Dulbecco's Modified Eagle Medium (DMEM) supplemented with fetal bovine serum (FBS) [15% v/v]. The cells were grown and routinely seeded in tissue-culture-treated 162 cm2 flasks in a humidified chamber (37°C, 10% CO2 incubator) and harvested by regular trypsinization. The medium was changed every 2 to 3 days until confluence of the cell monolayer was achieved. Trypsinization involved decanting the medium, washing twice with 6 ml of Hanks' Balanced Salt Solution (HBSS), and detaching the monolayer by the addition of 4 ml of trypsin-EDTA. The cells were then incubated for 10 min. The resulting suspension was centrifuged (2000 g × 5 min, 4°C) and the supernatant removed. The resulting pellet was then re-suspended in 20 ml of fresh DMEM (+15% FBS), 10 ml was transferred to each of two new flasks, and each flask was made up to 20 ml. Storage of Caco-2 cells The cells were trypsinized as described earlier after attaining confluence. The pellets were then re-suspended in DMEM (+15% v/v FBS), counted using a NucleoCounter and centrifuged (2000 g × 5 min, 4°C). The cells were then re-suspended in warm FBS (FBS + 10% DMSO), mixed thoroughly and made up to a concentration of 5 × 106 cells per ml. One ml of the cell suspension was then transferred to pre-labelled 1.5 ml cryovials and frozen at -80°C for use as and when required. The viable semi-frozen cells were thawed by placing the cryovials rapidly in a waterbath (37°C), or by simply holding the vials in the hands for a few minutes, and re-suspending the pellets in 9 ml DMEM (+15% FBS), followed by culturing. Determination of drug transport in Caco-2 cells Cell seeding The monolayers used were between passages 20 and 65 after 15 days of growth. Each experiment was performed in duplicate using two six-well transwell culture plates. The cells were trypsinized as described earlier, and after centrifugation the pellet was re-suspended in fresh DMEM (+15% FBS) and the cells were counted using a cell counter.
Enough DMEM was added to give a cell count of 2 × 106 cells per ml, and the cells were seeded on the transwell culture plates at a density of 2 × 104 cells/cm2 (~100,000 cells per well, since the insert membrane growth area = 4.67 cm2). The plates were then incubated at 37°C and 10% CO2 in a humidified chamber, and the medium was changed every 2-3 days by aspirating with a suction pump and replacing with an equal volume of DMEM. Transport experiments were conducted 15 to 20 days after seeding. The TEER across the cell monolayers was monitored using a Millicell-ERS in order to assess cell monolayer integrity, and the monolayers were considered appropriate for the experiment when the TEER values were above 500 ohm.cm2 [49,70]. Transport experiments Prior to the transport studies, each monolayer was washed and equilibrated with the transport medium (DMEM without FBS). The medium was removed from all AP and BL compartments of the transwells and replaced with 2 ml of the transport medium (DMEM alone) in both compartments, and the plates were equilibrated for 1 h (37°C, 10% CO2 incubator), after which the TEER was re-assessed and recorded. The medium was then removed from both compartments and replaced with an equal volume of pre-warmed medium containing the compound of interest at the appropriate concentration. For the AP → BL transport, 2 ml of medium containing the desired drug was placed in the AP chamber and 2 ml of the medium alone in the BL chamber, whereas for the BL → AP transport, 2 ml of medium containing the drug was placed in the BL chamber and 2 ml of medium in the AP chamber [71]. The effect of the second drug was then assessed by adding medium containing both the original drug and the drug under study to the AP side for the AP → BL transport, with medium containing the original drug alone in the BL chamber, and vice versa for the BL → AP transport. Transport in each direction was done in triplicate. The transwell plates were then incubated (37°C, 10% CO2 incubator), and 100 μl samples from the AP and BL compartments were taken at 60, 120, 180 and 240 min and quantified either by the HPLC method described earlier [45] or by the use of a liquid scintillation counter, depending on the drug under study. The HPLC method involved liquid-liquid extraction followed by ultra-high performance liquid chromatography using a Hypurity C18 column and ultraviolet detection set at a wavelength of 215 nm. The mobile phase consisted of ammonium formate, acetonitrile and methanol (57:38:5 v/v). Separation was facilitated via isocratic elution at a flow rate of 1.5 ml/min. For the HPLC assays, the concentration used for each drug was 20 μg/ml. The concentrations that offered the best detection were selected for the radiolabeled assay. Table 1 summarizes the type of assay and the various concentrations used in each experiment. HPLC was used in the assay of interactions between SQV/PZQ, IVM/SQV, EFV/PZQ and NVP/PZQ, while the radiolabeled assay was utilized in the investigations of the interactions between PZQ/SQV, SQV/IVM, NVP/IVM, IVM/NVP, LPV/IVM, IVM/LPV, EFV/IVM and IVM/EFV. The integrity of the CCM during the experiment was monitored by measuring the TEER at the beginning (0 min) and the end of the experiment (240 min). Apparent permeability The results were expressed as the apparent permeability coefficient (Papp, unit: cm s-1), the amount of compound transported per second. Papp values were calculated for both the AP to BL (PappAtoB) and the BL to AP (PappBtoA) movement of the compound.
Papp was calculated using the following equation [52,72]: Papp = (dQ/dt)/(A × C0), where dQ/dt is the steady-state flux (dpm s-1 or μmol s-1), A is the surface area of the filter (cm2), and C0 is the initial concentration in the donor chamber (dpm litre-1 or μM). The quotient of the secretory permeability and the absorptive permeability (PappBL-AP/PappAP-BL) constitutes the efflux ratio, while the reverse (PappAP-BL/PappBL-AP) is the uptake ratio [71]. This calculation requires that the receiver concentration should not exceed 10% of the donor concentration, and it was therefore only applied to the samples taken at 60 min. Permeability is a saturable process and depends on several physiological conditions, such as accumulation, pH, and lipophilicity (sink conditions), which have an effect on Papp values for incubations over longer periods of time [71,73]. In order to assess the potential interactions, the ER of each drug alone was compared to that in the presence of the second drug under investigation. Samples of both drugs collected after 60 min were used to investigate the trends over a period of 4 h. Statistical analysis The results were presented as the mean ± standard deviation (SD) of three experiments, with 95% confidence intervals for differences between the means where appropriate. The analysis of the transport results obtained after 60 min was performed using a two-way analysis of variance (ANOVA). A two-tailed p value of <0.05 was accepted as significant. Results The results of the transport experiments are summarized in Table 2. Discussion The main aim of this study was to establish the potential interactions between the anthelminthic drugs IVM and PZQ and ARVs by investigating their transport through CCM. PIs and NNRTIs were selected as they are widely used in the management of HIV and may be co-administered with anthelminthics in the mass treatment of helminthic infections and HIV in third-world countries because of the geographic overlap of the two diseases. In addition, PIs and NNRTIs have both been characterized with regard to substrate specificity for both CYP450 enzymes and drug transporters [19,24,74,75]. SQV and LPV were selected as prototypes of PIs, while EFV and NVP were prototypes of NNRTIs. The CCM express a wide range of transporters, making them suitable for the study of drug-drug interactions, since drug transporters play an integral role in the disposition of drugs and the corresponding susceptibility to drug interactions [25,44,[54][55][56]. The main findings from our study provide evidence that IVM influences the transport of SQV, LPV and EFV, whereas NVP influences the transport of IVM, as illustrated by their transport characteristics across the CCM. IVM significantly inhibited the efflux transport of LPV and SQV and increased the efflux of EFV. NVP increased the efflux transport of IVM. This raises the possibility of interactions between the drugs involving drug transporters. Drug interactions between ARVs and co-administered drugs may lead to treatment failures and adverse reactions, and an understanding of the mechanism of interaction is pivotal for the optimal choice of highly active antiretroviral therapy (HAART) regimens [76]. Increased efflux of a drug from the cells may cause resistance as the levels become sub-therapeutic [77][78][79], whereas inhibition of its efflux may cause enhanced plasma concentrations and subsequent toxicity [80][81][82]. PZQ did not appear to significantly influence the transport of ARVs, and likewise ARVs did not affect the transport of PZQ.
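As a worked illustration of the Papp and efflux-ratio definitions above, the following is a minimal sketch; the flux and concentration values are hypothetical and are not data from this study.

```python
def papp(dq_dt, area_cm2, c0_donor):
    """Apparent permeability Papp = (dQ/dt) / (A * C0); e.g. in cm/s when
    dQ/dt is in umol/s, A in cm^2 and C0 in umol/cm^3."""
    return dq_dt / (area_cm2 * c0_donor)

# Hypothetical 60-min steady-state fluxes for one compound.
area = 4.67          # insert membrane growth area, cm^2
c0 = 0.05            # initial donor concentration, umol/cm^3
papp_ab = papp(2.0e-6, area, c0)   # AP -> BL (absorptive direction)
papp_ba = papp(6.0e-6, area, c0)   # BL -> AP (secretory direction)

efflux_ratio = papp_ba / papp_ab   # >1 suggests net carrier-mediated efflux
uptake_ratio = papp_ab / papp_ba
print(f"Papp(A->B) = {papp_ab:.2e} cm/s, ER = {efflux_ratio:.1f}")  # ER = 3.0
```

In the experiments described here, a drop in the ER of a compound in the presence of a second drug would indicate inhibition of its efflux, while a rise would indicate increased efflux.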
SQV and other PIs have been demonstrated to be substrates of the efflux transporters P-gp, MRP1 and MRP2 that are expressed by Caco-2 cells [17,19]. IVM has also been characterized as a substrate of P-gp and has been shown to inhibit P-gp and MRP1, 2 and 3 [34]. Altered expression of P-gp has been attributed to the neurotoxicity associated with IVM [83]. In an experiment involving collie dogs, the dogs that had a deletion of the ABCB1 gene displayed neurotoxicity when dosed with IVM, whereas normal dogs did not. The authors of that study concluded that P-gp plays a role in effluxing IVM from the CNS [84]. In a related study on beagle dogs, co-administration of IVM with spinosad, a P-gp inhibitor, was demonstrated to increase IVM neurotoxicity through the inhibition of P-gp at the blood brain barrier [85]. IVM has been reported to interact with other drugs, including doxycycline and albendazole, which improve its antiparasitic efficacy [86]. In the control of onchocerciasis, doxycycline was reported to enhance the ivermectin-induced suppression of microfilaridermia [87]. Levamisole has been shown to increase the plasma bioavailability of IVM, though without necessarily increasing its antiparasitic effects [88]. Previous studies have also reported that ketoconazole substantially increases IVM plasma concentrations in sheep upon co-administration; the authors attributed this to the reversal of P-gp effects [36]. The P-gp modulators itraconazole and valspodar have also been shown to increase the concentration of IVM in plasma and gastrointestinal tissues of rats [37]. There was an eightfold inhibition of the efflux transport of LPV in the presence of IVM, which could possibly have involved P-gp or other transporters. The transport of LPV and other PIs has been shown to be modulated by the efflux transporters P-gp and MRP [12,89]. Whereas studies have demonstrated that these efflux transporters limit the uptake of ARVs, the influence of influx transporters such as organic anion-transporting polypeptides (OATP) on the modulation of ARVs, and indeed of most drugs, has not been fully described. Authors of previous studies concluded that an interplay of an influx transporter (OATP), efflux transporters (P-gp and MRP) and lipophilicity had implications for the cellular uptake and retention of SQV and LPV in some T-cell lines (CEM, CEM VBL and CEM 1000) as well as in peripheral blood mononuclear cells [90]. In that study, cells were pre-treated with P-gp and MRP inhibitors (tariquidar (XR9576) for P-gp, and MK571 with frusemide for MRP, respectively), followed by co-incubation with a human OATP substrate, estrone. LPV (Log Kow = 5.94) is more lipophilic than SQV (Log Kow = 2.5), which may contribute to the difference in response to IVM between the two PIs [91-93]. In a related study, the authors concluded that IVM may influence the absorption of fexofenadine by interfering with the influx and efflux pumps OATP and P-gp [94]. In our study, LPV inhibited the influx transport of IVM, though to a lesser extent (a twofold decrease in the efflux ratio, from 2.9 to 1.4 [Table 2]). It is therefore evident from these results that there is a likelihood of interactions between LPV and IVM, and that these interactions are most likely influenced by drug transporters (influx and efflux). Further investigations should be carried out to determine the specific transporters responsible for the interactions, and the dosage range over which these interactions occur.
NVP increased the efflux transport of IVM, but IVM did not appear to significantly influence the transport of NVP. In a study investigating the influence of NNRTIs on P-gp activity, NVP significantly reduced the uptake of rhodamine 123, a P-gp substrate, into LS180V cells, consistent with increased efflux resulting from induction of the transporter [24]. In a related study, the authors concluded that NNRTIs induced P-gp in LS180 cells [95,96]. The observed interactions between NVP and IVM may therefore be attributed to the activities of influx and efflux drug transporters. With regard to the interactions between IVM and EFV, there was a marginally significant increase in the efflux ratio of EFV in the presence of IVM. EFV has been characterized as a substrate of P-gp and has also been reported to decrease plasma concentrations of co-administered drugs that are metabolized by CYP450 enzymes without modifying the intestinal absorption of co-administered P-gp substrates [22,97]. PZQ did not significantly influence the transport of ARVs and, likewise, ARVs did not significantly affect the transport of PZQ. The presence of PZQ did not alter the transport of SQV, whereas SQV, EFV and NVP did not affect the transport of PZQ. This is consistent with our earlier studies, in which we established that PZQ is neither a substrate nor an inhibitor of P-gp in accumulation experiments involving CEM parental and CEM VBL cells [45]. PZQ has not been conclusively characterized with respect to drug transporters and metabolic enzymes. In a study of the transport of PZQ and other antiparasitic drugs across Caco-2 cell monolayers, PZQ appeared to be an inhibitor of P-gp without being a substrate, based on inhibition of P-gp-mediated [³H]-taxol transport in Caco-2 cells [40]. It is, however, noteworthy that Caco-2 cell lines express several influx and efflux drug transporters as well as metabolic enzymes, and an interplay of several factors is therefore possible; careful interpretation of the results is necessary before arriving at any conclusions [44,98,99]. Ketoconazole, a CYP450 inhibitor, has been reported to double the plasma concentration of PZQ in humans, while rifampicin, an inducer, has been reported to dramatically reduce its concentration, and the authors recommended dose adjustment upon co-administration [41,43]. An increase in schistosome P-gp levels has also been postulated to confer resistance to PZQ [100].
5,885.6
2017-05-04T00:00:00.000
[ "Biology" ]
Equity Must Accompany Economic Growth for Good Health K. Srinath Reddy discusses a new research study by S. V. Subramanian and colleagues that found no strong evidence of recent economic growth in India being associated with a reduction in child undernutrition. India's rapid economic growth, after the initiation of market-oriented reforms two decades ago, has been the subject of considerable international attention. However, this accelerated economic growth has not been matched by marked improvements in many of the key indicators of health. Not only are the rates of infant and maternal mortality lagging behind the Millennium Development Goals, but the level of undernutrition among children is appallingly high. Even if there is a lag time for improvements in the economic status of countries to be reflected in better health indicators at the population level, child nutrition should be one of the earliest and most sensitive indicators of economic growth favourably impacting health. Yet this has not happened in India, where average calorie intake has declined over the last 25 years [1]. In PLoS Medicine this month, Subramanyam et al. [2] explored this paradox through multi-level modeling of cross-sectional data from three national surveys conducted during this period of economic growth at 6- to 7-year intervals. With per capita income at the state level as the exposure variable, they examined the association with different measures of undernutrition across several states in India. They did not observe any association between state economic growth and the risk of children being underweight, stunted, or wasted. Adjustment for several demographic and socioeconomic covariates did not alter this null effect. The relationship of economic growth with health has previously been studied with other indicators. In general, life expectancy increases and infant mortality decreases with rising national incomes. This effect is most marked in the early stages of economic growth, as evident from the Preston curves that are periodically developed to position countries along the line of relationship between per capita gross domestic product (GDP) and life expectancy [3]. It has also been well demonstrated that, at equivalent levels of per capita GDP, countries with higher levels of economic and social equality within their societies fare better on most health indicators than countries where the equity gaps are wider. As Wilkinson and Pickett compellingly argue in their book The Spirit Level (2010), poor health is not merely the result of poverty but also of inequality, which manifests in many dimensions [4]. In unequal societies it is not merely the poor who suffer from sub-optimal health; other sections of society also fare less well than their social class peers in more equal societies. The pattern of India's economic growth over the last two decades has accentuated income inequality, with a large segment of the society not benefiting from it. Even by official estimates, 37.2% of the population currently lives below the poverty level. Those just above that level, and even the middle class, are also highly vulnerable to food price inflation. As such, aggregate national or state indicators of economic growth do not reflect the real purchasing power of many at the bottom and middle of the population pyramid. There is also a poverty of opportunities for education, employment, and income generation.
Caste, gender, and religion too are among the social determinants that impact opportunities for development and access to health services. Only recently has there been an attempt to guarantee employment for the rural poor at levels that support subsistence. The ability of families to provide appropriate child nutrition is thus dependent on several enabling factors. The demographic and socioeconomic variables chosen by the authors, though pertinent, do not cover issues like the quality of governance, which varies widely across states and also with successive governments and administrations within states. The surveys also do not account for undernutrition-related deaths in childhood. There is an unanswered question: do states where infant mortality is declining faster have more surviving low birth weight babies who carry their disadvantage into early childhood, thereby masking the overall benefits of economic growth on child health? The Integrated Child Development Services (ICDS) program of the government has specifically aimed to improve child nutrition, but did not cover the very vulnerable age group of 0-3 years. Universal breastfeeding up to at least 6 months of age and timely introduction of complementary feeding, which would have helped to reduce undernutrition in the very vital early years of child growth, were missing elements of this program [5]. The poor nutritional status of mothers (58% of pregnant women were anemic and 36% of ever-married women in the childbearing age group were underweight in the 2005-2006 survey) as well as the low levels of child immunization, with the attendant risk of childhood infections, also contribute to undernutrition. Unsafe drinking water and poor sanitation, which predispose to diarrheal diseases and other infections, also contribute to child malnutrition. The message that comes out clearly from Subramanyam et al.'s article is that developing countries like India should not assume that economic growth will automatically translate into improved child nutrition and health. Measures for enhancing equity through inclusive growth, action on social determinants of health, and specific programs for improved early life nutrition will be needed if undernourished children are not to become the face of an economically advancing India. Author Contributions ICMJE criteria for authorship read and met: KSR. Agree with the manuscript's results and conclusions: KSR. Wrote the first draft of the paper: KSR.
1,355.2
2011-03-01T00:00:00.000
[ "Economics", "Medicine" ]
Linear magnetoconductivity in magnetic metals We theoretically describe a mechanism of low-field linear magnetoconductivity in helical magnetic metals. Two ingredients for the mechanism in three-dimensional metals are identified to be the spin-orbit coupling and a momentum-dependent ferromagnetic exchange interaction. We propose and study a number of minimal theoretical models which have linear magnetoconductivity, and discuss their implications for recent experiments. Onsager's relations [1,2] dictate that the low-field electric conductivity of a system in an applied magnetic field must be even under the reversal of the magnetic field when the time-reversal symmetry is not violated in the system. However, when the time-reversal symmetry is broken in the system by, for example, spontaneous ferromagnetic order, the Onsager relations allow for a low-field linear magnetoconductivity. There are a number of recent experiments [3][4][5] which observe linear magnetoconductivity in ferromagnetic metals. Indeed, based on the Onsager-relation argument, one would expect that when a spontaneous magnetization M is present in the system, there might be terms in the electric current which depend on the magnetization and result in linear magnetoconductivity. Three such possible terms with a pronounced angle dependence between the electric field E, the magnetic field B and the magnetization are proportional to the (E·B)M, (E·M)B, and (M·B)E combinations, namely δj = α₁(E·B)M + α₂(E·M)B + α₃(M·B)E, (1) where α₁, α₂, α₃ are material-dependent coefficients. Thus, by varying the direction of either the magnetic field, the magnetization, or the current, one can identify the presence of each term in the system, see Fig. 1. However, besides the knowledge of the Onsager relation, the microscopic mechanism behind these three terms is still not fully understood. The aim of the present paper is to introduce a number of theoretical models which provide a possible mechanism of linear magnetoconductivity in magnetic metals. We assume that the spontaneous magnetization in the metals is due to localized fermions, while conduction fermions are responsible for the transport in these metals. The localized fermions interact with the conducting fermions via a ferromagnetic exchange interaction, which is proportional to the magnetization. In order to couple the magnetization with the momentum of the conducting fermions we propose that the metals are helical, meaning that there is a spin-orbit coupling [6][7][8][9] which leads to momentum-spin locking of the conducting fermions. In the case of a pure three-dimensional spin-orbit coupling, a ferromagnetic exchange interaction acting on the spin of the conducting fermions, just like a regular Zeeman magnetic field, cannot affect their velocity: such an exchange interaction can be gauged away by simply shifting the momentum of the fermions.
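To illustrate the angle dependence of Eq. (1) and the three measurement geometries sketched in Fig. 1, a small numerical example (with arbitrary unit coefficients α₁ = α₂ = α₃ = 1; this is only a sketch of the geometry, not a calculation from any of the models below):

import numpy as np

def delta_j(E, B, M, a1=1.0, a2=1.0, a3=1.0):
    # delta_j = a1 (E.B) M + a2 (E.M) B + a3 (M.B) E, i.e. Eq. (1)
    E, B, M = map(np.asarray, (E, B, M))
    return a1 * np.dot(E, B) * M + a2 * np.dot(E, M) * B + a3 * np.dot(M, B) * E

ex, ey, ez = np.eye(3)

# Fig. 1, left geometry: B in the z-y plane, M along x, E probed along y
# -> only the (E.B)M term contributes, and it points along M (the x direction).
print(delta_j(E=ey, B=(ey + ez) / np.sqrt(2), M=ex))

# Fig. 1, center geometry: B along z, M along x, E probed in the x-y plane
# -> only the (E.M)B term contributes, pointing along B (the z direction).
print(delta_j(E=ex, B=ez, M=ex))

# Fig. 1, right geometry: B in the z-x plane, M along x, E probed along y
# -> only the (M.B)E term contributes, pointing along E (the y direction).
print(delta_j(E=ey, B=(ex + ez) / np.sqrt(2), M=ex))

In each configuration only one of the three combinations is non-zero, which is how the three coefficients can be separated experimentally.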
We show, however, that a momentum-dependent ferromagnetic exchange interaction [10,11], unlike the momentum-independent one just discussed, does affect the velocity of the conduction fermions and leads to linear magnetoconductivity with all terms present in Eq. (1). The effect of a momentum-dependent ferromagnetic exchange on the magnetoconductivity has already been theoretically recognized in [12][13][14]. In the case of a two-dimensional spin-orbit coupling, the Zeeman-like ferromagnetic exchange interaction can affect the velocity of the fermions, but only when it has a component parallel to the spin-orbit coupling vector. We discuss such a scenario in our second example of the theoretical models, and we show that the current Eq. (1) then depends only on one particular component of the magnetization. The mechanism of linear magnetoconductivity proposed in this paper is due to the effects of the Berry curvature and orbital magnetization [15,16]. The Lorentz force in all of the presented cases does not result in linear magnetoconductivity. Three-dimensional spin-orbit coupling. As a model of a three-dimensional metal with spin-orbit coupling (helical metal) we pick the Weyl semimetal [17] with two chiralities, each described by a linear spectrum. We also include three possible momentum-dependent terms in the Hamiltonian, which might be present due to the finite magnetization M in the system. Our model Hamiltonian Ĥ^(s) for the s = ± chiralities is given in Eq. (2), where v is the velocity of the conducting fermions, µ is the chemical potential, σ are the Pauli matrices describing the spin of the fermions, and k = (k_x, k_y, k_z) is the three-dimensional momentum. The term with a_A is a tilt of the Dirac cones; the tilt breaks the time-reversal symmetry. The other two terms in the second line of Eq. (2), with a_B and a_C, are momentum-dependent ferromagnetic exchange interactions, which break the time-reversal symmetry as well. They were considered in [10,11] in studies of the fermion g-factor anisotropy in quantum wells. (Fig. 1 caption. Left: the current is passed in the x-direction, the magnetic field is varied in the z-y plane, the magnetization is in the x-direction, while the electric field is measured in the y-direction; hence, only the (E·B)M component of the current is active. Center: the current is passed in the z-direction, the magnetic field is in the z-direction, the magnetization is in the x-direction, while the electric field is measured in the x-y plane; in this case only the (E·M)B component of the current is active. Right: the current is passed in the y-direction, the magnetic field is varied in the z-x plane, the magnetization is in the x-direction, while the electric field is measured in the y-direction; thus, only the (M·B)E component of the current is active.) The second term in the first line of Eq. (2) is the regular ferromagnetic exchange interaction, analogous to the usual Zeeman magnetic field. This term simply splits the two chiralities s = ± in momentum and can be shifted away from the Hamiltonian of a given chirality. Since we are interested in effects linear in M, the shift will not affect the terms in the second line of Eq. (2). In all of the models the symmetry between the chiralities is broken by the terms in the second line of Eq. (2). Our three models, which we will call A, B, and C in accordance with the a_A, a_B, and a_C terms in Eq. (2), respectively, are three-dimensional metals with spin-orbit coupling. This implies the presence of the Berry curvature and orbital magnetization in the description of the fermions [15,16].
To study the electric current, we employ the kinetic-equation method, with the equations of motion updated in the presence of the Berry curvature and orbital magnetization. In the collision integral I_coll[n_k] we consider two scattering processes described by different life-times: scattering within a chirality, denoted by τ, and scattering between the chiralities, denoted by τ_V; here n̄_k denotes the distribution function averaged over the angles. To analyze the electric current we follow the approximations used in [13]. In [13] the electric current for a system with a_B = a_C = 0 was studied, and it was shown that there is indeed a linear magnetoconductivity due to an interplay of the chiral anomaly and the tilt of the Dirac cones. Based on the findings of [13] (also see Supplemental Material [19]), we here distinguish three contributions to the current. The first one is due to the chiral anomaly [18]: a difference of charges at the two chiralities builds up in the presence of electric and magnetic fields, namely N₊ − N₋ ∝ τ_V(E·B). This contribution results in a ∝ τ_V(E·B)M term in the current. The second contribution to the current is similar in nature to the first one, with the only difference being that a build-up of a non-zero chiral charge in the two valleys happens for a given absolute value of the momentum when only the electric field is present, namely n̄_{k,+} − n̄_{k,−} ∝ (E·M). We show that there is no chiral anomaly due to this contribution in all three models. This contribution results in a ∝ τ_V(E·M)B term in the current. Note that the first two contributions are defined by inter-chirality relaxation processes and are proportional to τ_V. The third contribution to the current is due to the Berry curvature and orbital magnetization corrections to the fermion velocity. This contribution is primarily defined by relaxation processes within a chirality and is thus governed by the time τ. All three terms present in Eq. (1) can be derived for the electric current from the third contribution. However, we are assuming that τ_V ≫ τ, which allows us to select from them only the unique term of the ∝ τ(M·B)E type. This assumption is legitimate given that the splitting between the chiralities defined by M is large. Details of the derivations are given in [19]. Here we list the calculated expressions for the linear magnetotransport for the three models. In all three models, all terms listed in Eq. (1) are present; the signs and numerical coefficients are model dependent. We also find in our calculations a ∝ (E·M)(M·B)M term in the current; however, it does not have a pronounced angle dependence. Quasi-two-dimensional systems. Here we consider a quasi-two-dimensional system with a two-dimensional Rashba spin-orbit coupling in the x-y plane, i.e. with the Rashba spin-orbit coupling vector in the z-direction, and the magnetization M pointing in the z-direction. This is a model of a hypothetical BiTeI-type material with spontaneous magnetization pointing in the z-direction. The Hamiltonian of the system is Ĥ = k²/(2m) + λ(k_y σ_x − k_x σ_y) + M_z σ_z − µ, (7) where k = (k_x, k_y, k_z) is the three-dimensional momentum. The spectrum consists of two branches, ε_{k,±} = k²/(2m) ± √(M_z² + (λk_∥)²), and we assume that the chemical potential µ is such that both branches are occupied. The Berry curvature can only point in the z-direction. Moreover, integrating the kinetic equation over the angles, one can check that there is no chiral anomaly in the system, meaning that N₊ − N₋ = 0 in applied electric and magnetic fields. We approximate τ = τ_V as the two chiralities are close to each other in momentum and energy space.
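A short numerical sketch of the two-branch spectrum ε_{k,±} = k²/(2m) ± √(M_z² + (λk_∥)²) quoted above, used only to check that a chosen chemical potential crosses both branches; all parameter values are arbitrary illustrations:

import numpy as np

def rashba_branches(kx, ky, kz, m=1.0, lam=0.5, Mz=0.2):
    # eps_{k,+-} = k^2/(2m) +- sqrt(Mz^2 + (lam*k_par)^2), Rashba vector along z
    k2 = kx**2 + ky**2 + kz**2
    k_par = np.sqrt(kx**2 + ky**2)                       # in-plane momentum
    split = np.sqrt(Mz**2 + (lam * k_par)**2)
    return k2 / (2.0 * m) - split, k2 / (2.0 * m) + split

kx = np.linspace(-2.0, 2.0, 401)
eps_minus, eps_plus = rashba_branches(kx, 0.0, 0.0)

mu = 0.8
both_occupied = (eps_minus.min() < mu) and (eps_plus.min() < mu)
print("both branches occupied at mu =", mu, ":", both_occupied)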
We calculate the linear magnetoconductivity following the same steps outlined above, and we obtain Eq. (8), where I₁, I₂, and I₃ are defined in the SM. Again, all terms listed in Eq. (1) are present in Eq. (8), but the current depends only on M_z. In addition to Eq. (8) we also find a ∝ M_z B_z E_z e_z term in the current (see [19] for details). We cannot generalize the obtained expression Eq. (8) to an arbitrary direction of the magnetization, because M_x and M_y can be shifted away from the Hamiltonian Eq. (7). One might wonder what non-linear time-reversal-symmetry-breaking corrections to the Hamiltonian due to M = (M_x, M_y, M_z), similar to those with a_B and a_C in Eq. (2), would do to the linear magnetoconductivity. According to [21], all such possible corrections which would enter the Hamiltonian Eq. (7) with the σ_x and σ_y Pauli matrices will not affect the Berry curvature and orbital magnetization to linear order in the magnetization M. The effect of those entering with σ_z in the case when M_z = 0 while M_x ≠ 0 and M_y ≠ 0 can be traced from the following argument. According to [21], one can think of a system which will have a non-trivial Berry curvature and orbital magnetization when M_z = 0, M_x ≠ 0 and M_y ≠ 0 in Eq. (7). In such a case one will then need to add to Eq. (7) a spin-orbit coupling term obeying, for example, a C_3v symmetry. Such a spin-orbit coupling, which can be thought of as having a vector in the plane, is parametrized by coefficients v_D and α. Then, even in this case, the linear magnetoconductivity will be of the Eq. (8) form, with the only difference of M_z being replaced by M_y with an appropriate coefficient. Finally, note that the term ∝ (E·B) in Eq. (8) is reminiscent of the chiral anomaly contribution; however, it is of a different origin. In other words, there is no difference in the chemical potentials of the two chiralities when electric and magnetic fields are applied and, as already mentioned, N₊ − N₋ = 0. Discussion. Typically, the low-field linear magnetoconductivity has a small magnitude, and at some point it gets overshadowed by the quadratic magnetoconductivity as the magnetic field is increased. Despite that, the linear magnetoconductivity has a rich anisotropic structure, which can be tested in experiment (see Fig. 1). We think that the low-energy description of the conduction fermions in ferromagnets which experimentally show linear magnetoconductivity falls into the classes of the theoretical models presented above, or, possibly, into some other models with the same ingredients, namely the spin-orbit coupling and a momentum-dependent ferromagnetic exchange interaction. As a result, if any one of the α₁, α₂, or α₃ components of the current Eq. (1) is observed in an experiment, the other two must also be present. Based on our findings, below we comment on two recent experiments. In the experiment of Ref. [4], linear magnetoconductivity was observed in the ferromagnetic metal SmCo₅ and in ferromagnetic domains of the Cd₂Os₂O₇ antiferromagnet. Terms with (E·B)M and (M·B)E in Eq. (1) were observed in the experiment. Based on our findings, we think that the (E·M)B component of the current was overlooked [20]. We hope further experiments will identify this missing term, thus confirming our theoretical models and the mechanism of linear magnetoconductivity discussed above. Moreover, we think that the (E·B)M term in the current observed in Ref. [4] might be due to the chiral anomaly.
However, further analysis should be made to exclude possible quasi-two-dimensional behavior, where, as is shown for model D, Eq. (8), such a term is present but is not due to the chiral anomaly. In another experiment, Ref. [5], linear magnetoconductivity was observed in the magnetic Weyl semimetal Co₃Sn₂S₂ [22,23]. There it was claimed that the effect might be due to the tilt of the Weyl cones, namely due to the a_A term in Eq. (2), a mechanism first proposed in [13]. Our findings introduced above suggest that the mechanism due to the tilt might not be the only one. Below we will make one more comment on the experiment of Ref. [5]. Below are four comments on the model Hamiltonians Eqs. (2) and (7). First, any three-dimensional linear-in-momentum spin-orbit coupling will be described by the physics of Weyl semimetals; hence the choice of the model Hamiltonian Eq. (2), a Weyl semimetal with two chiralities. However, one can re-engineer the Hamiltonian, for example by adding a regular ∝ k²/(2m) term. Then, in models B and C such a term allows us to simplify the spectrum by reducing it to only one valley. Note that the problem of the chiral anomaly would not be faced in this case, and the overall charge will be conserved, because there will still be two Fermi surfaces with opposite chiralities. Model A, on the other hand, cannot be reduced to only one valley. Second, we saw that the two main ingredients for the linear magnetoconductivity are the linear-in-momentum spin-orbit coupling and a momentum-dependent ferromagnetic exchange interaction. However, this is not a unique combination, and one can achieve the same effect of a momentum-dependent exchange interaction by introducing, in addition to the linear spin-orbit coupling, the next term in the expansion, cubic in momentum, if symmetry allows. Then the ferromagnetic exchange interaction can be kept to zeroth order in momentum (just like the regular Zeeman term). The two schemes are similar to each other and should result in similar linear magnetoconductivity. Third, in realistic bulk systems the spin-orbit coupling might not be pure, meaning that some Pauli matrix in the Hamiltonian is not, or is only partly, describing the spin of the electrons; instead, it might describe a pseudospin or some mixture of spin and, for example, a unit-cell degree of freedom. In this case the M entering the components of the current will be anisotropic and, in the most severe example, the current might depend only on one projection of the magnetization M. For example, in model D, Eq. (7), we saw that the linear magnetoconductivity depends only on the M_z projection; however, all three terms in the linear magnetoconductivity are present in the model. When model D is reduced to two dimensions, only the ∝ M_z B_z E term out of the three will survive. Fourth, depending on the symmetries of the crystal structure of the experimental magnetic system, the magnetization M entering the current Eq. (1) and the second line of the Hamiltonian Eq. (2) might be replaced with a rotated direction, for example M → M × e_{x,y,z}, or even with other configurations (for example, see Eq. (3) in Ref. [21]). Note that we have already discussed the possibility of such a replacement after Eq. (8). Quite likely, this situation was observed in the experiment of Ref. [5]. Namely, Ref. [5] observed two linear magnetoconductivity terms, j_x ∝ E_x M_z B_y and j_y ∝ E_x M_z B_x.
In terms of Eq. (1), the two can be understood as terms in which the magnetization is replaced by its rotated direction, M → M_z e_z × e_x = M_z e_y. Therefore, we predict that a δj = α₂(E·[M_z e_z × e_x])B = α₂(E_y M_z)B term should also be observed in the experiment of Ref. [5]. Essentially, the theory presented in this paper is based on the effect of the Berry curvature on the fermion properties. We note that there are other known scattering processes which contribute to the anomalous velocity of fermions. These are the skew-scattering and side-jump processes which are known, for example, to contribute to the anomalous Hall effect [24][25][26]. As is discussed in [27], these processes might contribute to the linear magnetoconductivity as well. Whether they will result in a current of the Eq. (1) type is a question for future research. Since the theory of the linear magnetoconductivity presented in this paper stems from the Berry curvature of the fermions, as does the anomalous Hall effect [28], we think that both effects, the linear magnetoconductivity and the anomalous Hall effect, should be experimentally looked for in the same material. For example, model A is known [29] to show an anomalous Hall effect as a function of the tilt; here and in [13] we concluded that it shows linear magnetoconductivity due to the same tilt as well. Model D [28] has the same feature, and it can be checked that the remaining models B and C have the same property. In passing, more magnetic Weyl and topological semimetals have been recently experimentally identified [30][31][32], and based on our findings here we anticipate that linear magnetoconductivity, just like in the experiments [3][4][5], should be observed in these systems. Moreover, we believe that linear magnetoconductivity should be added to the plethora of effects and properties, such as the Fermi arcs [33], the chiral-anomaly-driven positive longitudinal magnetoconductivity [18,34] and the symmetric-in-magnetic-field so-called planar Hall effect (both are components of the δj ∝ (E·B)B current [13]; see also comment [30] in Ref. [21]), the anomalous Hall effect (due to the chirality splitting [35] and due to the tilt a_A alone [29]), chiral collective modes [36,37], and others, which make Weyl semimetals unique physical systems [38]. Conclusions. In this letter we theoretically discussed a mechanism of linear magnetoconductivity in magnetic metals. We identified two necessary ingredients for the minimal model of the Hamiltonian of the conducting fermions: a three-dimensional spin-orbit coupling and a momentum-dependent coupling to the magnetization. If the spin-orbit coupling is two-dimensional, the coupling to the magnetization is of the regular exchange-interaction type. We proposed and studied four models, Eqs. (2) and (7), realizing these two scenarios. In all of the models the linear magnetoconductivity contains the three unique terms outlined in Eq. (1). SUPPLEMENTAL MATERIAL TO "LINEAR MAGNETOCONDUCTIVITY IN MAGNETIC METALS" BERRY CURVATURE AND ORBITAL MAGNETIZATION Here we outline the derivation of the Berry curvature and orbital magnetization (Refs. [15,16] in the Main Text) for a general two-band fermion system. We assume the system to be three-dimensional; thus, there must necessarily be two chiralities, denoted by s = ±, in the system. The spin part of the Hamiltonian for the s = ± chirality is H^(s)_spin = s g_k·σ = s(g_{x;k}σ_x + g_{y;k}σ_y + g_{z;k}σ_z). The spectrum is ε_{±;k} = ±|g_k|, and the corresponding wave functions for s = ± are parametrized by the angles χ_k and θ_k defined through g_{x;k} + i g_{y;k} = g_{∥;k} e^{iχ_k}, where g_{∥;k} = √(g²_{x;k} + g²_{y;k}) and sin(θ_k) = g_{∥;k}/|g_k|. We note that it only appears that the wave functions of the two chiralities are not orthogonal to each other, and one may therefore conclude that they do not form an orthogonal set.
However, there is another pseudo-spin-type degree of freedom, corresponding to the chirality s = ±, which makes the wave functions orthogonal to each other. From these wave functions one obtains a number of useful identities, the Berry curvature, and the orbital magnetization, and one can draw a relation between the Berry curvature and the orbital magnetization. The spectrum of the conduction band + (assuming chemical potential µ > 0) is updated by the Berry curvature and orbital magnetization (for a review see Ref. [16] in the Main Text), giving ε^(s)_{+;k}. In most cases the Berry curvature (for example, for the conduction band) can be presented in a compact form, and the velocity of the conduction band, v^(s)_k, then acquires Berry-curvature and orbital-magnetization corrections. The results of this section will be used below when calculating the electric current. Kinetic equation Here and throughout the Supplemental Material we follow the approximations used in Ref. [13] in the Main Text. We assume that µ > 0, such that the conduction band is described by the ε^(s)_{+;k} spectrum. Below we will omit the + index. We write the kinetic equation for the conduction band, set ∂n^(s)_k/∂r = 0, and assume a steady state. The collision integral consists of two scattering processes, where n̄^(s) denotes the distribution function averaged over the angles, ⟨..⟩ = ∫ (sinθ dθ dφ/4π)(..). The first term in the collision integral is the scattering of fermions within the s = ± chirality (valley/Weyl cone), while the second term, i.e. the one with τ_V, is the inter-chirality scattering of fermions. The second term is important in the stabilization of the chiral chemical potential, i.e. the disbalance of the chemical potentials of the s = ± chiralities. The collision integral can be rewritten in a more suggestive form in terms of the total inverse fermion life-time 1/τ* = 1/τ + 1/τ_V. The electric current is obtained by integrating the velocity weighted with the distribution function over momentum. To obtain the current we approximate the kinetic equation and find n^(s)_k. To the lowest order in the electric and magnetic fields, the left- and right-hand sides of the kinetic equation can be expanded accordingly; in the steady state one then finds the correction to the distribution function, where the ε = vk notation was used. The second mechanism is the one in which only the electric field is present: then n̄_{k,+} − n̄_{k,−} ≠ 0 for a given |k|, while N₊ = N₋. In the next subsection we will explicitly show this for the three theoretical models studied in the Main Text. However, although there is no chiral anomaly, this mechanism contributes to the electric current with a unique contribution ∝ (E·M)B. Let us show how the two discussed mechanisms of the chirality disbalance result in the contribution to the current Eq. (21). To calculate the current we obtain the expression for the distribution function from the kinetic equation. It is enough to take n^(s)_k = n̄^(s) (also assuming that τ_V ≫ τ) from Eq. (30). In other words, the intra-chirality scattering processes result in an angle-averaged steady-state distribution of fermions within each chirality. The corresponding contribution to the current then follows (see Ref. [13] in the Main Text for details). For example, schematically, if the velocity and the Berry curvature contain terms symmetric and anti-symmetric in s = ±, then, since n̄^(±) depends only on the absolute value of the momentum |k|, the angular integration eliminates the terms odd in momentum, while the terms containing Ω_k and Ω^(1)_k can be even in momentum and survive; this results in the chiral-anomaly contribution to the current. We note that if the first two terms had been taken from Eq. (30) for the distribution function, namely n^(s)_k = n̄^(s) + τ*Λ^(s)_k, then in the expression for the current Eq. (36) we would have τ_V(τ_V − τ)/(τ_V + τ) instead of just τ_V.
However, as we have already mentioned, in the limit τ_V ≫ τ we can approximate τ_V(τ_V − τ)/(τ_V + τ) ≈ τ_V. Let us apply this formula to the three three-dimensional theoretical models discussed in the main text. We recall that the Hamiltonian of the models is given by Eq. (2) of the main text, with the notations defined there. We note that the linear magnetoconductivity for model A was first studied in Ref. [13] in the Main Text, and for model B in Ref. [12] in the Main Text. We assume that v > a_{A,B,C}, which allows us to expand the spectrum, velocity, and Berry curvature in the a_{A,B,C} parameters. Model A For model A, described by a_A ≠ 0 and a_B = a_C = 0, we calculate the distribution disbalance, where Θ(x) is the Heaviside function and the ε = vk notation is used. The velocity v^(s)_k is calculated, and the Berry curvature is Ω^(s)_k = −s k/(2k³) ≡ sΩ_k. In the notations introduced above we then obtain the chiral anomaly current. Model B For model B, described by a_B ≠ 0 and a_A = a_C = 0, we calculate the corresponding quantities, where again ε = vk. For example, for the choice M = M e_z we write the chiral anomaly current, and the result can be generalized to any direction of the magnetization. Model C For model C we proceed in the same way. For the sake of the calculation we assume M = M_z e_z, obtain v_z = v k_z/k + s a_C M_z k(1 + k_z²/k²), and calculate the chiral anomaly current, generalizing the result to any direction of the magnetization M. Absence of chiral anomaly when B = 0, E ≠ 0, and M ≠ 0 One may wonder if there is a chiral anomaly, i.e. a fermion charge disbalance between the two chiralities, when B = 0, E ≠ 0, and M ≠ 0, namely N₊ − N₋ ∝ τ_V(E·M). Here we show that, although there might be a non-zero fermion distribution disbalance between the two chiralities s = ± when just an electric field is applied, namely Δ^(+)_k n_{k,+} − Δ^(−)_k n_{k,−} ∝ (E·M), the chiral anomaly completely vanishes for each theoretical model we are considering. To calculate the chiral anomaly we need to integrate Λ⁺_k over the momentum. Models B and C The integral for model B is written with ε = vk; in order to calculate it, one needs to carefully integrate the second derivative by parts. Therefore, indeed, as claimed, the integral vanishes. By examining the expression for Λ⁺_k for model C, we conclude that ∫ k²dk/(2π²) Λ⁺_k = 0 holds also for model C. Berry curvature and orbital magnetization contribution to the velocity Here we will use the notation for the fermion velocity introduced in the "Berry curvature and orbital magnetization" section, namely v^(s)_k. We have already considered the effect on the electric current of the first two terms of that expression: they resulted in the chiral anomaly and in the disbalance of the fermion densities of opposite chiralities. Now let us understand what the third term does. From the third term, besides the regular Drude conductivity, we obtain contributions to the current which are defined by the Berry curvature and orbital magnetization. For models A, B, and C we extract from this expression only the terms which result in the (M·B)E contribution to the current; the other terms result in (E·M)B and (E·B)M contributions which add to the ones obtained from the chiral anomaly contribution, and we ignore them assuming that τ_V ≫ τ. Here the Berry curvature was presented as a zeroth-order part plus a correction Ω_{1;k} proportional to the first power of the momentum-dependent ferromagnetic exchange interaction. We have also used the ε = vk notation, and n(ε) = (e^{(ε−µ)/T} + 1)⁻¹.
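As a simple numerical cross-check of the Weyl Berry curvature quoted above, Ω^(s)_k = −s k/(2k³), its flux through any sphere enclosing the node should equal −2πs; a minimal sketch (grid sizes and radius are arbitrary):

import numpy as np

def weyl_berry_curvature(k, s=+1):
    # Omega_k^(s) = -s * k / (2 |k|^3), conduction band of chirality s
    k = np.asarray(k, dtype=float)
    return -s * k / (2.0 * np.linalg.norm(k) ** 3)

def berry_flux_through_sphere(radius=1.0, s=+1, n_theta=100, n_phi=200):
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False) + 0.5 * np.pi / n_theta
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    dtheta, dphi = np.pi / n_theta, 2.0 * np.pi / n_phi
    flux = 0.0
    for t in thetas:
        for p in phis:
            n_hat = np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])
            # surface element dS = r^2 sin(theta) dtheta dphi with outward normal n_hat
            flux += np.dot(weyl_berry_curvature(radius * n_hat, s), n_hat) \
                    * radius**2 * np.sin(t) * dtheta * dphi
    return flux

print(berry_flux_through_sphere(s=+1) / (2.0 * np.pi))   # ~ -1.0, i.e. a total flux of -2*pi for s = +1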
With the details of the models outlined in the "Details of the theoretical models" section (below), we present the results for the contributions to the electric current which are linear in the magnetic field and magnetization, starting with model A. Note the existence of a unique ∝ (1/|M|²)(E·M)(B·M)M term in the current for model B; this term also has an angle dependence. All terms from Eq. (57) contribute to model D; we derive them in the "Details of the theoretical models" section below.
6,909
2021-03-15T00:00:00.000
[ "Materials Science" ]
Deep learning based assessment of hemodynamics in the coarctation of the aorta: comparison of bidirectional recurrent and convolutional neural networks The utilization of numerical methods, such as computational fluid dynamics (CFD), has been widely established for modeling patient-specific hemodynamics based on medical imaging data. Hemodynamics assessment plays a crucial role in treatment decisions for the coarctation of the aorta (CoA), a congenital heart disease, with the pressure drop (PD) being a crucial biomarker for CoA treatment decisions. However, implementing CFD methods in the clinical environment remains challenging due to their computational cost and the requirement for expert knowledge. This study proposes a deep learning approach to mitigate the computational need and produce fast results. Building upon a previous proof-of-concept study, we compared the effects of two different artificial neural network (ANN) architectures trained on data with different dimensionalities, both capable of predicting hemodynamic parameters in CoA patients: a one-dimensional bidirectional recurrent neural network (1D BRNN) and a three-dimensional convolutional neural network (3D CNN). The performance was evaluated by median point-wise root mean square error (RMSE) for pressures along the centerline in 18 test cases, which were not included in a training cohort. We found that the 3D CNN (median RMSE of 3.23 mmHg) outperforms the 1D BRNN (median RMSE of 4.25 mmHg). In contrast, the 1D BRNN is more precise in PD prediction, with a lower standard deviation of the error (±7.03 mmHg) compared to the 3D CNN (±8.91 mmHg). The differences between both ANNs are not statistically significant, suggesting that compressing the 3D aorta hemodynamics into a 1D centerline representation does not result in the loss of valuable information when training ANN models. Additionally, we evaluated the utility of the synthetic geometries of the aortas with CoA generated by using a statistical shape model (SSM), as well as the impact of aortic arch geometry (gothic arch shape) on the model’s training. The results show that incorporating a synthetic cohort obtained through the SSM of the clinical cohort does not significantly increase the model’s accuracy, indicating that the synthetic cohort generation might be oversimplified. Furthermore, our study reveals that selecting training cases based on aortic arch shape (gothic versus non-gothic) does not improve ANN performance for test cases sharing the same shape. 
Introduction In recent years, technological developments have enabled the integration of artificial intelligence (AI), including machine learning (ML), into clinical practice (Asselbergs and Fraser, 2021). This promising development holds the potential to transform various aspects of medicine, ultimately advancing the concept of precision medicine and thereby improving healthcare across many domains, from diagnosis to treatment decision and planning. One of the most advanced applications of AI in clinical practice is the field of diagnostic imaging (Yasaka et al., 2018; van Leeuwen et al., 2021). Additionally, AI has demonstrated considerable success in automating the diagnosis of electrocardiographic data (Siontis et al., 2021).
Furthermore, technological advancements during the last 2 decades have facilitated the use of image-based computational fluid dynamics (CFD) analysis within the field of cardiovascular medicine (Morris et al., 2016). This approach, known as image-based CFD modeling of patient-specific hemodynamics, allows the computation of flow parameters with notably higher spatial and temporal resolutions than achievable by any existing in vivo imaging technique (Canè et al., 2022). The outcomes of these simulations could finally be used for clinical support, particularly in cardiovascular surgical planning and diagnostics (Morris et al., 2016). However, despite having many benefits, image-based CFD remains sparsely used in routine clinical practice, with a few exceptions such as the calculation of the Fractional Flow Reserve by HeartFlow (Taylor et al., 2013). Several factors contribute to the limited clinical integration. Notably, CFD demands long computation times, substantial computational resources, and experienced engineers to set up simulations correctly. Unfortunately, these limitations make CFD less usable within current clinical workflows (Huberts et al., 2018). Recently, ML has been proposed as a valuable tool to enhance CFD methods. The primary objective of employing ML in CFD is to optimize various aspects, including the acceleration of simulations, as seen in direct numerical simulations (Bar-Sinai et al., 2019), the improvement of turbulence models, and the development of reduced-order models (Duraisamy et al., 2019; Vinuesa and Brunton, 2022). Furthermore, ML can serve as a low-dimensional approach to replace CFD by using deep learning (Yevtushenko et al., 2022; Yevtushenko et al., 2023). For instance, Ferdian et al. (2022) demonstrated that spatiotemporal wall shear stress (WSS) in the aorta can be estimated using a convolutional neural network (CNN) based on the U-Net architecture. They accomplished this by using data of four-dimensional phase-contrast magnetic resonance imaging (4D PC MRI) assessing a three-dimensional velocity field with various image resolutions as input. The success of their approach leads to the question of whether a similar approach could be applied to predict other hemodynamic parameters, such as blood pressure. The major aim of the presented study is to advance the use of artificial neural networks (ANN) for treatment decision support, building upon recent work by Yevtushenko et al. (Yevtushenko et al., 2022), for the calculation of the pressure drop (gradient) in coarctation of the aorta (CoA), using the CNN as an alternative approach to the bidirectional recurrent neural network (BRNN). CoA, a congenital heart disease characterized by aortic narrowing (stenosis), causes a high pressure gradient that affects human circulation (Kenny and Hijazi, 2011; Brown et al., 2013). In addition to introducing a change in network architecture (CNN vs. BRNN), the low-dimensional representation of the aortic shape (one-dimensional (1D) scalar values along the centerline) was replaced with a high-dimensional approach (two-dimensional (2D) cross-sections along the centerline) to potentially improve the ANN performance by providing a more spatially accurate representation of the aortic shape. Furthermore, the study explores several aspects of ANN training, including the use of real versus synthetic aortic shapes with CoA and the impact of different anatomical pathologies, such as gothic arch shapes.
Materials and methods In this study, data for training and testing two different ML models were taken from a database provided in a recently published study (Yevtushenko et al., 2022). These data were derived from image-based CFD simulations of real patients as well as simulations based on synthetically generated boundary conditions, as described in our earlier work (Thamsen et al., 2021). The data generation procedure is briefly summarized below:
• The aortic geometry was manually reconstructed from 3D steady-state free-precession (SSFP) magnetic resonance images (MRI) of the thoracic aorta. Additional information regarding the MRI device, MRI acquisition sequence, and the segmentation procedure, including surface reconstruction for CFD simulations, is described in our previous work (Yevtushenko et al., 2022).
• In this study, real data were extracted from 106 patients with CoA before treatment, 37 patients (a sub-cohort of the 106 CoA patients) after treatment, and 85 healthy subjects, forming a real cohort of 228 cases. This cohort was also used to construct an SSM, which allowed the generation of synthetic cases.
• A subset of 139 cases from the real cohort was used to train the ANNs.
• For accurate flow boundary conditions, 4D PC MRI was used. This included obtaining the inlet velocity profile and peak systolic flow rates at the ascending and descending aorta for the cases where PC MRI data were available. Otherwise, flow boundary conditions were synthetically generated, as described previously (Yevtushenko et al., 2022).
• A synthetic cohort of 2968 cases was generated based on the statistical shape model (SSM), aiming to expand the training database beyond the real cohort. This entailed generating both boundary conditions needed for CFD simulations: the geometry of the aorta and the inlet as well as outlet flow rates. The SSM approach used linear principal component analysis, as described in more detail elsewhere (Thamsen et al., 2021; Yevtushenko et al., 2022); a minimal sketch of this kind of PCA-based sampling is given after this list.
• Hemodynamics of the CoA cases, both real and synthetic, were calculated using the commercial CFD solver Siemens STAR-CCM+, version 13.02 (Siemens PLM Software, Plano, TX, United States). Simulations were performed only at the peak systolic state, to reduce the computational cost of the CFD simulations and because only this state of the cardiac cycle is required for the treatment decision according to the clinical guidelines (Baumgartner et al., 2010).
• To evaluate the performance of the trained ANNs, 18 real cases were reserved for testing. Among these, 13 cases were CoA patients, whereas 5 represented individuals with a healthy aorta. These cases were excluded from the training and validation process, solely reserved for testing, and were also not used for the development of the synthetic cohort, to mitigate data leakage.
This chapter is further subdivided into four subsections (2.1-2.4) describing the data structure and architecture for both ANNs: the 1D BRNN and the 3D CNN. The following subchapter (2.5) describes 4 ML experiments performed within the frame of this study, aiming to assess various aspects of ANN performance. The final subchapter (2.6) provides an overview of the statistical tests used to evaluate the significance of the results.
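The SSM-based synthetic cohort mentioned above relies on linear principal component analysis of the real shapes; the following is only a minimal, generic sketch of such PCA-based shape sampling (the array layout, number of modes, and sampling scale are illustrative assumptions, not the implementation of Thamsen et al.):

import numpy as np
from sklearn.decomposition import PCA

# X: one row per real case, each row a flattened shape descriptor
# (e.g. stacked surface landmark coordinates). Shapes and sizes are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(228, 3000))            # 228 real cases, 1000 landmarks x 3 coordinates

n_modes = 20
pca = PCA(n_components=n_modes).fit(X)

def sample_synthetic_shapes(n_samples, scale=1.0):
    # Draw new shapes by sampling PCA mode weights around the training distribution.
    std = np.sqrt(pca.explained_variance_)              # per-mode standard deviation
    weights = rng.normal(scale=scale, size=(n_samples, n_modes)) * std
    return pca.inverse_transform(weights)               # back to the flattened shape space

synthetic = sample_synthetic_shapes(2968)
print(synthetic.shape)                                   # (2968, 3000)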
One-dimensional bidirectional recurrent neural network data structure The development of an ANN usually starts with a definition of the ANN's output parameters as well as their dimensions and resolutions. In our case, the major aim of ANN development is the prediction of hemodynamic biomarkers that characterize aortic flow. These biomarkers could support clinical decision-making and are typically computed using CFD. CFD primarily calculates pressure and velocity vector fields with high spatial resolution, which allows calculating derived quantities, e.g., pressure drop or WSS, as well as integral parameters such as surface-averaged WSS or the pressure drop between the inlet and outlet. However, the high spatial resolution quantitative data (velocity and/or pressure fields) provided by CFD are not directly employed in clinical decision-making. Therefore, an ML approach presents an opportunity to develop a model capable of directly predicting integral and derived hemodynamic parameters while significantly reducing computational cost, functioning as a form of reduced-order modeling. The following hemodynamic parameters were chosen to be predicted by the initially proposed ANN using the 1D BRNN architecture:
• relative static pressure, mmHg
• wall shear stress (WSS), Pa
• secondary flow degree (SFD), -
• specific kinetic energy (KE), mJ/kg
• average turbulence kinetic energy (average TKE), mJ/kg
• maximum turbulence kinetic energy (maximum TKE), mJ/kg
• average velocity magnitude over the cross-section, m/s
• maximum velocity magnitude over the cross-section, m/s
The selection of hemodynamic parameters to be predicted by the ANN was based on the following considerations: 1. Pressure drop, which is calculated from the relative static pressure curve, was selected because it is a major quantitative clinical biomarker (if invasive catheter-based pressure measurements are performed) used by clinicians to decide whether to treat the CoA. All static pressure curves calculated by CFD or predicted by the ANN were anchored at the inlet to a fixed static pressure value of 120 mmHg. This is done to enable a comparison between ANN and CFD, since CFD is unable to calculate systemic blood pressure and computes the pressure curve only relative to an inlet static pressure defined by the user. 2. Velocity magnitude, which is clinically measured with Doppler echocardiography, was selected because it is usually used to assess the pressure gradient by means of the Bernoulli equation. 3. TKE was proposed because aortic flow is associated with turbulent flow states; the ability of the ANN to predict turbulence parameters is thus indirectly associated with its ability to correctly predict pressure and velocity parameters. 4. WSS was selected because it is currently one of the major hemodynamic biomarkers used in cardiovascular research and is associated with various pathologies, such as atherosclerosis, thrombus formation, aneurysm development, and vessel dilatation. 5.
Specific (volume-normalized) KE and SFD, which is calculated at each cross-section as the ratio of the mean in-plane to the mean through-plane velocity magnitude, were selected because they are hemodynamic parameters of interest: increased values are associated with flow features such as recirculation, helicity, and swirl. These flow features characterize abnormal hemodynamics, which are associated with pathologies (e.g., bicuspid aortic valve (Thamsen et al., 2021)) or cardiovascular diseases (e.g., aortic valve stenosis (Nordmeyer et al., 2019)), and affect other hemodynamic parameters, such as WSS, pressure drop, turbulence, or maximal velocity magnitude. As part of our reduced-order modeling strategy, we proposed to assess the hemodynamic parameters in a 1D centerline-aggregated format, instead of the high-resolution 3D CFD data. This transformation involved the following steps: 1. Generation of a discrete centerline: To start, a discrete centerline along the case-specific surface model of the aorta was generated, with points spaced 2 mm apart. 2. Vessel cross-section creation: At each centerline point, vessel cross-sections were generated, representing a slice of the aorta that is perpendicular to the centerline. The area of each cross-section was calculated. 3. Vessel surface segments: Vessel surface rings were created between neighboring cross-sections, which are essential for calculating surface-averaged WSS values in relation to the respective cross-section point. Additionally, a moving average filter with a window width of 12 neighboring rings was applied to the centerline-based WSS data due to its higher variance (see Figure 1). As a result, the ANN was trained to predict the segment-averaged WSS instead of the exact average value for every given centerline point. 4. Calculation of hemodynamics: At each centerline point, the hemodynamic parameters, including relative static pressure, WSS, SFD, KE, TKE, and velocity, were locally averaged across the cross-sections. Moreover, the maximum cross-sectional values for TKE and velocity magnitude were determined. The ANN was trained to map aorta geometry and blood flow input information to the eight aforementioned hemodynamic parameters, which resembles CFD simulations in a reduced form. The input information was composed of seven features, among them the average velocity magnitude through the cross-section (n × 1), where n denotes the number of centerline points, which was set to 178, aligning with the number of centerline points in the longest aorta within the clinical and synthetic cohorts. Note that most cases had fewer centerline points; consequently, zero padding was applied to all centerline points after the outlet. The radius was derived from the maximum inner sphere that could fit into the aorta at each centerline point. The gradient of the radius was computed using second-order accurate central differences, g_i = (r_{i+1} − r_{i−1}) / (2h) for i = 2, …, n − 1, where r denotes the radius, g represents the gradient of the radius, h signifies the spacing between centerline points, and n stands for the total number of centerline points.
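A minimal sketch of the centerline feature preparation steps described above (numpy-style central differences for the radius gradient, a 12-point moving average for WSS, and zero padding to 178 points); the array names and the exact padding strategy are illustrative assumptions:

import numpy as np

N_POINTS = 178        # fixed sequence length (longest aorta in the cohort)
SPACING_MM = 2.0      # centerline point spacing h

def prepare_centerline_features(radius, wss, window=12):
    """radius, wss: 1D arrays sampled every 2 mm along the case-specific centerline."""
    # Second-order accurate central differences for the radius gradient.
    grad_radius = np.gradient(radius, SPACING_MM)

    # Moving-average smoothing of the noisy segment-wise WSS (window of 12 rings).
    kernel = np.ones(window) / window
    wss_smooth = np.convolve(wss, kernel, mode="same")

    # Zero padding of every feature up to the fixed length of 178 points.
    def pad(x):
        out = np.zeros(N_POINTS)
        out[: len(x)] = x
        return out

    return pad(radius), pad(grad_radius), pad(wss_smooth)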
g_i = (r_{i+1} − r_{i−1}) / (2h), i = 2, …, n − 1,

where r denotes the radius, g represents the gradient of the radius, h signifies the spacing between centerline points, and n stands for the total number of centerline points. Blood flow at each centerline point was derived from the ascending inlet flow and the outlet flows of the branching vessels at the aortic arch: the initial ascending inlet flow rate is diminished by the outlet flow after each bifurcation. Finally, the average velocity magnitude through each cross-section was computed by dividing the flow rate values by the area of the circular cross-section:

v_i = f_i / (π r_i²), i = 1, …, n,

where v stands for the velocity through the cross-section, f denotes the blood flow, and n represents the total number of centerline points. An example of input and output data for a patient with CoA can be found in Figure 1.

One-dimensional bidirectional recurrent neural network architecture

The centerline-aggregated parameters of aortic flow can be considered as duct flow of blood in one direction. Hemodynamic values at any given centerline point are influenced by those preceding and following it; therefore, an ANN capable of capturing sequential dependencies is desirable. Hence, a recurrent neural network (RNN) was employed to predict hemodynamics along the centerline. The core ML approach was based on the model published earlier (Yevtushenko et al., 2022), and the implementation was done using TensorFlow 2.12.0 in Python (see Figure 2). It consisted of three major components:

• a long short-term memory (LSTM) BRNN,
• a densely connected neural network layer (dense layer) with a Leaky rectified linear unit (ReLU) activation function, and
• an additional dense layer for the hemodynamic outputs.

Leaky ReLU addresses the so-called dying ReLU problem, a limitation observed in the traditional ReLU activation function. Instead of assigning a gradient of 0 to all negative input values, Leaky ReLU introduces an extremely small linear component for negative inputs (Xu et al., 2015). The BRNN consists of two RNNs, one trained on the original input sequence and the other on a reversed sequence. The hidden states of both RNNs are merged; in our case, the forward and backward outputs were concatenated, resulting in double the number of outputs fed into the next layer. The Leaky ReLU activation function with a slope of 0.3 for negative values was used in the following dense layer. Finally, another dense layer is added to map the outputs from the previous layer to hemodynamic values, resulting in an output space with a dimensionality equal to the number of output features.

We named this model 1D BRNN, highlighting the dimensionality of the training data and the core ANN. The 1D BRNN was trained with an initial learning rate of 0.001 and a batch size of 50. The learning rate was exponentially decreasing with the number of epochs. To optimize hyperparameters, 10-fold cross-validation was performed for each hyperparameter separately, with the optimal value found for each one indicated in brackets:

• scaling of input and output data (no scaling),
• loss function (masked root mean square error (RMSE)),
• optimizer (Adam).

Adam optimization is a stochastic gradient descent method based on adaptive estimation of first-order and second-order moments (Kingma and Ba, 2014). The masked RMSE considers only data within the aorta (non-zero output data) to compute the error.
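As an illustration of how the components just described (a bidirectional LSTM, a dense layer with Leaky ReLU, a linear output layer, and an RMSE loss masked to non-zero outputs) could be assembled, a minimal TensorFlow/Keras sketch follows. It is not the authors' exact implementation; the decay schedule parameters and the masking convention are assumptions, while the unit counts (200/800), the slope of 0.3, the initial learning rate, and the optimizer are taken from the text.

```python
import tensorflow as tf

N_POINTS = 178      # centerline points (zero-padded to the longest aorta)
N_IN = 7            # input features per centerline point
N_OUT = 8           # predicted hemodynamic parameters per centerline point

def masked_rmse(y_true, y_pred):
    # Only points inside the aorta (non-zero targets) contribute to the error.
    mask = tf.cast(tf.reduce_any(tf.not_equal(y_true, 0.0), axis=-1, keepdims=True), tf.float32)
    sq_err = tf.square(y_true - y_pred) * mask
    return tf.sqrt(tf.reduce_sum(sq_err) / (tf.reduce_sum(mask) * N_OUT + 1e-8))

def build_1d_brnn(lstm_units=200, dense_units=800):
    inputs = tf.keras.Input(shape=(N_POINTS, N_IN))
    # Forward and backward LSTM outputs are concatenated (2 * lstm_units per point).
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(lstm_units, return_sequences=True))(inputs)
    x = tf.keras.layers.Dense(dense_units)(x)
    x = tf.keras.layers.LeakyReLU(alpha=0.3)(x)          # slope 0.3 for negative inputs
    outputs = tf.keras.layers.Dense(N_OUT)(x)             # linear map to hemodynamic values
    model = tf.keras.Model(inputs, outputs)
    # Exponentially decaying learning rate; decay_steps/decay_rate are assumed values.
    lr = tf.keras.optimizers.schedules.ExponentialDecay(1e-3, decay_steps=1000, decay_rate=0.96)
    model.compile(optimizer=tf.keras.optimizers.Adam(lr), loss=masked_rmse)
    return model

model = build_1d_brnn()
model.summary()
```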
It is worth noting that conducting a grid search to explore all possible hyperparameter combinations could have resulted in a different set of optimal hyperparameters. Due to its time-consuming nature, this approach was mostly avoided in this research. A coarse grid search was, however, used to determine the best combination of units (output dimensionality) for the LSTM cells and the dense layer following the BRNN. The resulting 200 and 800 units, respectively, were found to provide the best performance. Once the optimal hyperparameters had been selected, the final model was trained on the entire training dataset. To prevent overfitting, a small validation dataset was still retained, and early stopping was used. Specifically, if the validation loss did not improve for 20 epochs in a row, the training process was stopped.

One important issue to consider is the potential for data leakage from the training to the validation datasets, as the synthetic cohort was generated from the SSM of the real cohort. However, if the estimation of general performance is equally biased for all validation datasets within the cross-validation experiments, data leakage may not be a significant concern. After all, the main objective is to identify the best hyperparameters for the final model, which can also be based on relative performance. The held-out test set was then used for the least biased estimation of general performance.

Three-dimensional convolutional neural network data structure

The centerline-aggregated method, which involves averaging cross-sectional surface values of hemodynamic parameters along the aorta centerline, as in the 1D BRNN approach, could result in a loss of potentially valuable information and, consequently, worse ANN performance. By retaining geometric cross-sectional surface information and employing a different ML approach, a more accurate hemodynamic predictor could potentially be achieved. In this alternative approach, the cross-sectional surface values were stacked into a 3D array instead of using 1D values along the centerline.

To prepare the input and output data for training, 80 cross-sectional planes were extracted along the aorta centerline. Each cross-sectional plane had an initial resolution of 100 × 100, which was further decimated to 48 × 48. The decimation process involved applying a low-pass filter, specifically a Gaussian filter, to smooth the image before downsampling. This step helps to prevent aliasing artifacts. The standard deviation (σ) of the 2D Gaussian kernel was determined based on the following equation:

σ = (s − 1) / 2,

where s is the ratio between the input dimension and the desired downsampled dimension. In our case, s = [100, 100] / [48, 48] ≈ [2.08, 2.08], which results in σ ≈ [0.54, 0.54].

The spacing between the planes was set to 4 mm, twice the spacing used for training the 1D BRNN model. Note that not all 80 cross-sectional planes were needed for each case, as the aorta length varies from case to case. For all planes outside of the aorta, all input and output values were set to zero (zero padding).
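As a concrete illustration of the decimation step, the sketch below smooths a 100 × 100 cross-sectional map with a Gaussian filter whose σ follows the relation above and then resamples it to 48 × 48. The use of scipy and linear-interpolation resampling is an assumption for illustration, not necessarily the authors' exact pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def downsample_plane(plane, out_shape=(48, 48)):
    """Anti-aliased downsampling of one cross-sectional plane."""
    in_shape = np.array(plane.shape, dtype=float)
    s = in_shape / np.array(out_shape, dtype=float)   # e.g. 100 / 48 ≈ 2.08 per axis
    sigma = (s - 1.0) / 2.0                            # ≈ 0.54 for 100 -> 48
    smoothed = gaussian_filter(plane, sigma=sigma)     # low-pass filter to avoid aliasing
    return zoom(smoothed, 1.0 / s, order=1)            # resample to the target resolution

plane = np.random.rand(100, 100)                       # stand-in for a pressure map
small = downsample_plane(plane)
print(small.shape)                                     # (48, 48)
```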
The maximum length of the aorta that the 3D ML model could handle is 320 mm (80 cross-sections × 4 mm), which does not align with the maximum length used for the 1D BRNN, which was 356 mm (178 centerline points × 2 mm). The reason behind this decision lies in the design of the U-Net model (see Figure 3), which requires dimensionalities that can be divided by 2 at least three times (three decoder and encoder layers) to ensure the same input and output dimensions. The input data consisted of the same 7 features used for training the 1D BRNN, including:

• radius (48 × 48 × n × 1),

where n is equal to the number of cross-sectional planes, which was 80. The output data consisted of only one feature, which was blood pressure (see Figure 4). Only one feature was selected due to GPU RAM limitations, to ensure a reasonably high batch size. Static pressure was chosen because it is the most predictive hemodynamic parameter for diagnosing patients with CoA.

To summarize, the training data consisted of an input data array of size (48, 48, 80, 7) and an output data array of size (48, 48, 80, 1). The first and second dimensions represent the height and width of each cross-sectional plane, respectively, the third dimension represents the number of cross-sectional planes, and the last represents the number of input or output features. Note that only the cross-sectional plane grid coordinates and the pressure contain 3D information, whereas the remaining input features, including radius, gradient of radius, blood flow, and velocity through cross-sections, are scalar values. To accommodate the 3D data requirement, one approach is to assign constant values to the entire cross-sectional plane to represent scalar features (see the second and third rows of Figure 4). Our goal was to investigate whether including 3D geometric information provides additional insight for improving the prediction of the pressure course and pressure drop (gradient).

Three-dimensional convolutional neural network model architecture

For the task of predicting the spatial distribution of blood static pressure, a CNN inspired by the WSSNet (Ferdian et al., 2022) model was selected. The authors of that work were able to predict WSS in the aorta from velocity sheets and coordinate flat maps using a U-Net-shaped CNN architecture. Given the similarities between the tasks of predicting WSS and blood pressure, the WSSNet was adapted for the current task.

The CNN, referred to as 3D CNN in this study, consisted of three encoder and decoder blocks, with each block comprising two convolutional layers that used a 3 × 3 × 3 filter size and the ReLU activation function. Batch normalization was applied at the end of each block. The encoder blocks used max pooling with a step size of 2 × 2 × 2, while the decoder blocks used transpose convolution instead of bilinear upsampling. One difference between these two methods is that the filter weights in bilinear upsampling remain constant during training, whereas in transpose convolution they are trainable parameters. A filter size of 2 × 2 × 2 with a step size of 2 × 2 × 2 was set to avoid overlapping.
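A minimal Keras sketch of a U-Net-style 3D CNN with the ingredients described above (two 3 × 3 × 3 convolutions per block, batch normalization, 2 × 2 × 2 max pooling in the encoder, and 2 × 2 × 2 transpose convolutions in the decoder) is given below. The filter counts and the skip connections are assumptions for illustration; the actual WSSNet-derived architecture may differ in these details.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two 3x3x3 convolutions with ReLU, followed by batch normalization.
    x = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    return layers.BatchNormalization()(x)

def build_3d_cnn(input_shape=(48, 48, 80, 7), base_filters=16):
    inputs = tf.keras.Input(shape=input_shape)
    skips, x = [], inputs
    # Encoder: three blocks, each halving the spatial dimensions (48->24->12->6, 80->40->20->10).
    for i in range(3):
        x = conv_block(x, base_filters * 2 ** i)
        skips.append(x)
        x = layers.MaxPooling3D(pool_size=2)(x)
    x = conv_block(x, base_filters * 8)                   # bottleneck
    # Decoder: 2x2x2 transpose convolutions (trainable upsampling) plus skip connections.
    for i in reversed(range(3)):
        x = layers.Conv3DTranspose(base_filters * 2 ** i, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips[i]])
        x = conv_block(x, base_filters * 2 ** i)
    outputs = layers.Conv3D(1, 1, activation=None)(x)     # static pressure per voxel
    return tf.keras.Model(inputs, outputs)

model = build_3d_cnn()
model.summary()
```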
The 3D CNN was trained on an NVIDIA GeForce RTX 3090 Ti with 24 GB of memory. The initial learning rate was set to 0.0001, and the Adam optimizer with an exponentially annealing learning rate was used to find the optimal weights. The model performance was evaluated using the masked RMSE loss function, which ignores all values outside the aorta. A batch size of 8 was used for training. While the output was scaled by the standard deviation, the input features were left unscaled, because training the model with both scaled input and output features resulted in less accurate results. The measurement units of the features can be seen in Figure 4. Note that the training of the 3D CNN takes considerably longer compared to the 1D BRNN, primarily due to the former's higher number of trainable parameters (5,600,000) compared to the latter (660,000).

1D BRNN

To evaluate the performance of the 1D BRNN, the following scalar parameters were selected:

• inlet-outlet pressure drop (PD),
• maximum wall shear stress (WSSmax), and
• maximum velocity magnitude at the stenosis region (Vmax).

PD is equal to the difference between the inlet and outlet pressure. These parameters are of particular interest in patients with CoA, as they are known to exhibit a high pressure drop, elevated WSS, and abnormal velocity profiles near the narrowed section of the aorta.

Bland-Altman plots were used to assess the degree of agreement between the parameters obtained from the reference method (CFD) and the predictions from the ML models. Several 1D BRNN models with the optimal hyperparameters and random weight initialization were trained. Among these models, the one with the lowest validation loss was selected for the final analysis.

Real versus synthetic training cohorts

To investigate the potential impact of training solely on real clinical or synthetic cases, two separate models were trained. The first model utilized the 139 cases available from the clinical cohort, while the second model was trained on 139 cases randomly selected from the larger synthetic cohort. The purpose of selecting a smaller training subset from the synthetic cohort, equal in size to the clinical cohort, was to avoid bias that might arise if one model performs better simply because it is trained with more data. Both models were trained using the same optimal hyperparameters identified in the section on the 1D ML model architecture (see Section 2.2), changing only the batch size from 50 to 10 to increase the number of training iterations per epoch. It is generally considered advantageous if each epoch consists of multiple training iterations, as the weights go through the tuning phase more frequently. The dataset was further split into training and validation sets, with 116 cases used for training and the remaining 23 cases for validation. The training process was repeated five times, and the run with the best performance, determined by the lowest validation loss, was selected for further analysis. Subsequently, the Bland-Altman plots of both models were compared.
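For reference, the Bland-Altman comparison itself reduces to a few lines of numpy/matplotlib. The sketch below uses placeholder PD arrays and is only meant to show how the bias and the limits of agreement are obtained; it is not the exact plotting code used for the figures.

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(reference, prediction, label="PD [mmHg]"):
    reference, prediction = np.asarray(reference), np.asarray(prediction)
    mean_vals = (reference + prediction) / 2.0
    diff = prediction - reference                    # prediction error per test case
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)                    # 95% limits of agreement
    plt.scatter(mean_vals, diff)
    for y, style in [(bias, "-"), (bias + loa, "--"), (bias - loa, "--")]:
        plt.axhline(y, linestyle=style, color="gray")
    plt.xlabel(f"Mean of CFD and ML {label}")
    plt.ylabel(f"ML - CFD {label}")
    plt.show()
    return bias, loa

# Example with made-up values for 18 test cases:
cfd = np.random.uniform(5, 45, size=18)
ml = cfd + np.random.normal(-1.0, 7.0, size=18)
print(bland_altman(cfd, ml))
```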
Gothic versus non-gothic versus mixed cohorts

To explore the effect of anatomical pathologies, particularly the gothic aortic arch (Ou et al., 2004), yet another experiment was conducted. Both pathological shapes, CoA and the gothic aortic arch, are associated with a pathologically high pressure drop. However, the flow phenomena causing these pressure drops are different. The pathology of the CoA is associated with an increased pressure drop in the aorta due to stenosis. Stenosis (vessel narrowing) is a type of so-called form resistance: flow separates downstream of the narrowing, forming a recirculation zone, which is associated with energy loss (pressure drop). The gothic aortic arch, on the other hand, occurs when the width of the aorta (the distance between the ascending and descending aorta) becomes narrow and the height of the arch is not maintained (Seo et al., 2015). The aortic arch, being a curved duct, represents another kind of form (shape) resistance. Curved vessels, especially those with high curvatures, also cause flow separation due to centrifugal force, resulting in a pressure drop. However, the pressure drop due to vessel curvature in non-gothic shapes is usually negligible (Goubergrits et al., 2015; Bouaou et al., 2019). An ANN trained to predict the pressure drop in aortas without a gothic shape is not necessarily able to predict the pressure drop caused by a gothic aortic arch. Thus, the experiment aims to investigate whether there is a difference in performance when the 1D BRNN is trained only on cases with a gothic aortic arch. Three different models were trained using the following training data from the synthetic cohort:

• 227 synthetic cases with gothic-shaped aortas (gothic cases),
• 259 synthetic cases with non-gothic-shaped aortas (non-gothic cases),
• 112 synthetic gothic cases and 130 synthetic non-gothic cases (mixed cases).

Only synthetic cases were used for training the 1D BRNN model in this experiment because only a small fraction (about 20) of the 139 cases of the real clinical cohort can be considered cases with a gothic-shaped aorta.

(Figure caption: An exemplary real CoA case (same as in Figure 1) with the shape represented by cross-sections. The red labeled cross-section marks the stenosis site with the lowest diameter. Seven input cross-sections and one output parameter, calculated by CFD and prepared for the ANN training, are shown.)

To examine and compare the performance of these models, a new held-out test dataset was created. The original test dataset had an imbalanced distribution with only 1 gothic case and 17 non-gothic cases. To address this, a new test dataset was formed with an equal number of gothic and non-gothic cases. Specifically, 15 gothic and 15 non-gothic cases from the clinical cohort were selected for testing. Note that these cases were previously used in the creation of the synthetic cohort, which introduces the possibility of data leakage from the training data to the testing data.
Similar to the previous experiment, each training was repeated five times with the same hyperparameters. The only difference was the batch size, which was set to 20. The dataset was further split into training and validation data with a ratio of 80/20. After the five training sessions, the model with the lowest validation loss was selected for further performance assessment on the testing data. The Bland-Altman plots of all three models were compared. The impact of the aortic arch shape on the ANN was investigated only for the PD parameter, since the pressure drop is the main factor that can be affected by the aortic arch shape.

1D BRNN versus 3D CNN

To assess the influence of architecture and training data dimensionality on the performance of the ANN, the 3D CNN described in Section 2.4 was trained. In order to compare the 1D BRNN and the 3D CNN, Bland-Altman plots for PD were plotted. The cross-sectional pressure values (3D CNN predictions) were averaged along the centerline of the aorta to align the output dimensionality to 1D. This allows for a comparison of both model outputs. Furthermore, since PD is a scalar value, it does not provide information about the pressure profile along the aorta centerline. To assess the agreement of the pressure curves, the RMSE was calculated:

RMSE = sqrt( (1/n) Σ_{i=1..n} (p_i − p̂_i)² ),

where n denotes the number of centerline points, p_i the pressure prediction by the 3D CNN or 1D BRNN at the i-th centerline point, and p̂_i the CFD ground-truth pressure value at the i-th centerline point.

Statistical analysis

Statistical analysis was conducted using IBM SPSS Statistics software, version 28 (IBM, United States). For normally distributed parameters, mean and standard deviation were reported; normality of distribution was assessed using a Shapiro-Wilk test. Non-normally distributed parameters were described using the median and interquartile range [IQR]. A paired two-tailed Student's t-test was used to test for significant differences between normally distributed parameter differences, whereas Wilcoxon signed-rank tests were used for non-normally distributed parameter differences. All tests used a standard significance level of 0.05.

(Figure caption: Bland-Altman plots depicting the difference (y-axis) between the predicted (1D BRNN) and reference (CFD) values against the mean of these values (x-axis) for PD, WSSmax, and Vmax. The plots are based on 18 test cases that were not part of the training set and were also not used in creating the synthetic cohort.)

1D bidirectional recurrent neural network performance

In Figure 5, the Bland-Altman analysis of the mean prediction error for PD shows a small, non-significant underestimation of −1.00 ± 7.04 mmHg by the 1D BRNN for the 18 test cases (BRNN: 16.52 [7.17-48.15] mmHg vs. CFD: 14.79 [9.96-40.79] mmHg, paired Wilcoxon test, p = 0.616). On the other hand, the model tends to significantly overestimate WSSmax, with an average error of 7.06 ± 8.08 Pa (43.38 ± 26.14 Pa vs. 36.31 ± 25.05 Pa, paired Student's t-test, p = 0.002). However, it is worth noting that both WSSmax values are considerably higher than the physiologic values of a few (<10) Pa in a healthy aorta (Callaghan and Grieve, 2018). Regarding the velocity, the Vmax error also indicates a significant overestimation of 0.39 ± 0.43 m/s (2.93 ± 1.11 m/s vs.
2.54 ± 1.17 m/s, paired Student's t-test, p = 0.002). However, according to the Bernoulli equation (PD = 4 × Vmax²), this difference corresponds to an approximate PD of 0.6 mmHg, which is clinically negligible. Furthermore, the plots reveal a slight negative trend for Vmax, meaning that as the magnitude of the velocity increases, the errors tend to shift toward the lower part of the interval.

The 1D BRNN performance analyzed here is based on the model trained with non-scaled data. Throughout the model development process, various scaling methods were tested for both input and output data, revealing significant differences in the training and validation loss curves (Figure 6). Notably, when the data were scaled, either standardized or normalized, the 1D BRNN model showed signs of overfitting. This is evident in Figure 6, where the training loss continues to decrease while the validation loss either converges (in the case of normalization) or even increases (in the case of standardization). Consequently, the final model was trained with non-scaled data.

Impact of a synthetic cohort on ANN training: real vs. synthetic cohorts

The model trained on clinical (real) cases outperformed the one trained on synthetic cases, as observed in the Bland-Altman plots (Figure 7). This is evident from the lower standard deviation of the errors for PD (5.80 mmHg compared to 10.60 mmHg), WSSmax (7.20 Pa compared to 8.53 Pa), and Vmax (0.45 m/s compared to 0.60 m/s). Additionally, the model trained on real cases exhibited less bias for all three parameters. The differences between the 1D BRNN models trained with real and synthetic cases on the 18 real test cases were significant for all three predicted hemodynamic parameters: PD (Wilcoxon test, p = 0.006), WSSmax (Wilcoxon test, p < 0.001), and Vmax (3.13 ± 1.47 m/s vs. 2.92 [2.04-4.81] m/s, Wilcoxon test, p < 0.001).

Impact of an aortic arch shape on ANN training: gothic vs. non-gothic vs. mixed cohorts

Statistical analysis of the predictions from the three models versus the CFD results for all 30 test cases revealed significant differences in the mean error. The gothic model had a significantly higher mean error of −8.41 ± 9.46 mmHg (paired Student's t-test, p < 0.001 for both comparisons) compared to the non-gothic and mixed models, which had −2.76 ± 8.80 mmHg and −4.27 ± 10.10 mmHg, respectively. However, the difference in errors between the non-gothic and mixed models was non-significant (paired Student's t-test, p = 0.092).

Figure 8 illustrates Bland-Altman plots for all three trained models, separated into gothic and non-gothic test cases. The model trained with only gothic cases had a higher mean PD prediction error for gothic test cases compared to non-gothic cases (−11.35 ± 9.18 mmHg vs. −5.48 ± 9.08 mmHg). However, this difference was not significant (Student's t-test, p = 0.089). Similar results were found for the model trained with non-gothic cases (−5.05 ± 5.05 mmHg vs. −0.48 ± 11.13 mmHg) as well as for the mixed model (−8.53 ± 8.16 mmHg vs. −0.01 ± 10.59 mmHg). For the non-gothic model, the differences were not statistically significant (paired Student's t-test, p = 0.158), whereas for the mixed model the predictions for non-gothic test cases were significantly more accurate (paired Student's t-test, p = 0.018).
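The paired comparisons reported in this and the surrounding sections follow the scheme given under Statistical analysis: a Shapiro-Wilk normality check on the paired differences, followed by either a paired t-test or a Wilcoxon signed-rank test. A minimal scipy sketch with placeholder error samples (the values are made up and only illustrate the workflow):

```python
import numpy as np
from scipy import stats

def compare_paired(a, b, alpha=0.05):
    """Paired comparison of two error samples (e.g., PD errors of two models)."""
    a, b = np.asarray(a), np.asarray(b)
    diff = a - b
    # Shapiro-Wilk on the paired differences decides which test is appropriate.
    _, p_normal = stats.shapiro(diff)
    if p_normal > alpha:
        stat, p = stats.ttest_rel(a, b)
        test = "paired t-test"
    else:
        stat, p = stats.wilcoxon(a, b)
        test = "Wilcoxon signed-rank test"
    return test, stat, p

# Placeholder PD errors (mmHg) of two models on the same 30 test cases:
gothic_err = np.random.normal(-8.4, 9.5, size=30)
non_gothic_err = np.random.normal(-2.8, 8.8, size=30)
print(compare_paired(gothic_err, non_gothic_err))
```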
Interestingly, in predicting gothic test cases, the model trained with non-gothic cases significantly outperformed the gothic model (paired Student's t-test, p < 0.001) and the mixed model (paired Student's t-test, p = 0.01). For non-gothic test cases, the performance of the gothic model was significantly less accurate compared to the non-gothic (paired Student's t-test, p = 0.013) and mixed models (paired Student's t-test, p = 0.003), with no significant difference between the non-gothic and mixed models (paired Student's t-test, p = 0.668).

Impact of an architecture on ANN training: 1D vs. 3D

Based on the analysis of the mean and standard deviation of the prediction errors (Figure 9), the 3D CNN exhibits a slightly higher error (1.27 ± 8.91 mmHg) compared to the 1D BRNN (−1.00 ± 7.04 mmHg) in predicting PD. However, statistical analysis found no significant differences when comparing PD values predicted by the 3D CNN to those calculated by CFD (Wilcoxon test, p = 0.711), nor between PD values predicted by the 3D CNN and those predicted by the 1D BRNN (Wilcoxon test, p = 0.528). The higher standard deviation of the 3D CNN error can be primarily attributed to one outlier (Figure 10), where both the 1D BRNN and the 3D CNN overestimated the pressure recovery after the stenosis.

Similar results were obtained when examining the point-wise RMSE of the pressure curves (Figure 10). The 3D CNN exhibited a lower median RMSE (3.23 mmHg) compared to the 1D BRNN (4.25 mmHg); however, this difference was not significant (paired Wilcoxon test, p = 0.879). The IQR was also narrower for the 3D CNN. However, it should be noted that there were two outliers in the case of the 3D CNN, which did not appear in the results of the 1D BRNN.

Overall, it can be concluded that the 1D BRNN demonstrated slightly better accuracy in predicting PD, while the 3D model marginally outperformed the 1D model in terms of pressure profile accuracy.

Finally, a sensitivity and specificity analysis was conducted on the test cases using a PD threshold of 20 mmHg, following current guidelines that recommend intervention if the peak-to-peak coarctation gradient exceeds 20 mmHg at rest (Mercuri et al., 2020). The CFD results served as the ground truth and were compared with the 1D BRNN, which performed slightly better than the 3D CNN in predicting PD (Figure 9). The results of this analysis are as follows: 7 cases were true positive, 9 cases were true negative, 1 case was false positive, and 1 case was false negative. This yields a sensitivity of 87.5% (7/8) and a specificity of 90% (9/10).

Discussion

The selection of a BRNN for predicting hemodynamics along the 1D centerline of the aorta appears to be a reasonable choice, considering that centerline points exhibit sequential dependencies. BRNNs are well-suited for modeling such dependencies in both the forward and backward directions. Furthermore, research has suggested that CNNs trained on small datasets with a saturated activation function like Leaky ReLU perform better than those trained with the standard ReLU, which has a zero-slope part for negative values (Xu et al., 2015).
Both of our models, the 1D BRNN and the 3D CNN, were trained with unscaled data, with the exception of the 3D CNN output data, which were scaled by the standard deviation. Surprisingly, the models trained on non-scaled data performed best, which is not aligned with the common machine learning practice of scaling the training data. In our case, standardization and normalization of the data resulted in overfitting of the 1D BRNN, where the training loss continued to improve while the validation loss converged or even increased. Similarly, although scaling did not result in overfitting for the 3D CNN, this model also performed better with unscaled input data. This raises the question of why the models trained with unscaled input data show better accuracy. One possible explanation is that certain input features with smaller scales (e.g., centerline coordinates) might be less predictive, implying they contain less information about the hemodynamic outputs obtained from the ANN. Essentially, because the less predictive features have smaller scales and the more predictive features have larger scales, the more predictive features might naturally have a greater impact on the output. This is exactly the scenario we usually want to avoid in machine learning, where we aim to prevent the scale of the input and output data from introducing biases. However, in our case, these biases might actually have been beneficial. It is important to emphasize that the more predictive features in the ANN model do not necessarily align with the physically most important factors in the hemodynamic calculation.

To optimize the weights, the Adam optimizer was used, which yielded the best results in the cross-validation experiments. This is not surprising, as it has already been shown before (Kingma and Ba, 2014) that Adam can be a better alternative to other optimizers such as stochastic gradient descent (SGD) and root mean squared propagation (RMSProp).

The performance of the 1D BRNN in predicting PD, with a standard deviation of the error of 7.03 mmHg, suggests that this approach has the potential to be used in clinical practice for the diagnosis of CoA patients. The high sensitivity and specificity, both around 90%, further indicate that the model's error would not be a limiting factor for clinical diagnosis. Moreover, the model consistently and accurately predicts the position of the stenosis (where the static relative pressure drops) in all CoA test cases.

To improve the reliability of the PD prediction, the introduction of confidence intervals could be considered. If the predicted PD value falls within the range of one standard deviation of the error (7.03 mmHg), additional investigation would be recommended. To a certain extent, adhering to confidence intervals already exists in clinical practice, where doctors using cardiac catheterization to determine the peak-to-peak pressure gradient acknowledge an error margin of around 5 mmHg (Yevtushenko et al., 2022).

Furthermore, the 1D BRNN showed consistent errors in the prediction of PD, WSSmax, and Vmax across a wide intensity spectrum, as evidenced by the Bland-Altman plots (Figure 5). The absence of outliers as hemodynamic values increase indicates that the model has the potential to generalize well to a broader patient population.
It turns out that training the CNN with the 3D aorta geometry and pressure distribution does not result in improved predictions of PD, although the pressure curves showed a slight improvement in accuracy. Moreover, this approach has its drawbacks, including the increased spacing between cross-sections (4 mm instead of 2 mm) and the limitation of predicting only static pressure. This decision was made to maintain a reasonable batch size for training, which was 8 in our case. If additional output features such as a 3D velocity field were added, or the spacing between cross-sections were reduced, the batch size would need to be decreased to fit into GPU RAM. It is preferable to have a reasonably high batch size (with 32 being a good rule of thumb), as lower batch sizes can lead to less accurate predictions (Radiuk, 2017). It is worth noting that a two-step 3D model could be developed, where the first step involves a rough calculation of the 3D velocity fields inside the aorta, using CFD-calculated velocity fields for training. The information from this step could then be used in a second step for predicting the pressure fields. In other words, we could train the model to predict velocity fields instead of pressure and still be able to predict pressure. One potential solution to address the GPU RAM limitation could be reducing the dimensions of the cross-sectional planes. For instance, resizing the 3D input and output arrays from (48, 48, 80) to (24, 24, 160) would reduce the overall array size by a factor of 2 while also reducing the spacing to 2 mm. However, it was found that this resizing approach results in lower accuracy of the pressure profile prediction, with a median error of 4.34 mmHg, which is 34% higher than the 3.23 mmHg obtained with the non-resized solution. Despite experimenting with various shapes, the final shape chosen for the input and output arrays was (48, 48, 80), as it yielded the best results.

The lower batch size for the 3D CNN (8) compared to the 1D BRNN (50), imposed by GPU RAM limitations, highlights the challenge posed by the higher-order complexity of transitioning from 1D to 3D. Additionally, there is redundant information among the input features, such as radius, gradient of radius, flow, and velocity through cross-sections, which are originally 1D features but need to be presented as 3D arrays (with constant values on the cross-sections) to comply with the 3D CNN architecture requirements. Although CNNs are commonly used in state-of-the-art machine learning practice, especially for classifying and segmenting imaging data, their ability to model sequential dependencies, similar to RNNs, is questionable.

In conclusion, a better-suited machine learning architecture could likely be identified by combining the strengths of both a CNN (convolutional layers) and a BRNN (modeling of sequential dependencies), while reducing the information redundancy and retaining the 3D geometry.
The experiments with different training data were conducted to investigate whether there are any significant alterations in performance, which can provide insights into the underlying distributions of the different datasets. Interestingly, the 1D BRNN performs better when trained solely on real cases compared to when trained exclusively on synthetic cases. This difference in performance suggests that there may be variations in the data distributions between the two cohorts. It is possible that the synthetic cohort does not fully represent the entire distribution of the clinical cohort, leading to the underperformance of the model trained on synthetic data.

The model trained on the clinical cohort is expected to generalize better on unseen data if its training data better reflect the true distribution of the population. Although the distributions of the flow hemodynamics (PD, WSSmax, and Vmax) are similar between the real and synthetic cohorts, it has been observed before (12) that the stenosis degree and stenosis position distributions do not match well between the clinical and synthetic cohorts, which might suggest potential flaws in the construction of the synthetic cohort.

Interestingly enough, there were no significant improvements in the performance of the 1D BRNN when it was trained with gothic, non-gothic, or mixed (50% gothic and 50% non-gothic) synthetic cases. The results of this experiment suggest that, despite some statistically significant differences between the models, they could not effectively capture the distinctions between the gothic and non-gothic cases. The outcome was somewhat unexpected, as one might assume that a model would perform better for the kind of cases it was trained on.

Moreover, these results could also be attributed to the fact that all the training cases come from the synthetic cohort, which was found not to be equivalent to the clinical cohort in terms of producing comparable model accuracy. What is particularly interesting is that the gothic model performed significantly worse than the non-gothic model for both gothic and non-gothic test cases, suggesting that adding another kind of flow resistance (i.e., the aortic arch) might have confused the ANN model. In other words, the model was not able to differentiate between the pressure drop caused by the aortic curvature and that caused by the narrowing. It also seems that the model trained with non-gothic cases is able to learn to predict the pressure drop in gothic cases. This finding, that even a model trained without gothic cases can predict PD reasonably well for gothic cases, is important, since gothic cases are seldom found in a real cohort and it is challenging to collect a large number of clinical gothic cases to train an ANN.
However, there is reasonable doubt about the validity of the results from the gothic experiment. Firstly, looking at Figure 8, some cases can be distinctly identified where the mean value between the predicted and reference values is negative. It is quite possible that the model predicts a higher pressure at the outlet than at the inlet, resulting in a negative pressure drop. This situation may occur when the pressure remains relatively constant throughout the aorta, which is typical for healthy subjects without stenosis. This showcases one of the limitations of ML models: they lack awareness of the physical reality of the problem they attempt to solve. One approach to tackling this limitation is to "teach" the model what is physically plausible by introducing penalty terms into the cost function. For example, during the training process, if the model predicts a negative pressure drop, a penalty term could be added to the cost function to discourage this outcome.

Secondly, some cases show extremely high errors, reaching magnitudes of 100%. This could stem from training exclusively on synthetic cases and from a relatively small training sample size. Our findings, illustrated in Figure 7, indicate that the models trained only on synthetic cases perform significantly worse than those trained only on real cases. In this experiment we used gothic cases from the synthetic cohort, as there are few such cases among the real ones.

Further, the hemodynamic distributions among all three groups are similar. However, upon closer examination, it was found that gothic cases within the clinical cohort exhibit more severe hemodynamics compared to non-gothic real cases. The same observation does not hold true for the synthetic cohort, suggesting that the synthetic cases may not accurately capture the impact of the gothic arch on hemodynamics.

Considering these findings, further experimentation with different methods for generating synthetic cases should be conducted in the future. Improving the synthetic cohort could potentially enhance the performance of the ANN even further.

Finally, we must note that the current study is limited by the quasi-steady CFD model employed, which only calculated peak systolic flow conditions of a pulsatile flow. Unsteady flow simulations of the aortic flow are associated with high computational costs, which are, based on our experience, approximately 10-fold those of a peak-systolic flow simulation. This is because at least two heart cycles have to be simulated to achieve a time-independent solution, and because the usual time steps of between 0.0004 s and 0.005 s (Qin et al., 2023) used for aortic flow simulations result in several hundred to thousands of time steps to be simulated. Unsteady flow simulations are necessary to assess the velocity and pressure fields accurately, especially during phases of high flow acceleration and deceleration. However, during peak systole, which is the single time point used for the CoA pressure gradient assessment according to the clinical guideline, the impact of flow unsteadiness is considered to be negligible. This assumption for the CFD model used in our study is supported by a set of clinical validation studies against in vivo catheter-measured pressure gradients and 4D PC MRI measurements (Mirzaee et al., 2017; Bouaou et al., 2019; Goubergrits et al., 2019; Shi et al., 2019).
Conclusion

This study showcases the potential of ML methods to replace CFD, effectively mitigating computational costs and facilitating their integration into clinical practice. The inclusion of 3D geometric and pressure information does not boost ANN accuracy in predicting PD. This leads to the conclusion that condensing geometric and flow hemodynamics information into a 1D representation along the aorta centerline is a reasonable simplifying approach. The introduction of a synthetic cohort to augment geometric and hemodynamic variability did not yield an improvement in ANN performance, but it does demonstrate the suitability of synthetic data for training ML models. Consequently, future studies should focus on improving data augmentation techniques to potentially obtain better ML models. Finally, the ML models do not seem to identify the variations between cases that have different shapes of the aortic arch. Additionally, exploring alternative architectures could also lead to further improvements.

FIGURE 1 An exemplary real CoA case with the shape represented by circumferential lines. The red labeled line marks the stenosis site with the lowest diameter. Seven one-dimensional input parameters and eight centerline-aggregated output parameters, calculated by CFD and prepared for the ANN training, are shown. The yellow curve in the output WSS plot represents the original WSS values with very high variance, whereas the blue curve shows the 12-window averaged (smoothed) WSS data.

FIGURE 2 Schematic representation of the 1D BRNN architecture, including input and output data, with n representing the number of points describing the centerline.

Four ML experiments with different training data and ANN architectures were performed to analyze their impact on ANN performance: 1. 1D BRNN performance analysis. 2. Impact of a synthetic cohort on ANN training: real vs. synthetic. 3. Impact of an aortic arch shape on ANN training: gothic vs. non-gothic vs. mixed. 4. Impact of architecture on ANN training: 1D BRNN vs. 3D CNN.

FIGURE 3 Schematic representation of the 3D CNN architecture, including input and output data.

FIGURE 6 An example of the learning curves for one 1D BRNN training experiment out of the 10 conducted for each scaling method applied to both input and output features. The red curve represents the training loss, whereas the blue one represents the validation loss.

FIGURE 7 Bland-Altman plots comparing the accuracy of PD (1st row), WSSmax (2nd row), and Vmax (3rd row) 1D BRNN predictions on test cases for the following experiments: trained only on real cases (left) vs. trained only on synthetic cases (right).

FIGURE 8 Bland-Altman plots comparing the accuracy of 1D BRNN PD predictions on 15 gothic and 15 non-gothic test cases for the following experiments: trained only on gothic cases (1st row) vs. only on non-gothic cases (2nd row) vs. on mixed cases (3rd row).

FIGURE 9 Bland-Altman plots comparing the accuracy of PD predictions on 18 test cases between the 1D BRNN (left) and the 3D CNN (right). The CNN's cross-sectional values were averaged to obtain 1D pressure values along the centerline.
FIGURE 10 Upper figure: The pressure curve for an exemplary test case with the highest pressure drop (PD) prediction error. Both the 3D CNN and the 1D BRNN overestimate the pressure recovery after the stenosis, thus affecting the PD prediction. The PD occurs right at the aorta's narrowing (dashed black line). The pressure course remains relatively steady at the aortic arch (black solid line). The shape of this specific case can be seen on the right side, with the red circle marking the stenosis site with the lowest diameter. Bottom figure: Boxplots comparing the error of the pressure curve predictions between the 1D BRNN and the 3D CNN by computing the point-wise RMSE for all 18 test cases.
Analysis of 1-Aroyl-3-[3-chloro-2-methylphenyl] Thiourea Hybrids as Potent Urease Inhibitors: Synthesis, Biochemical Evaluation and Computational Approach

Urease is an amidohydrolase enzyme that is responsible for fatal morbidities in the human body, such as catheter encrustation, encephalopathy, peptic ulcers, hepatic coma, kidney stone formation, and many others. In recent years, scientists have devoted considerable efforts to the quest for efficient urease inhibitors. In pharmaceutical chemistry, the thiourea skeleton plays a vital role. Thus, the present work focused on the development and discovery of novel urease inhibitors and reports the synthesis of a set of 1-aroyl-3-[3-chloro-2-methylphenyl] thiourea hybrids with aliphatic and aromatic side chains, 4a–j. The compounds were characterized by different analytical techniques including FT-IR, 1H-NMR, and 13C-NMR, and were evaluated for in vitro enzyme inhibitory activity against jack bean urease (JBU), where they were found to be potent urease inhibitors; the inhibitory activity (IC50) was found in the range of 0.0019 ± 0.0011 to 0.0532 ± 0.9951 μM as compared to the standard thiourea (IC50 = 4.7455 ± 0.0545 μM). Other studies included density functional theory (DFT), an antioxidant radical scavenging assay, physicochemical properties (ADMET properties), molecular docking and molecular dynamics simulations. All compounds were found to be more active than the standard, with compound 4i exhibiting the greatest JBU enzyme inhibition (IC50 value of 0.0019 ± 0.0011 µM). The kinetics of enzyme inhibition revealed that compound 4i exhibited non-competitive inhibition with a Ki value of 0.0003 µM. The correlation between the DFT experiments, with a modest HOMO-LUMO energy gap, and the biological data was optimal. These recently identified urease enzyme inhibitors may serve as a starting point for future research and development.

Introduction

Urease is a member of the amidohydrolases and phosphotriesterases superfamily that is produced by plants, fungi, algae, and bacteria and is responsible for converting urea to ammonia and carbon dioxide [1,2]. The structure of urease contains two nickel atoms at the active site, which are mandatory for its activity [3]. Urease has a notable historical background, as it was the first enzyme ever to be crystallized. Later, the role of the nickel atoms was investigated thoroughly, followed by detailed studies of the jack bean urease structure for a better understanding of its ureolytic activity. Jack bean (Canavalia ensiformis) urease contains six subunits, each of which is made up of 840 amino acids [4,5]. The prime role of urease is to hydrolyze urea, which is a key compound in normal body physiological functions [6,7]. Excess levels of urea and its metabolites within the body result in several disorders including hepatic coma, urolithiasis, hepatic encephalopathy, urinary catheter encrustation, gastric and peptic ulcers and pyelonephritis [8].
Excessive ammonia is also responsible for alkalinity of the stomach, which in turn increases the permeability of the gastric mucosa and tears down the gastrointestinal tract (GIT) epithelium. Helicobacter pylori (HP) is involved in gastric and peptic ulcers, which can ultimately lead to gastric cancer, because the lowering of stomach acidity facilitates bacterial growth [9][10][11]. Due to all these fatal complications, urease inhibition has been a major target in recent years. Many anti-urease molecules have been reported, including imidazoles [12][13][14] and benzohydroxamic acid derivatives [15] (Figure 1). Unfortunately, all these agents have adverse effects as well. Thus, there is a need to identify more effective anti-urease agents with low toxicity and high bioavailability.

Thioureas are a class of organic compounds that contain sulphur and have a structural resemblance to urea, i.e., the oxygen atom is replaced by a sulphur atom; they have shown excellent biological applications, especially anti-urease activity [15,16]. In addition, thiourea and its derivatives have also been found to exhibit various pharmacological activities such as anti-oxidant, anti-inflammatory, anti-hypertensive, anti-epileptic, anti-cancer and anti-bacterial activity for the treatment of various co-infections and fatal diseases including renal failure, sepsis and various cancer types [17][18][19]. Considering the significance of the thiourea moiety, this work was designed to synthesize 1-aroyl-3-[3-chloro-2-methylphenyl] thiourea hybrids [4a-j] as JBU inhibitors [20]. The in vitro enzyme inhibitory activity was determined, and structure-activity relationships were established by incorporation of aliphatic and aromatic side-chain moieties. Molecular docking experiments were used to confirm the inhibitors' binding conformations within the active pocket of the enzyme [21,22], and electrostatic potential surface maps derived from conventional DFT computations were employed to determine their relative strength [23]. On the basis of the results of the performed activities, this paper proposes a structural model for novel potential inhibitors.
Chemistry

A series, 4a-j, of novel aryl thiourea derivatives was synthesized by treating potassium thiocyanate with different acid chlorides in dry acetone under reflux for 30 min to form the respective isothiocyanate intermediates in the reaction mixture; after cooling the reaction mixture, 2-methyl-3-chloroaniline was added to afford the final products.

Spectroscopic Characterization

Spectroscopic analysis of all the newly synthesized thiourea derivatives was carried out. 1H and 13C NMR spectra were recorded in deuterated DMSO-d6. The structures of the products were supported by their 1H NMR and 13C NMR spectra (experimental details about the characterization data and the FTIR, 1H and 13C NMR spectra are depicted in Figures S11-S18 provided in the supplementary information). In the 1H NMR spectra, two N-H protons appearing as singlets at 12.161 ppm and 11.815 ppm were a clear indication of thiourea formation. Intramolecular hydrogen bonding shifted these signals to higher ppm values, supporting the thiocarbonyl core of the thiourea structure. The 8.037-7.290 ppm region corresponds to the aromatic rings. The most shielded signal in all the structures is that of the methyl group attached to the aromatic ring, which appeared in the region of 2-2.5 ppm.

In the 13C NMR spectra, strong signals at about 180 ppm and in the range of 175-160 ppm are a clear indication of the C=S and C=O groups. The signal for the C=S carbon is the more deshielded one, appearing at 181-180 ppm, while the signal for C=O appeared at 175-160 ppm. Signals between 120 and 140 ppm represent aromatic carbons, and the signal for the methyl carbon directly connected to the aromatic ring appeared in the region of 6.8-6.2 ppm. In FTIR, a broad band above 3200 cm−1 shows N-H stretching, broadened by the intramolecular hydrogen bond between the N-H and the carbonyl oxygen around 3000 cm−1. The C=O stretch resulted in an intense band in the region of 1700-1600 cm−1, and the signal in the region of 1050-1250 cm−1 was a clear indication of C=S for all the compounds.

Free Radical Scavenging

All of the produced 1-aroyl-3-[3-chloro-2-methylphenyl]thiourea compounds were assessed for DPPH free radical scavenging activity, as depicted in Figure 2.
The anti-oxidant activity results suggested that compounds 4d, 4e, 4f, 4g, 4h and 4i showed good activity in comparison with the standard, Vitamin C. However, the rest of the compounds did not display substantial radical scavenging activity even at the highest concentration (100 µg/mL).

In Vitro Urease Inhibitory Activity

In the present work, 1-aroyl-3-[3-chloro-2-methylphenyl]thiourea hybrids [4a-j] were synthesized with the aim of obtaining potent JBU inhibitors. Hydrophobic and hydrophilic groups were substituted on the phenyl ring of the novel analogues 4a-j in order to examine urease inhibition activity. The anti-urease activity results depicted the influence of different functional groups on enzyme activity. The role of nitro, chloro, methyl and ethyl substitution in urease inhibitory activity was evaluated. Thiourea, a well-known urease inhibitor, served as the reference compound. All of the synthesised compounds demonstrated good to exceptional urease inhibitory efficacy relative to the reference compound (thiourea). The IC50 values ranged from 0.0019 ± 0.0011 to 0.0532 ± 0.9951 µM and are far better than the standard (thiourea) with an IC50 of 4.7455 ± 0.0545 µM.

Structure Activity Relationship (SAR)

Briefly, when the SAR of compounds 4b and 4c was compared with the parent compound 4a, it was observed that the introduction of the methoxy group resulted in improved activity. Methoxy in position 4 (para position) has a better inhibitory effect than in positions 3, 5 (meta positions). In the para position the methoxy group acts as an electron-donating group through resonance, while in the meta position it behaves as an electron-withdrawing group through the inductive effect. Regarding the effect of the nitro group, the derivative 4e exhibited the maximum inhibitory activity in comparison to 4f and 4g. The SAR of derivative 4e showed that para substitution resulted in enhanced inhibitory potential compared to meta substitutions. It was also noted that introduction of one NO2 group in the meta position resulted in better activity [IC50 ± SEM = 0.0136 ± 0.0544 µM] than the derivative 4g with an inhibitory value of IC50 ± SEM = 0.0335 ± 0.0994 µM. The deactivating nature of the nitro group exceeds the activating nature of the methyl group, in that the SAR of derivative 4d indicated that methyl substitution in position 4 (para position) was not strong enough to improve the inhibition compared to the parent compound (4a).
Interesting behaviour was seen when a strongly electronegative functional group was introduced onto the benzamide ring. Further investigation suggested that introduction of the electronegative group in position 2 (ortho) resulted in improved inhibitory potential in comparison to 4h and 4j. Among all the studied compounds, the derivative 4i was found to be the most effective inhibitor. The derivative 4j, having ortho and para chloro substitutions, had less activity than 4i but more than 4h. When the SAR of 4h, 4i and 4j was compared, it was observed that substitution at the ortho position is more favourable than at the para position (4h) and even than di-substitution at the ortho and para positions (4j). From the SAR it can be concluded that ortho- and para-substituted derivatives show better inhibitory activities, as substitution at the ortho and para positions is generally more favourable. In the same manner, the halogens showed this behaviour: chlorine, being more electronegative, gives a more potent inhibitory effect when attached in the ortho position (4i) than in the para position (4h) or the ortho, para positions (4j). Based upon our results, 4i can be used as a structural model for the discovery and development of new urease inhibitors.

Kinetic Analysis

Kinetic investigations provided additional evidence for the inhibitory function, in which the potential of the potent derivative 4i to inhibit the enzyme was assessed at different substrate concentrations while the enzyme concentration remained constant. The kinetic tests gave a series of straight lines in the Lineweaver-Burk plot of 1/V versus 1/[S] in the presence of varying compound concentrations (Figure 3A). The results showed that the lines for compound 4i intersected in the second quadrant. Thus, the analysis revealed that Vmax declined with increasing doses of the inhibitor, whereas Km remained the same. This behaviour revealed that the mode of inhibition shown by compound 4i is non-competitive. A secondary plot of the slope against the inhibitor's concentration yielded the dissociation constant (Ki) of enzyme inhibition (Figure 3B).
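The non-competitive pattern described above can be quantified in a few lines of numpy: fit 1/V against 1/[S] at each inhibitor concentration (Lineweaver-Burk), then regress the Lineweaver-Burk slopes against [I]; for non-competitive inhibition the intercept-to-slope ratio of that secondary plot gives Ki (its x-intercept lies at −Ki). The rate data below are made up purely to illustrate the calculation and are not the measured values.

```python
import numpy as np

# Hypothetical substrate concentrations (mM) and initial rates (arbitrary units)
# at three inhibitor concentrations (µM); replace with measured values.
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
rates = {0.0:   np.array([3.3, 5.0, 6.7, 8.0, 8.9]),
         0.001: np.array([2.5, 3.8, 5.0, 6.0, 6.7]),
         0.002: np.array([2.0, 3.0, 4.0, 4.8, 5.3])}

slopes, inhibitor = [], []
for conc, v in rates.items():
    # Lineweaver-Burk: 1/v = (Km/Vmax) * (1/[S]) + 1/Vmax
    slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
    slopes.append(slope)
    inhibitor.append(conc)
    print(f"[I] = {conc} µM: Km/Vmax = {slope:.3f}, 1/Vmax = {intercept:.3f}")

# Secondary plot: slope vs [I]; for non-competitive inhibition
# slope([I]) = (Km/Vmax0) * (1 + [I]/Ki), so Ki = intercept / slope of this line.
m, b = np.polyfit(inhibitor, slopes, 1)
print("estimated Ki =", b / m, "µM")
```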
We selected the most effective compound, 4i, to evaluate its type of inhibition and its inhibition constant on JBU (Figure 3); Table 2 shows the kinetic results. Detailed structural analysis (Figure 4) revealed that in domain 4 the metal-binding residues (His545, His519, His409, His407 and Asp633) and the nickel atoms show direct interactions within the active pocket of JBU. VADAR analysis indicated that the target protein comprises helices (27%), β-sheets (31%) and coils (41%). Moreover, Ramachandran plots (Figure S1 in the Supplementary Materials) showed that 97.5% of residues lie in favoured regions, confirming the accuracy of the phi (φ) and psi (ψ) angles among the coordinates of JBU. Molecular Docking To determine which conformational pose of the synthesised ligands (4a-j) best fits JBU, docking studies were employed. In detail, the structure-activity relationship (SAR) analysis revealed two hydrogen bonds and a hydrophobic interaction in the 4i docking complex. The sulfur group formed a hydrogen bond with Arg439 with a bond length of 2.45 Å (Figure 6A,B). The amino group formed another hydrogen bond with Ala636, with bond distances of 3.25 and 4.37 Å. The relative binding energy and SAR analysis highlighted the significance of compound 4i, which may be considered an effective inhibitor targeting JBU. Other amino acid residues, i.e., Ala436, Gln635 and Asp494, were involved in van der Waals interactions.
The structure of the ligand is indicated in olive drab; yellow represents the two nickel atoms. The two interacting residues, Ala636 and Ala436, are highlighted in purple for hydrogen bonding and hydrophobic bonding, respectively. 2D depictions of the docking complexes of the synthesized compounds are shown in the Supplementary Information (Figures S2-S10). Molecular Dynamics Simulations The distinctive feature of CHARMM-GUI is its adaptability, giving users the choice of popular simulation programmes for their simulations. Using the NAMD-generated input files, each system was equilibrated and then run for a 100 ns production MD simulation. To estimate the stability of the system, the root mean square deviation (RMSD) of the backbone atoms with respect to the initial structure and the root mean square fluctuation (RMSF) of each residue were calculated. For these calculations, the production trajectories were aligned to the atoms of the corresponding protein, nucleic acid or carbohydrate segments. The RMSD tracks the protein's structural drift: if the simulation has equilibrated satisfactorily, RMSD fluctuations of around 1-3 Å are fairly acceptable [24], whereas changes greater than 3 Å indicate that the protein is undergoing a significant conformational change. If, at the conclusion of the simulation, the protein's RMSD is still rising or falling on average, the system has not yet reached equilibrium and the simulation may not have lasted long enough for a thorough analysis; this can also indicate that the ligand has drifted away from its initial binding location. The RMSD plot for the urease-4i complex (Figure 7) demonstrates that the complex reaches stability at 17 ns. Following that, RMSD fluctuations for the target protein stay within 2.0 Å for the rest of the simulation, which is quite acceptable, and once equilibrated, the protein-ligand complex RMSD oscillates within 2.5 Å.
These findings demonstrate that the ligand remained firmly bound in the receptor's binding site throughout the simulated time. Peaks in the RMSF indicate protein residues that fluctuate during the simulation; typically the tails (N- and C-termini) fluctuate more than other parts. Consistent with this, the MD trajectories show the highest peaks for residues in the N- and C-terminal zones (Figure 8). The low RMSF values of the binding-site residues indicate a stable ligand-protein interaction. The radius of gyration (rGyr) reflects the compactness and distribution of amino acid residues around the centre of mass; a high rGyr value indicates low stability and an uneven distribution of residues. In the current study, the radius of gyration remained consistent, ranging from 30 to 31 angstroms. The MD trajectory showed slight fluctuations at 60 and 80 ns but became stable again after a short period. Figure 9 shows the rGyr values for the urease-4i complex. The solvent accessible surface area (SASA) corresponds to the exposure of the protein's surface to the solvent; the higher the SASA, the lower the stability of the protein. In the current study, residue-based SASA values were retrieved from the simulated trajectories. The SASA values for amino acid residues of the target protein ranged from 200 to 350 Å². In particular, residue 780 showed a slightly higher SASA value (Figure 10).
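For readers wishing to reproduce these trajectory analyses, a minimal sketch using MDAnalysis is given below. The file names are placeholders, and the paper's own pipeline used NAMD/CHARMM-GUI inputs and VMD, so this is an equivalent post-processing route, not the authors' exact workflow.

```python
# Minimal sketch of the trajectory analyses described above (backbone RMSD,
# per-residue RMSF, radius of gyration) using MDAnalysis. File names are
# placeholders for the NAMD topology/trajectory outputs.
import MDAnalysis as mda
from MDAnalysis.analysis import align, rms

u = mda.Universe("urease_4i.psf", "urease_4i_100ns.dcd")  # hypothetical files

# Backbone RMSD with respect to the first frame
rmsd = rms.RMSD(u, u, select="protein and backbone").run()
# rmsd.results.rmsd columns: frame index, time (ps), RMSD (Angstrom)

# Align the trajectory, then compute per-residue (C-alpha) RMSF
align.AlignTraj(u, u, select="protein and backbone", in_memory=True).run()
calphas = u.select_atoms("protein and name CA")
rmsf = rms.RMSF(calphas).run()  # rmsf.results.rmsf: one value per C-alpha

# Radius of gyration per frame
protein = u.select_atoms("protein")
rgyr = [protein.radius_of_gyration() for ts in u.trajectory]
```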
Density Functional Theory (DFT) The effect of these derivatives on the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) reflects the biological activity, chemical reactivity and stability of a molecule: a molecule with a small frontier-orbital gap has high chemical reactivity and low kinetic stability. According to Table 3, compound 4c showed the lowest energy gap (∆E = 0.058 eV) and is therefore the softest molecule, while compound 4g showed the highest energy gap (∆E = 0.114 eV) and is predicted to be the most kinetically stable of all the compounds. As the HOMO is the electron donor and the LUMO the electron acceptor, the compound with the highest HOMO energy, 4c (EHOMO = −0.097 eV), is expected to be the best electron donor, and the compound with the lowest LUMO energy, again 4c (ELUMO = −0.039 eV), the best electron acceptor. Since a compound with a small orbital energy gap is more polarizable, compound 4c was identified as the most polarizable. The geometry optimizations and HOMO-LUMO plots are given in Figures 11 and 12, and the values are given in Table 3. ADMET Properties Ten derivatives of thiourea were analyzed for their ADMET properties. All of the compounds met the criteria of Lipinski's Rule of Five (Table 4). All molecules showed an optimal volume of distribution range. Furthermore, almost all derivatives had superior Caco-2 permeability to ordinary thiourea, with the exception of compound 3, which had a negative Caco-2 permeability value. The computed human intestinal absorption of all substances showed a high probability of being HIA+, comparable to that of conventional thiourea.
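A minimal sketch of the Rule-of-Five screen summarised in Table 4, using RDKit, is shown below. The SMILES string is an assumed structure for the parent benzoyl analogue (4a), not taken from the paper, so the printed values are illustrative only.

```python
# Minimal Rule-of-Five screen with RDKit. The SMILES is an assumed structure
# for 1-benzoyl-3-(3-chloro-2-methylphenyl)thiourea (parent 4a), not from the
# paper; treat the printed values as illustrative.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

smiles = "O=C(NC(=S)Nc1cccc(Cl)c1C)c1ccccc1"  # assumed structure for 4a
mol = Chem.MolFromSmiles(smiles)

mw = Descriptors.MolWt(mol)        # Rule: <= 500 Da
logp = Descriptors.MolLogP(mol)    # Rule: <= 5
hbd = Lipinski.NumHDonors(mol)     # Rule: <= 5
hba = Lipinski.NumHAcceptors(mol)  # Rule: <= 10

passes = mw <= 500 and logp <= 5 and hbd <= 5 and hba <= 10
print(f"MW={mw:.1f}, logP={logp:.2f}, HBD={hbd}, HBA={hba}, Ro5 pass={passes}")
```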
The greater the HIA value, the better the compound's intestinal absorption. A chemical with a positive blood-brain barrier value has a better lipophilicity profile and may readily cross plasma membranes; the computed blood-brain barrier values of all compounds were close to that of standard thiourea, and all compounds were BBB+. For the PGP substrate, as with standard thiourea, all compounds had a high likelihood of being a PGP substrate. Furthermore, ordinary thiourea is a powerful inhibitor of P-glycoproteins, and all derivatives were likewise found to be effective P-glycoprotein inhibitors. In general, all compounds had a superior ADMET profile to the standard; all values are listed in Table 5. Materials and Methods Prior to use, standard methods were followed for the drying and distillation of solvents. Sigma-Aldrich provided all the reagents. A Stuart SMP3 melting point apparatus was used to determine melting points. NMR spectra were recorded on a Bruker 300 (1H NMR at 300 MHz; 13C NMR at 75.5 MHz), and chemical shifts are stated in ppm against tetramethylsilane as internal reference standard or the residual solvent resonance [25]. Thin layer chromatography (TLC) on aluminium sheets coated with silica gel F254 (Merck) was used to monitor the reactions, and ultraviolet light at 254 and 360 nm was used to detect spots on the TLC plates. General Procedure All the chemicals, such as the aromatic acids, the aniline [2-methyl-3-chloroaniline] and thionyl chloride, were bought from Aldrich. Analytical-grade acetone [E. Merck] was dried and freshly distilled. One mmol of each aromatic acid was placed in a 100 mL two-neck round-bottom flask fitted with a reflux condenser and a gas trap; 1.2 mmol thionyl chloride and a few drops of dry DMF were added, and the reaction mixture was refluxed for 3 h to obtain the acid chloride. In a 250 mL two-neck round-bottom flask fitted with a reflux condenser, a solution of potassium thiocyanate (1 mmol) in 20 mL dry acetone was stirred; the freshly prepared aromatic acid chloride (1 mmol) was added dropwise and the mixture was refluxed for 30 min, followed by the dropwise addition of a solution of 2-methyl-3-chloroaniline (1 mmol) in 20 mL dry acetone. The reaction mixture was then refluxed and stirred for 1-2 h, with TLC [thin layer chromatography] used to monitor the progress of the reaction. After the reaction was complete, the mixture was poured onto crushed ice. The solid thiourea precipitate that appeared immediately was filtered off, washed well with cold distilled water, dried, and recrystallized from ethanol to give the thiourea derivatives 4a-j (Scheme 1). Free Radical Scavenging Assay The assay followed the same methodology as reported in our previous research articles [25,26]. In Vitro Urease Inhibitory Activity The methodology used for the in vitro urease inhibitory activity is the same as reported in our earlier paper [27]. Kinetic Analysis Kinetics were performed for urease activity determination as stated previously in our research articles [25-27]. For this analysis, compound 4i, having the most potent IC50 value, was selected. Retrieval of Jack Bean Urease The crystal structure of JBU was downloaded from the Protein Data Bank (PDB) [28] under PDB ID 4H9M, selected on the basis of its high resolution.
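As a small convenience, the same structure can also be fetched programmatically from the RCSB download service (the authors obtained it manually from the PDB):

```python
# Minimal sketch of fetching the JBU structure (PDB ID 4H9M) from the RCSB PDB.
import urllib.request

url = "https://files.rcsb.org/download/4H9M.pdb"
urllib.request.urlretrieve(url, "4H9M.pdb")  # saves the structure locally
```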
UCSF Chimera 1.10.1 was utilized to inspect the protein structure of JBU [29]. The Ramachandran graph of the target protein was obtained with Discovery Studio Visualizer 4.1 [30], and the statistical percentages of helices, β-sheets, coils and turns in the receptor protein architecture were predicted with the online server VADAR 1.8 [31]. Preparation of Ligands and Molecular Docking All the synthesized ligands were drawn with the ACD/ChemSketch tool and energy-minimized with UCSF Chimera 1.10.1. All the ligands [4a-j] were docked against the crystal structure of JBU using the PyRx docking tool [32]. To sample binding conformations, a grid box with dimensions of 60, 60 and 60 grid points in X, Y and Z, respectively, and a spacing of 0.375 Å was used, with the default exhaustiveness value of 8. All docked complexes were evaluated on the basis of the lowest binding energy [kJ/mol] and structure-activity relationship [SAR] analysis. Discovery Studio Visualizer [4.1] was used for the 3D graphical representation of all the docked complexes. Molecular Dynamics Simulations To evaluate a set of optimal simulation protocols in NAMD [33], the entire complex (urease protein and ligand 4i) was first built using the CHARMM-GUI solution builder. The Monte Carlo method was used to compute equilibrium properties. The protein was solvated using the TIP3P water model, both alone and in complex, and NaCl counter-ions were added at a concentration of 0.15 M to neutralize charges. Initially, NVT equilibration was performed for one nanosecond at 300 K and constant volume [34]; secondly, NPT equilibration was conducted for one nanosecond at 300 K and constant pressure [35]. Ten thousand frames of each trajectory were captured while simulating both proteins and their complexes for 100 ns. Periodic boundary conditions (PBC) were set up automatically, and CHARMM36 was selected as the force field. VMD was utilized to analyze the results [36]. Density Functional Theory Density functional theory is a well-established technique for obtaining optimized structures and performing HOMO-LUMO analysis. The B3LYP functional was used for its accuracy and efficiency for vibrational spectra. All gas-phase calculations were performed with the DFT/B3LYP method and the STO-3G basis set using the Gaussian 09 program [37]; results were obtained by visualizing the output files with the GaussView 5 program [38]. ADMET Properties Predicting ADMET characteristics is a crucial step in the drug development process, and in silico ADMET evaluation models aid drug design and lead optimization. The online web server ADMETlab 2.0 (https://admetmesh.scbdd.com/, accessed on 7 July 2022) was used to calculate the physicochemical parameters and medicinal properties of the thiourea derivatives [39]. Conclusions The 1-aroyl-3-[3-chloro-2-methylphenyl]thiourea hybrids 4a-j were efficiently synthesized in high yield. The chemical structures of the synthesized compounds were characterized by spectral data (FTIR, 1H and 13C NMR). In vitro enzyme inhibition results showed that compound 4i exhibited exceptional urease inhibitory activity, with an IC50 value of 0.0019 ± 0.0011 µM, better than the standard thiourea (4.7455 ± 0.0545 µM).
The kinetic studies were also carried out, and all the synthesized compounds [4a-j] showed effective anti-urease activity. Moreover, in silico investigations, including molecular docking studies, density functional theory (DFT) calculations and ADMET profiling, further supported these findings. It is therefore suggested that 1-aroyl-3-[3-chloro-2-methylphenyl]thiourea derivatives may be possible "lead" candidates for therapeutic discovery and development.
The convergence epidemic volatility index (cEVI) as an alternative early warning tool for identifying waves in an epidemic This manuscript introduces the convergence Epidemic Volatility Index (cEVI), a modification of the recently introduced Epidemic Volatility Index (EVI), as an early warning tool for emerging epidemic waves. cEVI has a similar architectural structure to EVI, but with an optimization process inspired by a Geweke diagnostic-type test. Our approach triggers an early warning based on a comparison of the most recently available window of data samples and a window from the preceding time frame. Application of cEVI to data from the COVID-19 pandemic revealed steady performance in predicting early and intermediate epidemic waves and in retaining a warning during an epidemic wave. Furthermore, we present two basic combinations of EVI and cEVI: (1) their disjunction, cEVI+, which identifies waves earlier than the original index, and (2) their conjunction, cEVI-, which results in higher accuracy. Combining multiple warning systems could create a surveillance umbrella that results in early implementation of optimal outbreak interventions. Introduction Intervention strategies that control the spread of an epidemic are required to sustain health, social and economic stability in times of local or global health crisis. Such strategies can be supported by early warning tools/systems that provide timely indication and adoption of preventive measures. Early warning systems focus on one or more of the following aspects: (1) modeling of the epidemic's seasonality, (2) identification of the link between meteorological parameters and pathogens (Heffernan et al., 2004) and/or (3) spotting of spatial and temporal abnormalities in the expected number of cases (Vega et al., 2013; Yoneoka et al., 2021). A number of methods already exist to monitor and identify initial and intermediate waves of an epidemic, such as the moving epidemic method, which focuses on the start of the epidemic; growth models that provide predictions regarding the outbreak; methods based on machine learning algorithms that utilize associated parameters to indicate future epidemic waves (Chang et al., 2015); and the recently introduced epidemic volatility index (EVI), which is based on the already accumulated data and provides early warnings for both initial and intermediate epidemic waves (Kostoulas et al., 2021). The latter has been shown to provide accurate early warnings for COVID-19 epidemic data from a number of different countries, including each individual state in the United States (Kostoulas et al., 2021). However, during the initial wave, as well as in special cases where sudden intermediate waves were observed, the original EVI algorithm is sometimes slow to identify an outbreak, which can result in delayed warnings. EVI can be used in cases of novel emerging threats, like COVID-19, where the disease has not been well studied, thus providing an "early warning" relative to waiting for more traditional surveillance systems to adjust to the novel pathogen. Therefore, early warnings, as discussed in this manuscript, are not derived from predictive models (Alabdulrazzaq et al., 2021; Benvenuto et al., 2020; Proverbio et al., 2022).
After the disease status is established, such early warnings can be compared to "on-time detection" methods (Vega et al., 2013), while they differ from early warning signal (EWS) approaches that produce forecasts, e.g., by applying dynamic systems (Brett et al., 2017; Brett & Rohani, 2020; Southall et al., 2021). In this manuscript we introduce the convergence Epidemic Volatility Index (cEVI) as a stand-alone alternative and as complementary methods (cEVI+, cEVI-) to the stand-alone EVI. EVI is briefly introduced; then cEVI is presented and the differences from EVI are highlighted. An example application is given based on COVID-19 cases of four countries: France, India, South Africa and the United States. We demonstrate that cEVI is a valid choice either as an alternative or as a complementary index to EVI, especially when the identification of rapid changes in intermediate epidemic waves is of importance. The original epidemic volatility index (EVI) The original index (EVI) is based on the calculation of a rolling standard deviation for a series of data, for example the number of new cases per day. At each step of the algorithm, and for a specific rolling window, cases within a window are obtained by shifting the window forward and calculating a new standard deviation. EVI is then calculated as the relative change between two consecutive rolling windows of size m. A warning signal is issued if the relative change exceeds a threshold (c) between zero and one and, simultaneously, the observed cases at time (t) are higher than the previous week's average of reported cases. The accuracy of EVI is measured by the sensitivity (Se) and specificity (Sp) of the procedure (the probability of correctly issuing an early warning vs. the probability of correctly not signaling an early warning). These values are calculated in relation to a case definition, i.e., a percentage rise in the mean number of cases between two consecutive weeks. An inner optimization algorithm based on Youden's index (J = Se + Sp − 1) utilizes all sensitivities and specificities and then selects an optimal rolling window size (m') and threshold (c') for each time point (t) (Kostoulas et al., 2021). Based on the optimized combination of a window (m') and a threshold (c'), an early warning is issued for time (t). The convergence epidemic volatility index (cEVI) The convergence epidemic volatility index (cEVI) is based on the original EVI (Kostoulas et al., 2021), with an optimization algorithm inspired and led by a Geweke diagnostic-type statistic (Geweke, 1992), as shown in Fig. 1 and in the Appendix (B. Model description, step 1). cEVI is calculated from two consecutive windows of total size m, an early window (m/2) and a late window (m/2). From these two non-overlapping but consecutive windows, two averages and standard deviations are calculated; at each step of the algorithm these change as both windows are shifted forward one observation at a time. cEVI is then calculated as

$$\mathrm{cEVI}_t=\frac{\bar{y}_{(t-m/2+1):t}-\bar{y}_{(t-m+1):(t-m/2)}}{\sqrt{s^2_{(t-m/2+1):t}+s^2_{(t-m+1):(t-m/2)}}},$$

where $\bar{y}$ and $s^2$ denote the mean and the squared standard error of each window. An early warning is issued if the calculated value results in a statistically significant difference at a specific α level while the observed cases at time (t) are higher than the previous week's average of reported cases.
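To make the rule concrete, here is a minimal Python sketch of the cEVI warning decision (the package itself is written in R). The total window m and level α are fixed here; the actual method optimises both via Youden's J, and that search is omitted.

```python
# Minimal sketch of the cEVI warning rule: a Geweke-type comparison of an
# early and a late window ending at day t, plus the rising-cases condition.
import numpy as np
from scipy import stats

def cevi_warning(cases, t, m=14, alpha=0.05):
    """cases: 1-D numpy array of the 7-day-smoothed series; requires t >= m."""
    early = cases[t - m + 1 : t - m // 2 + 1]  # window y_(t-m+1):(t-m/2)
    late = cases[t - m // 2 + 1 : t + 1]       # window y_(t-m/2+1):t
    z = (late.mean() - early.mean()) / np.sqrt(
        late.var(ddof=1) / late.size + early.var(ddof=1) / early.size)
    significant = stats.norm.sf(z) < alpha       # one-sided test for a rise
    rising = cases[t] > cases[t - 7 : t].mean()  # above last week's average
    return significant and rising
```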
As in EVI, the accuracy of cEVI is measured by the sensitivity and specificity of the procedure. A similar criterion to EVI is applied to confirm true positive and true negative signals (i.e., a 20% rise in the mean number of cases between two consecutive weeks is considered), while again an optimization algorithm based on Youden's index (J = Se + Sp − 1) selects an optimal total window size (m') and, instead of an optimal threshold (c'), now optimizes the α level for accepting a false negative test result. Based on the optimized combination of a window size (m') and an α level (α'), a final early warning is issued for time (t). An overview of the procedure used to calculate cEVI is given in Fig. 1, and more specific details can be found in the online Appendix [B. Model description]. On combining multiple epidemic volatility indexes EVI and cEVI can be applied through the EVI package (Meletis et al., 2022) via the deviant function, and cEVI can be directly combined and/or compared with EVI via the graphical function evirlap. Conjunctions (cEVI-: both EVI and cEVI must produce a signal) and/or disjunctions (cEVI+: either EVI or cEVI produces a signal) of the stand-alone EVI and cEVI indices can be calculated and plotted. These functions have been incorporated in the GitHub repository of EVI (https://github.com/ku-awdc/EVI) and will be incorporated in the published EVI R package (Meletis et al., 2022, p. 7). All models and model comparisons were developed in R (R Core Team, 2022). Motivating examples Data for the four countries consist of the number of daily reported COVID-19 cases up to March 9th, 2023. For demonstration purposes, four countries on different continents were chosen with divergent wave characteristics, such as the number of epidemic waves, peak intensity, time to main waves, and/or length of waves. As in the original EVI publication (Kostoulas et al., 2021), we focus on the reported confirmed cases and do not correct for discrepancies in surveillance systems or other factors. Daily cases in South Africa comprise five waves, cases from France show two large late waves, cases in the United States show smaller intermediate waves along with a single large wave, and cases in India comprise two very intense waves (Figs. A1, A2, Figs. 2, 3). These data can be retrieved from the COVID-19 Data Repository maintained by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University (Dong et al., 2020). To minimize the non-biological variability of daily cases, a 7-day moving average was calculated, as for EVI, and provided to cEVI as input. Analysis of examples The ability of cEVI+, as a disjunction (i.e., a warning identified by EVI or cEVI), to identify new and intermediate waves early can be seen in all presented countries and throughout most of the countries' epidemic waves (Figs. 2, 3, Figs. A1, A2). Moreover, cEVI appears better than EVI at continuously issuing warnings during each wave, demonstrating a more sensitive ability to provide daily warnings (Figs. 2, 3, Figs. A1, A2); thus, it can identify waves early and retain a warning for an ongoing wave. We further provide an empirical comparison across all four indices and a benchmark, the Farrington algorithm (Farrington et al., 1996), which was widely applied before the pandemic.
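As a minimal illustration of the conjunction/disjunction logic described above, with purely illustrative boolean warning series rather than real package output:

```python
# Sketch of combining the daily warning series (boolean arrays, illustrative):
# cEVI+ warns when either index warns; cEVI- warns only when both agree.
import numpy as np

evi = np.array([0, 1, 1, 0, 1], dtype=bool)
cevi = np.array([0, 0, 1, 1, 1], dtype=bool)

cevi_plus = evi | cevi   # disjunction: earlier, more sensitive warnings
cevi_minus = evi & cevi  # conjunction: fewer, higher-confidence warnings
```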
In Table A1 (Appendix), interested readers can observe the relative behaviour of all five indices under three case definitions: an increase of 10%, 20% (original EVI) and 40% in cases between two consecutive weeks. To enable a fairer comparison we explored multiple α levels for the Farrington algorithm. Across all three case-definition scenarios and four countries, (a) the cEVI- index produced the highest accuracy most often; (b) cEVI+ produced the highest negative predictive values most often; and (c) both cEVI- and the Farrington algorithm produced the highest positive predictive values most often, with small discrepancies in only a few of those scenarios (Table A1). Discussion As a disjunction of EVI or cEVI, cEVI+ identifies and detects new and intermediate waves of the COVID-19 epidemic earlier than either EVI or cEVI (Figs. 2, 3). This characteristic was tested on data from other countries and the results remain consistent. The combined index provides indications of an upcoming epidemic wave earlier than the stand-alone indices (Figs. 2, 3). Absent a common indication across EVI and cEVI, the cEVI- index retained relatively high accuracy and positive predictive values (Table A1), and it can therefore be suggestive of whether an early warning should be issued at all.
Fig. 2. Combined early warnings for France and India on the logarithm of the moving average number of cases based on cEVI, EVI, cEVI+ and cEVI- for the first 700 days. cEVI- is plotted as the conjunction of EVI and cEVI. All colored dots in each panel construct cEVI+, the disjunction of EVI or cEVI. EVI alone and cEVI- construct the EVI warnings, while cEVI alone and cEVI- construct the cEVI warnings. EVI alone and cEVI alone are warnings produced only by EVI or only by cEVI, respectively. The gray dots correspond to no warnings. Primary interest lies in early identification of an epidemic within each country individually, not in comparing epidemics across countries that may have discrepancies in reporting and diagnostic testing. Alternative versions of Fig. 2, showing the total N days for each country, can be found in the Appendix (Figs. A1, A2).
cEVI has been constructed under the same framework as EVI, although it handles optimization through different parameters: (1) the size of each window (m) and (2) the type I error level (α) are selected at each time point to optimize Youden's index (Fig. 1). cEVI depends on more computational steps than EVI, and even though both indices rely on relatively simple computations, the inner optimization algorithm (dotted arrows in Fig. 1) combined with a long time series can make cEVI less computationally efficient; cEVI requires about 1.5 times more time than EVI to compute for a time series of one year of time points. A function to update already-run EVI and cEVI analyses with only the new cases exists in the EVI R package (deviant_update; Meletis et al., 2022). cEVI seems to identify secondary epidemic waves better and to retain a warning, although its performance worsens during the peak of waves, resulting in additional false positive results. Indeed, the positive and negative predictive values of the indices are impacted by the case definition, and thus so is their relative performance in identifying and retaining a warning, as observed in the empirical comparison of the EVI approaches to the Farrington algorithm (Table A1).
We should note that the original Farrington algorithm applied here has been further extended, and alternatives may perform better when data are available on local outbreaks (Yoneoka et al., 2021). For future work, a more extensive (simulation-based/empirical) comparison of EVI, cEVI, their combinations and additional approaches could be undertaken (Brett et al., 2017; Southall et al., 2021; Yoneoka et al., 2021). This would require an objective early-warning case definition via the construction of an empirical distribution for the case definition, i.e., the empirical distribution of the time difference between the first COVID-19 case for each country and the time of first lockdown. Furthermore, when defining the true COVID-19 cases per country in this study, we did not account for discrepancies in surveillance systems between countries, discrepancies in types of diagnostic testing within and across countries, or discrepancies in national factors. As our main goal is to evaluate the performance of cEVI in identifying epidemic waves in surveillance data, we do not expect such discrepancies to impact the relative performance of cEVI versus EVI.
Fig. 3. Combined early warnings for South Africa and the United States on the logarithm of the moving average number of cases based on cEVI, EVI, cEVI+ and cEVI- for the first 700 days. cEVI- is plotted as the conjunction of EVI and cEVI. All colored dots in each panel construct cEVI+, the disjunction of EVI or cEVI. EVI alone and cEVI- construct the EVI warnings, while cEVI alone and cEVI- construct the cEVI warnings. EVI alone and cEVI alone are warnings produced only by EVI or only by cEVI, respectively. The gray dots correspond to no warnings. Primary interest lies in early identification of an epidemic within each country individually, not in comparing epidemics across countries that may have discrepancies in reporting and diagnostic testing. Alternative versions of Fig. 3, showing the total N days for each country, can be found in the Appendix (Figs. A1, A2).
Radical changes in the reporting of surveillance data by national authorities that create an influx of cases may trigger an early warning from cEVI, but such a warning would readily be identified as a false positive within each country. The combined cEVI can be applied in the context of public health registry surveillance. Disjunctions (cEVI+) or conjunctions (cEVI-) of the stand-alone early warning indices can lead to further insights. cEVI+ can quickly identify temporal and spatial discrepancies, making it a valuable tool for policymakers, healthcare officials and any health-related organization interested in identifying abnormalities in an influx of health-related data, for instance medicinal prescriptions, drug usage and/or exam prescriptions (Andreu-Perez et al., 2015; Reformed, 2021). cEVI- could also provide an early-warning tool for such systems, with high accuracy rates and minimal data as input. Finally, it is also important to consider the issue of colored correlations in signals, and complex patterns and relationships within the time series data, in order to report a more informative underlying pattern. Thus, future early warning or on-time detection systems should also account for time-dependent covariates as factors of the so-called dark figure (and other surveillance biases) of infections (Lazer et al., 2014).
Conclusion Application of cEVI to real data from COVID-19 showed consistently good performance in predicting early and intermediate epidemic waves while retaining a warning during an epidemic wave. The combined index cEVI+ was shown to identify waves earlier than the original stand-alone EVI, while cEVI- demonstrated the ability to retain higher accuracy than the stand-alone EVI. Combining multiple alarm systems has the potential to identify waves more efficiently and result in early, and thus optimal, implementation of interventions to control outbreaks. Funding This work was funded by COST Action CA18208: HARMONY (Novel tools for test evaluation and disease prevalence estimation; https://harmony-net.eu/). Data and computing code The data and the computing code are available for replication on GitHub (https://github.com/ku-awdc/EVI) and will become formally available in the published R package EVI upon publication of the manuscript. Declaration of competing interest None.
Constraining the Si composition and thermal history of Earth's liquid core from ab initio calculations Earth's core has sustained a global magnetic field for much of the last 4 billion years which is, at present, sustained by the power associated with inner core growth. High thermal conductivity of the core suggests the solid inner core is young, and models of this predict that there is insufficient power from secular cooling to sustain a geodynamo prior to inner core formation. Precipitation of light elements dissolved in the liquid core offers an alternative power source for the magnetic field in the absence of inner core growth. We present the first ab initio calculations of the silicon partition coefficient at core-mantle boundary conditions and a thermodynamic partitioning model based on interaction parameters which captures previous experimental results. We report our model and its implications for the past and present core composition, as well as the effect of silicon precipitation on the early geodynamo. Oxygen competes with silicon in the liquid metal, meaning that for one to be abundant, the other must be sparse. We calculate precipitation rates of ∼10−4 to 10−6 wt% K−1 for oxygen concentrations of 0.6 to 3.1 wt%. Incorporating our partitioning model into a classic thermal evolution model of the core, coupled to a parameterised model of the solid mantle, we show that precipitation of Si can satisfy constraints on the present inner core size, the convective heat flux of the mantle and the mantle temperature, all whilst sustaining a magnetic field until inner core formation, but requires that the initial oxygen content of the core was < 3 wt%. We find that the inner core age is between 840 and 940 Myrs and that the ancient core was hot, with a core-mantle boundary temperature of ∼4700 K at 3.5 Ga. High core thermal conductivity implies the inner core to be far less than 1 Gyr old and requires the geodynamo to be powered by heat loss from the core alone. This rapidly cooling scenario means the mantle would have been subject to a super-solidus core-mantle boundary (CMB) temperature for much of Earth history (Nimmo, 2015b; Davies, 2015; Labrosse, 2015). Davies and Greenwood (2022) proposed that the presence of a basal magma ocean (BMO) may provide a resolution, although this approach relies upon the uncertain evolution of the BMO as well as requiring a conductivity at the lower limit of the recent high estimates. In search of an alternate explanation for the long-lived geodynamo, prior studies implement an interaction parameter model (Ma, 2001) with differing numbers of included elements and interactions.
Davies and Greenwood (2022) show that these differences change the onset time and power of precipitation. To evaluate the influence of Si precipitation on the geodynamo, we use ab initio molecular dynamics simulations of iron-rich liquids and silicate liquids to calculate equilibrium constants at CMB conditions. We compare our ab initio results to the results of a thermodynamic model fit to experimental partitioning data of Si between silicate and metallic liquids, using the interaction parameter formulation of Ma (2001). This thermodynamic model is then coupled to a core evolution model to describe Si solubility in the liquid core as it cools, and we evaluate the influence of precipitation on the thermal history of the core. We conduct density functional theory (Hohenberg and Kohn, 1964; Kohn and Sham, 1965) molecular dynamics simulations of silicate and iron-rich liquids to calculate the excess chemical potentials of individual chemical components. The chemical potential (µi) can be described as the free-energy change of a system when the quantity of a species is changed,

$$\mu_i = \left(\frac{\partial F}{\partial n_i}\right)_{V,T,n_{j\neq i}}.$$

Method 2 computes the change in free energy as a result of changing the number of atoms of a species; here µi depends on V, T and composition. Separating out the configurational portion of µi, which plays no role in partitioning, gives the non-configurational part $\bar{\mu}_i$, whilst $\bar{\mu}_{SiO_2}=\bar{\mu}_{Si}+2\bar{\mu}_O$ in the liquid, which when rearranged (for a dissociation reaction) equals the distribution coefficient, allowing us to validate our thermodynamic model. We focus on pressures and temperatures most relevant to the CMB (124 GPa and 4500-5500 K), as these are the most crucial for the evolution of the core, and also to avoid complications with changes in magnetic moment at shallower conditions. Simulations were run using the VASP code (Kresse and Furthmüller, 1996) in the canonical ensemble, using a Nosé thermostat (Nosé, 1984) and with the Brillouin zone sampled at the Γ point. A timestep of 1 fs was used and runs lasted between 10 and 100 ps. The plane-wave cutoff was set to 500 eV, and the projector augmented wave method (Kresse and Joubert, 1999) was used with the generalised gradient approximation functional PW91 (Perdew et al., 1992). The equilibrium constant (K) describes a reaction at equilibrium in terms of activities ai and reaction exponents αi; mole fractions xi and xj relate the distribution coefficient Kd to K. Kd is given, at constant volume (V) or constant pressure (P) respectively, in terms of the Helmholtz free energy of reaction (∆Fr) or the equivalent Gibbs free energy change (∆Gr); ∆Hr, ∆Sr and ∆Vr are the changes in enthalpy, entropy and volume with reaction, respectively, and kB is the Boltzmann constant. Eq. 14 is often written in the form

$$\ln K_d = a + \frac{b}{T} + \frac{cP}{T},$$

where a, b and c describe the entropy (S), enthalpy (H) and volume (V) changes of reaction, respectively. This naming convention is adopted over traditional thermodynamic notation because these quantities are not exclusively represented by a, b or c; entropy, for example, has a pressure and temperature dependence which is not captured by a, so in practice these effects are absorbed into b and c. What cannot be absorbed into these parameters is the compositional dependence of the reaction. Combining Eq. 7 and 15 gives the model, which is fit to Kd values calculated from previous experimental partitioning results via a least squares approach.
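A minimal sketch of this least-squares step is given below; the (T, P, ln Kd) values are synthetic placeholders, not the experimental compilation used in the paper.

```python
# Minimal sketch of the least-squares fit of ln Kd = a + b/T + c*P/T to
# partitioning data. All data values below are synthetic placeholders.
import numpy as np

T = np.array([2500.0, 3000.0, 3500.0, 4000.0])  # temperature, K
P = np.array([40.0, 60.0, 80.0, 100.0])         # pressure, GPa
lnKd = np.array([-2.1, -1.5, -1.0, -0.6])       # hypothetical observations

# Design matrix for the linear model ln Kd = a + b*(1/T) + c*(P/T)
A = np.column_stack([np.ones_like(T), 1.0 / T, P / T])
(a, b, c), *_ = np.linalg.lstsq(A, lnKd, rcond=None)
print(f"a = {a:.3f}, b = {b:.1f} K, c = {c:.3f} K GPa^-1")
```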
The best-fit parameters are used to calculate the equilibrium concentration of Si in the liquid metal for a dissociation reaction (which we later show to be favourable), using the fitted model with T0 = 1873 K; the activity coefficient of the solvent (γFe) is described within the same interaction parameter formalism. In Table 1 we report our ab initio results for the calculated excess chemical potential of SiO2 at 5500 K and 4500 K, both at 124 GPa. We show consistency with experimental Kd at comparable T and P in Fig. 1, including those below the solidus of SiO2 (e.g. Usui and Tsuchiya (2010)). We assume the core to be well mixed throughout for convenience, and to interact thermally with the overlying mantle; we assume that the core is mixed thoroughly on timescales far shorter than the timestep of our simulation, such that the liquid core has no compositional variation nor stable layers. To balance energies in the core, we follow Davies (2015) where, if small terms are ignored, the heat flow across the CMB (Qcmb) is

$$Q_{cmb} = Q_s + Q_L + Q_p,$$

where Qs is the secular heat stored in the core and QL is the latent heat released by inner core growth. Qp is the gravitational energy from mixing the dense, iron-rich residual liquids left after precipitation across the outer core,

$$Q_p = \int_{V_c} \rho\,\alpha^{ppt}\,\psi\,\frac{dC_{ppt}}{dt}\,dV,$$

where ρ is density, α^ppt is the compositional expansivity for the precipitating element, ψ is the gravitational potential, Cppt is the precipitation rate (see Fig. 5), t is time and Vc is the volume of the liquid core. Growth of the inner core has no effect on the Si concentration of the liquid core. Gubbins et al. (2004) show that the entropy budget of the core can be balanced as

$$E_j = E_s + E_L + E_g + E_{ppt} - E_k - E_\alpha,$$

where Eα is the entropy due to barodiffusion throughout the core, which is negligible (Gubbins et al., 2004; Davies, 2015) and so is ignored, Ek is the entropy from thermal conduction, and the other terms follow the same notation as their energy counterparts. If, when evaluated, these entropy sources produce a positive Ej, the geodynamo can be sustained. This presents the difficulty in a high-thermal-conductivity core: the entropy balance now has a far larger Ek, taking power from Ej, and so to sustain a magnetic field before inner core growth, one or more of the source terms must be increased. Because the time before inner core nucleation excludes the influence of EL and Eg, a more rapidly cooling core (Es) or precipitation (Eppt) is needed. We vary the upper-to-lower mantle viscosity ratio (fvisco) and the initial CMB temperature (T^{t=0}_{cmb}) to regulate the core temperature such that the final state of our models matches constraints on the present-day core. Models that cool sufficiently to produce an inner core of the present radius but exclude precipitation fail to consistently sustain a positive Ej. We find that including the energy and entropy effects of precipitation can maintain a positive Ej for the majority of Earth history preceding inner core formation, as also found by Hirose et al. (2017), producing an older inner core. We find that cases with high initial oxygen concentration require fvisco < 1 in order to grow the inner core to its present-day size, meaning the upper mantle is more viscous than the lower mantle. We do not expect this to be the case in reality (Rudolph et al., 2015), highlighting the requirement for modest O content if Si precipitation is to sustain a geodynamo whilst satisfying present-day constraints.
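For orientation, here is a toy bookkeeping of the two budgets just described; all magnitudes are illustrative, not model output.

```python
# Toy bookkeeping: Q_cmb = Q_s + Q_L + Q_p, and the dynamo criterion that the
# entropy sources leave E_j > 0 once conduction is subtracted. All numbers
# are illustrative magnitudes, not model output.
Q_s, Q_L, Q_p = 9.0, 0.0, 3.0   # TW; Q_L = 0 before inner core nucleation
Q_cmb = Q_s + Q_L + Q_p

E_s, E_L, E_g, E_ppt = 250.0, 0.0, 0.0, 150.0  # MW/K, illustrative
E_k = 300.0                                     # conduction entropy sink
E_j = E_s + E_L + E_g + E_ppt - E_k
print(f"Q_cmb = {Q_cmb:.1f} TW, E_j = {E_j:.0f} MW/K "
      f"({'dynamo sustained' if E_j > 0 else 'dynamo fails'})")
```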
Figure 6: Thermal evolution of the Earth's core with an initial condition of low O concentration (2 mol%) and Si saturation, with inclusion (teal) and exclusion (pink) of the power and entropy of precipitation (cases D and D_P from Table 2, respectively). Inner core radius (a); mid-mantle potential temperature (b, left, solid lines) and CMB temperature (b, right, dotted lines); convective mantle heat flux (c, left, solid lines) and CMB heat flow (c, right, dotted); and core entropy from ohmic dissipation (d). Black dashed lines show present-day target values.
We show the outcomes of the thermal history cases from Table 2; cases that fail to produce a magnetic field from ∼800 Myrs prior to inner core formation are marked with crossed symbols in Fig. 7.
Figure 7: Inner core age and core temperature at 3.5 Ga for our model (coloured symbols), with and without convective power from precipitation (connected by dashed lines). Initial Si saturation is shown in red and undersaturation in blue, whilst O-rich initial conditions are squares and O-poor conditions are circles. Numbers within symbols give the CMB heat flow at 3.5 Ga, and where models fail to maintain positive Ej prior to inner core nucleation, symbols are crossed out. Also shown are the models of Davies and Greenwood (2022) (who examine MgO precipitation), where up, right and down triangles have precipitation rates of 0, 0.3 and 1.5 × 10−5 K−1, respectively, and colours denote the core properties in terms of the density jump at the ICB (0.6 (light grey) and 1.0 g cm−3 (dark grey)), which represent bounding extremes of the density jump.
We find that the precipitation of Si allows the early core to cool more slowly than it would otherwise and supplies power to the geodynamo throughout this period. The rheological transition of the magma ocean should occur between 40% and 60% melt fraction (Abe, 1997; Solomatov, 2015), rather than at the intersection of the liquidus or solidus, which would correspond to complete freezing and first partial melt, respectively. The dataset used in this study is available to download from the supplementary materials.
Degradation of metaldehyde in water by nanoparticle catalysts and powdered activated carbon Metaldehyde, an organic pesticide widely used in the UK, has been detected in UK drinking water at low concentrations (<1 μg L−1) which are nonetheless above the European and UK standard requirements. This paper investigates the efficiency of four materials, powdered activated carbon (PAC) and carbon-doped titanium dioxide nanocatalysts with different concentrations of carbon (C-1.5, C-40 and C-80), for metaldehyde removal from aqueous solutions by adsorption and by oxidation via photocatalysis. PAC was found to be the most effective material, achieving over 90% removal. The adsorption data were well fitted by the Langmuir isotherm model, giving a qm (maximum/saturation adsorption capacity) of 32.258 mg g−1 and a KL (Langmuir constant) of 2.013 L mg−1. In terms of kinetics, adsorption of metaldehyde by PAC fitted well with a pseudo-second-order equation, giving an adsorption rate constant k2 of 0.023 g mg−1 min−1, implying rapid adsorption. The nanocatalysts were much less effective at oxidising metaldehyde than PAC at the same metaldehyde concentration and a 0.2 g L−1 material loading under UV light; the maximum removal achieved by the carbon-doped titanium dioxide (C-1.5) nanocatalyst was around 15% for a 7.5 ppm metaldehyde solution. Electronic supplementary material The online version of this article (doi:10.1007/s11356-017-9249-1) contains supplementary material, which is available to authorized users. Introduction Metaldehyde, as reported by the UK Environment Agency, is an organic compound used as a pesticide targeting slugs, snails and other molluscs, and is widely used in agriculture (UK Environment Agency 2009). There are growing concerns that relatively high levels of metaldehyde have been detected in surface water. In fact, trace amounts of metaldehyde have been found in treated drinking water in the UK at concentrations as high as 1 μg L−1, above the European and UK standard of 0.1 μg L−1 (Water UK 2013). The common treatments designed to remove pesticides from water, by adsorption onto granular activated carbon (GAC) or by other processes involving chlorine or ozone, have proven ineffective in removing metaldehyde (Water UK 2013). A number of studies have investigated new methods to remove metaldehyde from water, for example using ultraviolet (UV) irradiation to activate chemicals such as TiO2 or H2O2 to degrade the organic pollutant to CO2 and H2O (Autin et al. 2012), and a dual-stage method using a catalyst and an ion-exchange resin as adsorbent (Tao and Fletcher 2016). However, a more cost-effective method is still needed. One reason why metaldehyde is not effectively removed by GAC could be that the particle size and surface area of GAC are not suitable; an alternative approach would therefore be to use powdered activated carbon (PAC), which has smaller particle sizes than traditional GAC, thereby providing more pore and surface space for adsorption.
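As a numerical aside, the Langmuir and pseudo-second-order constants quoted in the abstract can be plugged into the standard model equations; in this sketch the equilibrium concentration Ce and the time t are assumed example values, not measurements.

```python
# Sketch evaluating the Langmuir isotherm and the integrated pseudo-second-
# order (PSO) kinetic model with the reported PAC constants
# (qm = 32.258 mg/g, KL = 2.013 L/mg, k2 = 0.023 g mg^-1 min^-1).
qm, KL = 32.258, 2.013  # Langmuir saturation capacity and constant
k2 = 0.023              # PSO rate constant

def langmuir_qe(Ce):
    """Equilibrium uptake qe (mg/g) at liquid-phase concentration Ce (mg/L)."""
    return qm * KL * Ce / (1 + KL * Ce)

def pso_qt(t, qe):
    """Uptake qt (mg/g) at time t (min) from the integrated PSO equation."""
    return (k2 * qe**2 * t) / (1 + k2 * qe * t)

qe = langmuir_qe(1.0)  # assumed Ce = 1 mg/L
print(f"qe = {qe:.2f} mg/g; q(30 min) = {pso_qt(30, qe):.2f} mg/g")
```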
Another approach with potential to remove organic pollutants is advanced oxidation processes (AOPs), which apply UV irradiation to catalysts such as TiO2 to produce ·OH radicals that attack organic molecules (Zhang et al. 2012; Doria et al. 2013; Kim et al. 2013; Ribeiro et al. 2015). TiO2, a widely studied photocatalyst, has shown potential for removing organic pollutants from water: for instance, Chung and Chen (2009) found that the azo dye Reactive Violet 5 was successfully removed by TiO2 photocatalysis, and Lin et al. (2011) studied the degradation of benzylparaben by UV/TiO2. This study investigated the effectiveness of three novel nanocatalysts, i.e., C-doped TiO2 with carbon contents of 1.5, 40 and 80%, under UV-C light for removing metaldehyde from aqueous solution. The photocatalytic activity of TiO2 for the degradation of dilute pollutants is known to be enhanced by the addition of small amounts of adsorbents, in particular activated carbon and zeolites (Agrios and Pichat 2005): the adsorbent-catalyst system interacts synergistically, leading to higher degradation performance for metaldehyde in water. The National Chemical Laboratory (NCL) had developed carbon from cheap agro-wastes and used it in TiO2 synthesis, which showed high performance for the degradation of certain dyes. The efficiency of these catalysts for metaldehyde degradation is compared with that of PAC in this work. The specific objectives of this study were (1) to determine the effect of initial metaldehyde concentration on degradation; (2) to establish the effectiveness of PAC and the novel catalysts for metaldehyde degradation; (3) to check the effect of UV-C light on metaldehyde degradation; and (4) to analyse the adsorption and kinetics of metaldehyde degradation. Specification of PAC and synthesis of nanocatalysts The commercial PAC (charcoal, decolorizing, powdered, activated) used in this work was Darco G60, manufactured by British Drug Houses (BDH) laboratory supplies. Its carbon source is charcoal, and it is certified for a maximum use level of 250 mg/L (National Sanitation Foundation 2015). In addition, the following C-doped TiO2 nanocatalysts were used: cetyltrimethylammonium bromide (CTAB)-modified carbon-doped titanium dioxide (C-doped TiO2) with 1.5% carbon and 98.5% TiO2 (C-1.5); CTAB-modified C-doped TiO2 with 40% carbon and 60% TiO2 (C-40); and CTAB-modified C-doped TiO2 with 80% carbon and 20% TiO2 (C-80). The C-doped TiO2 nanocatalysts were provided by the National Chemical Laboratory (NCL) in India. The C-doped TiO2 catalyst (C-1.5) was made using 7.372 g titanium butoxide, 33.818 g isopropanol, 7 mL H2O, 0.5 g urea, 0.03 g carbon made from sugar cane leaf agro-waste, and 5 g CTAB. The synthesis procedure was as follows: titanium butoxide and isopropanol were combined and stirred for 0.5 h, and CTAB was added to H2O and isopropanol and mixed well. Urea was then dissolved in this mixture and the carbon was added. This mixture was added to the butoxide solution and stirred for 24 h at room temperature, then dried at 80 °C for 5 h and finally calcined at 300 °C for 3 h. The C-40 and C-80 nanocatalysts were made by the same procedure, varying the amounts of carbon and titanium butoxide. The characterization of the PAC was carried out by the Micromeritics Instrument Corporation in Korea using an AutoPore IV 9500 V1.07.
The scanning electron microscope (SEM) images of PAC and C-doped TiO2 (C-1.5) were captured at an accelerating voltage of 20 kV. The characterization of C-doped TiO2 (C-1.5) was carried out by the National Chemical Laboratory. Other materials included metaldehyde, HPLC-grade methanol, and HPLC-grade dichloromethane (DCM). One gram of solid metaldehyde PESTANAL was purchased from Sigma-Aldrich. Preparation of metaldehyde standard solutions Millipore water was used throughout the experiments and sample preparation, because deionised water can have a high organic content that may react with the ·OH radicals produced by photocatalysis during the reaction and thereby affect the GC-MS analysis. The metaldehyde stock solution was prepared following the reference method from the UK Environment Agency (UK Environment Agency 2009). Solid metaldehyde (0.1 g) was added to 100 mL methanol to make a 1000 ppm metaldehyde stock solution. Metaldehyde stock solutions can be stored between 1 and 10 °C for up to 1 year. For each photocatalytic experiment, a different amount of stock solution was diluted with Millipore water to 1000 mL to prepare sample solutions of different metaldehyde concentrations. The studied range of metaldehyde concentrations was from 0.1 to 15 ppm. Analytical methods Metaldehyde was analysed by gas chromatography (Perkin Elmer Clarus 500) with mass spectrometry (GC-MS). All samples of metaldehyde solution taken from the photoreactor were filtered using a MILLEX 0.22-μm syringe-driven membrane filter unit (manufactured by Millipore Express) before passing through a pre-conditioned solid phase extraction column (SDB SPE disposable extraction columns, 3 mL, 200 mg, BAKERBOND spe). After extracting metaldehyde from the aqueous phase into the organic phase (DCM), the sample was transferred to the autosampler. All samples were prepared in triplicate, and each sample was injected three times by the autosampler to ensure repeatability and minimise instrumental error. Before injection of the samples, pure DCM was first injected to ensure samples were not contaminated from previous use of the GC-MS (Fig. S1). The detection of metaldehyde by GC-MS used the parameters in Table S1 in the supplementary materials. The solid phase extraction method is described in the Appendix, and the recovery rates of metaldehyde from the aqueous phase to the organic phase for each set of experiments are listed in Table S2 in the supplementary materials. The detection limit of metaldehyde by GC-MS was tested by preparing a range of metaldehyde solutions in DCM from 0.1 ppb to 10 ppm. From Fig. 1 (chromatograms showing the peaks of metaldehyde and DCM), the detection limit was estimated to be between 1 and 5 ppb. For metaldehyde concentrations of 5 ppb or higher, the metaldehyde peak at 7.37 min could be detected together with the DCM peak at 9.72 min, and for concentrations of 50 ppb or higher, the metaldehyde peaks were distinctive, with low DCM peaks present at 9.72 min. As the concentration of metaldehyde increased, the metaldehyde peaks became more distinctive while the DCM peaks became less distinguishable, especially at metaldehyde concentrations above 0.5 ppm. For metaldehyde concentrations below 50 ppb, there were a few peaks at 7.64, 7.84, 7.90, and 8.14 min worth noting. These peaks are likely related to the DCM solvent (Fig. S1): heated to 150 °C, the extremely volatile DCM can partially decompose and produce vapours of HCl, CO, and COCl2 (International Labour Organization 2012).
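As a simple illustration of the sample-preparation arithmetic described above, the short Python sketch below computes the volume of 1000 ppm stock needed for each target concentration (C1·V1 = C2·V2) and applies a recovery correction to a measured concentration. The recovery value used here is hypothetical; the actual per-experiment recoveries are listed in Table S2.

```python
# Sketch of the dilution and recovery arithmetic described above.
# The recovery value below is hypothetical; actual recoveries are in Table S2.

STOCK_PPM = 1000.0        # metaldehyde stock: 0.1 g in 100 mL methanol
FINAL_VOLUME_ML = 1000.0  # sample solutions are made up to 1 L with Millipore water

def stock_volume_ml(target_ppm: float) -> float:
    """Volume of stock (mL) to dilute to FINAL_VOLUME_ML: C1*V1 = C2*V2."""
    return target_ppm * FINAL_VOLUME_ML / STOCK_PPM

def recovery_corrected(measured_ppm: float, recovery: float) -> float:
    """Correct a GC-MS result for SPE extraction losses (recovery in (0, 1])."""
    return measured_ppm / recovery

if __name__ == "__main__":
    for target in (0.1, 0.5, 1, 5, 10, 15):
        print(f"{target:>5} ppm -> {stock_volume_ml(target):.1f} mL of stock")
    # e.g. a measured 4.1 ppm at an assumed 85% SPE recovery:
    print(f"corrected: {recovery_corrected(4.1, 0.85):.2f} ppm")
```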
Batch photoreactor system All experiments were performed in a batch reactor using a photoreactor with an ultraviolet (UV) lamp as the radiation source. The batch photoreactor system followed the one proposed by Kim et al. (2013). The photoreactor was a rectangular stainless-steel box with four valves installed at the bottom and top (Figs. S2 and S3 in the supplementary materials). The UV lamp was a UV-C medium-pressure mercury-vapour Philips lamp (11 W, 240 V, 254 nm wavelength, made in Holland). The light density of this lamp was 35 μmol m−2 s−1 (11.4 W m−2), measured by a light meter (Apogee, model MQ-100, serial number 1514, made in the USA). The UV-C lamp was inserted vertically and mounted from the top of the reactor, enabling it to make contact with the solution inside. The reactor was surrounded by a water cooling jacket to prevent the sample solution from being heated by the UV-C lamp during the reaction; hence, a constant temperature of around 25 °C (room temperature) was maintained. The reactor was connected to an air source from the air tap to ensure that the PAC or nanocatalysts were well mixed and evenly distributed in the solution from the bottom to the top of the reactor. The air supply was maintained at 1 cm3/min through an air flow meter manufactured by CT Platon. In addition, a magnetic stirrer was placed inside the reactor to stir the sample solution and ensure that the PAC or nanocatalysts were in contact with the solution. For all experiments, the volume of the metaldehyde solution was 500 mL, the loading concentration of PAC or catalyst was 0.2 g L−1, and the reaction time was 2 h. First, a set of experiments was carried out to compare the efficiency of all the materials in removing metaldehyde from a 5 ppm solution under different conditions: C-doped TiO2 (C-1.5) only, UV-C light only, C-doped TiO2 (C-1.5) with UV-C light, C-40 with UV-C light, C-80 with UV-C light, PAC with UV-C light, and PAC in the dark. After that, using the most effective material (PAC) under UV-C light, the adsorption isotherm was determined by using 0.1 g PAC to treat 500 mL metaldehyde solutions with concentrations ranging from 0.1 to 10 ppm. To compare with adsorption of metaldehyde by PAC, a set of experiments was carried out to analyse the photocatalysis of metaldehyde (concentration range from 0.1 to 12 ppm) by the C-doped TiO2 (C-1.5) nanocatalyst. Characterization of PAC and the nanocatalysts The PAC consists of clusters with an average size of 25 μm, and its porosity is 17.78%. From the SEM images, the surface of the clusters is flat, rough, and porous. Figure 3 shows the SEM images of C-doped TiO2 (C-1.5). As the titanium butoxide solution was added to a pseudo-homogeneous solution containing carbon particles (made from sugar cane leaf agro-waste) under vigorous stirring, it is possible that TiO2 formed with the carbon particles decorating it. However, the TEM image in Fig. 4 shows the presence of carbon on the TiO2 surface; therefore, the material is not considered to be TiO2-decorated carbon particles. The crystal size of the nanoparticles is around 10 nm. Compared to the PAC, the nanocatalyst particles are more rounded in shape. These particles have agglomerated into clusters, which appear to have a rough, porous surface.
The surface area of the C-doped TiO2 nanocatalyst is 115.06 m2 g−1, its total pore volume is 0.3349 cm3 g−1, and its average pore diameter is 105.8 Å. The characterizations of PAC and the C-doped TiO2 (C-1.5) nanocatalyst are presented in Table 1. The characterizations of the C-40 and C-80 nanocatalysts are at different stages and are therefore not shown here. Effect of UV-C light and increasing carbon content of nanocatalysts A set of experiments was carried out to determine the role of UV-C, PAC, and the C-doped TiO2 (C-1.5), C-40, and C-80 nanocatalysts in metaldehyde degradation under controlled conditions, including PAC/nanocatalysts only, UV-C light only, and PAC/nanocatalysts with UV-C. The prepared metaldehyde solution concentration for this experiment before all treatments was 5 ppm, and the nanocatalyst loading concentration was 0.2 g L−1. Figure 5 shows the concentration of metaldehyde after each treatment. All data have a relative standard deviation (RSD) of less than 6%, suggesting high precision from both the experimental procedure and the instrumental analysis. There was no significant metaldehyde degradation by the nanocatalysts alone, by UV-C light alone, or by the nanocatalysts with UV-C light (Table 2).

Table 2  Percentage removal of metaldehyde (5 ppm initial concentration) after each treatment
  Treatment                      Metaldehyde removal (%)
  C-doped TiO2 (C-1.5) only      0
  UV-C only                      1 ± 1
  C-doped TiO2 (C-1.5) + UV-C    2 ± 2
  C-40 + UV-C                    4 ± 4
  C-80 + UV-C*                   6 ± 4
  PAC + UV-C*                    81 ± 1
  PAC only*                      77 ± 1
  (* significant difference before vs. after treatment, p < 0.05)

A single-factor ANOVA test was performed to determine whether there was a significant difference (p < 0.05) between metaldehyde concentrations before and after the different treatment methods. The treatments incorporating C-80 with UV-C, PAC with UV-C, and PAC only (marked with asterisks in Table 2) show significant differences in metaldehyde concentration before and after treatment (p < 0.05). Increasing the carbon content of the nanocatalysts from 1.5 to 40 and 80% increased the removal of metaldehyde only slightly, by 2 and 4 percentage points, respectively. On the other hand, PAC alone removed a substantial amount of metaldehyde (77%), which confirms that adsorption is one of the favourable removal mechanisms for metaldehyde. Busquets et al. (2014) had similar findings with a mesoporous phenolic carbon, which demonstrated effective removal of metaldehyde with an adsorption capacity of 76 mg g−1 for a 64 ppm metaldehyde solution. Interestingly, removal of metaldehyde by PAC under UV-C light was slightly more effective (by 4 percentage points) than by PAC alone under dark conditions. This could indicate that metaldehyde is more effectively removed by a combination of adsorption and photolysis, which could be a promising technique in water and wastewater treatment. Similarly, other studies have implied that the combination of UV light and GAC can increase degradation efficiency by more than 50% with respect to the removal of total solids, total volatile solids, and the biochemical oxygen demand of wastewater (Asha et al. 2015). Degradation of metaldehyde using PAC PAC (0.1 g) was used in this set of experiments with prepared initial metaldehyde concentrations of 0.1, 0.5, 1, 5, and 10 ppm under UV-C light for the 2-h treatment. Figure 6 compares the concentrations of metaldehyde before and after treatment by PAC (each bar represents nine data points: triplicate samples with three GC-MS injections each; error bars show standard errors). PAC effectively removed metaldehyde from the solution, especially at the lower initial concentrations of 0.1, 0.5, and 1 ppm (p < 0.05). Figure 7 shows that the removal of metaldehyde decreases as the initial concentration of metaldehyde increases: the removal was 88, 95, 95, 82, and 59% for 0.1, 0.5, 1, 5, and 10 ppm, respectively. The removal at 0.1 ppm was slightly lower than at 0.5 and 1 ppm. This is because, at low concentration, there is a considerable number of adsorption sites but only a small amount of adsorbate, and beyond a certain adsorbent-to-adsorbate ratio adsorption slows down (Nandi et al. 2008); the reaction is therefore slower, and more adsorption time would be needed for more effective removal of metaldehyde.
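For readers who want to reproduce the significance testing named above, the following is a minimal sketch of a single-factor ANOVA comparing concentrations before and after one treatment; the triplicate values are illustrative, not the study's raw data.

```python
# Minimal sketch of the single-factor ANOVA used to compare metaldehyde
# concentrations before and after a treatment; the triplicate values below
# are illustrative, not the study's measurements.
from scipy.stats import f_oneway

before = [5.02, 4.97, 5.05]          # ppm, prepared 5 ppm solution
after_pac_uv = [0.96, 0.93, 0.98]    # ppm, after 2 h of PAC + UV-C (~81% removal)

f_stat, p_value = f_oneway(before, after_pac_uv)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("significant difference before vs. after treatment")
```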
Degradation of metaldehyde using the C-doped TiO2 (C-1.5) nanocatalyst The C-doped TiO2 (C-1.5) nanocatalyst is considered ineffective for the degradation of metaldehyde in water, especially compared with PAC. Figure 8 shows the concentrations of metaldehyde in solution before and after the 2-h treatment by the C-doped TiO2 (C-1.5) nanocatalyst under UV-C light (254 nm). At higher initial metaldehyde concentrations, such as 5, 7.5, 10, and 12 ppm, the degradation was slightly more significant (2-9%) than at lower concentrations. Only for prepared metaldehyde concentrations of 5 ppm and above was there a significant difference (p < 0.05) before and after treatment, suggesting that there was no degradation of metaldehyde at prepared initial concentrations of 0.1, 0.5, 1, and 2.5 ppm. The degradation of metaldehyde at 7.5 and 10 ppm (Table 3) was the highest, with removals of 15 ± 5 and 13 ± 5%, respectively.

Table 3  Percentage degradation of metaldehyde using 0.1 g C-doped TiO2 (C-1.5) under UV-C light (254 nm)
  Prepared initial concentration (ppm)   Metaldehyde removal (%)
  5                                      10 ± 1
  7.5                                    15 ± 5
  10                                     13 ± 5
  12                                     7 ± 3

This can be explained as follows: (1) the photocatalysis reaction is slow at low contaminant concentrations and requires a longer reaction time to degrade contaminants effectively (Dionysios et al. 2016), so at lower concentrations (<5 ppm) a 2-h reaction time would not be enough to degrade metaldehyde effectively; and (2) at high concentrations, the active sites on the surface of the nanocatalyst are gradually filled by metaldehyde molecules, so the removal of metaldehyde would be lower. The degradation of metaldehyde by C-doped TiO2 was not significant compared to the results of Autin et al. (2012), who found complete degradation of 1 ppm of metaldehyde using 0.3 mM of TiO2 (0.024 g) with 600 mJ cm−2 of UV radiation. In our case, 0.1 g of C-doped TiO2 (C-1.5) could not degrade 1 ppm metaldehyde. This suggests that further investigation is needed into degradation using higher catalyst loadings (>0.1 g) or stronger UV radiation. Fitting adsorption isotherm models for PAC The adsorption isotherm for metaldehyde removal by PAC under UV-C light is shown in Fig. 9. The adsorption uptake at equilibrium (qe, the concentration of the solute metaldehyde on the surface of the adsorbent PAC) can be calculated from the initial solution concentration (C0) at t = 0, the solution concentration after 2 h of contact time (Ce, the final concentration of the solution at equilibrium), and the material (PAC) loading concentration (Csolid), as Eq. (1) demonstrates (Kumar et al. 2008): qe = (C0 − Ce)/Csolid. The plot of qe against Ce in Fig. 10 suggests that metaldehyde adsorption may obey two possible adsorption isotherm models: the Freundlich model and the Langmuir model. The Freundlich model can be represented by Eq. (2), qe = KF Ce^(1/n), which shows the empirical relationship between Ce and qe with two Freundlich constants, KF (indicating adsorption capacity) and 1/n (indicating adsorption intensity), that depend on the adsorbate and adsorbent (Kumar et al. 2008).
The Freundlich model is linearized as Eq. (3), ln qe = ln KF + (1/n) ln Ce, so that KF and 1/n can be found by linear regression (Fig. 10). Figure 10 shows that the data do not fit the Freundlich model well (R2 = 0.8735). The KF value obtained is 0.092 (mg g−1)/(mg L−1)^(1/n), and the 1/n value obtained is 0.64 (n = 1.563). Given the effective removal of metaldehyde by PAC, the KF value, as an indicator of adsorption capacity, should be larger than the value obtained from the Freundlich model; the KF value here is quite small. For comparison, in the study by Radjenovic and Medunic (2015), effective degradation gave a KF value of 1.074 (mg g−1)/(mg L−1)^(1/n). The exponent 1/n is an indicator of the distribution of site energies, and a high value of 1/n suggests high affinity between the adsorbate and the adsorbent; here 1/n is 0.64, suggesting that 64% of the active adsorption sites have equal energy levels. Although the value of 1/n falls within the range of beneficial adsorption (0.1 ≤ 1/n ≤ 1), the low 1/n value cannot explain the actual effective removal of metaldehyde (Radjenovic and Medunic 2015). Therefore, the Freundlich model is not considered suitable for fitting the data of metaldehyde removal by the PAC in this study. The Langmuir model describes the relationship between Ce and qe with two constants, KL (the Langmuir constant, in L mg−1) and qm (the maximum/saturation adsorption capacity, in mg g−1) (Radjenovic and Medunic 2015), as shown by Eq. (4): qe = qm KL Ce/(1 + KL Ce). The Langmuir model is linearized as Eq. (5), Ce/qe = 1/(KL qm) + Ce/qm, and plotted in Fig. 11 so that the intercept 1/(KL qm) and the slope 1/qm can be found (Radjenovic and Medunic 2015). Figure 11 shows that the data fit the Langmuir model very well (R2 = 0.9811). The qm value obtained is 32.258 mg g−1 and KL is 2.013 L mg−1. Effective degradation in the study by Radjenovic and Medunic (2015) gave a qm of 12.71 mg g−1 and a KL of 0.0211 L mg−1; both values obtained here are larger, indicating effective removal of metaldehyde by PAC in this experiment. It is worth noting that the qm value calculated in this work is much higher than the value of 12.8 mg g−1 obtained by Busquets et al. (2014) using an industrial GAC. This is likely associated with the higher specific surface area of the PAC used in our experiments (962 m2 g−1) compared with that used in the earlier study (500 m2 g−1). Moreover, the sorbent used here has a higher affinity for metaldehyde, as the initial slope of its isotherm is greater than that of the GAC. Therefore, the Langmuir model is considered the better model for representing metaldehyde removal by the PAC investigated.
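The isotherm fitting just described reduces to two linear regressions. The sketch below computes qe from Eq. (1) and fits the linearised Freundlich (Eq. 3) and Langmuir (Eq. 5) forms by least squares; the (C0, Ce) pairs are illustrative placeholders, not the study's measured data.

```python
# Sketch of the isotherm fitting described above: compute q_e from Eq. (1),
# then fit the linearised Freundlich (Eq. 3) and Langmuir (Eq. 5) forms.
# The (C0, Ce) pairs below are illustrative only.
import numpy as np

C_SOLID = 0.2  # g L-1, PAC loading concentration

c0 = np.array([0.1, 0.5, 1.0, 5.0, 10.0])      # ppm, initial concentrations
ce = np.array([0.012, 0.025, 0.05, 0.9, 4.1])  # ppm, after 2 h (hypothetical)
qe = (c0 - ce) / C_SOLID                        # Eq. (1), mg g-1

# Freundlich: ln(qe) = ln(KF) + (1/n) * ln(Ce)
slope_f, intercept_f = np.polyfit(np.log(ce), np.log(qe), 1)
kf, inv_n = np.exp(intercept_f), slope_f

# Langmuir: Ce/qe = 1/(KL*qm) + Ce/qm  ->  slope = 1/qm, intercept = 1/(KL*qm)
slope_l, intercept_l = np.polyfit(ce, ce / qe, 1)
qm, kl = 1.0 / slope_l, slope_l / intercept_l

print(f"Freundlich: KF = {kf:.3f}, 1/n = {inv_n:.3f}")
print(f"Langmuir:   qm = {qm:.2f} mg g-1, KL = {kl:.3f} L mg-1")
```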
Adsorption kinetic study for PAC A further set of experiments was performed using 5 ppm metaldehyde and 0.1 g PAC with a 2-h reaction time under UV-C light. Samples were taken at 0, 5, 10, 15, 20, 30, 40, 50, 60, 90, and 120 min. The result is presented in Fig. 12. At 5 min, the removal of metaldehyde had already reached 64%, indicating that the adsorption efficiency of PAC was at its highest at the very beginning of the reaction. Over the 2-h reaction time, the total removal of metaldehyde was 81%. The removal of metaldehyde plateaued over time, suggesting that the PAC gradually became saturated. The adsorption capacity (qt) of PAC at different times is presented in Fig. 13. PAC adsorbed metaldehyde almost as soon as the experiment started (qt = 18.01 mg g−1 at 5 min), with an adsorption capacity at equilibrium (qe) of 22.87 mg g−1 at 120 min, when the PAC was considered to be saturated. Compared with the experiment removing 5 ppm metaldehyde using PAC only (77% removal), the final qe value here is slightly higher. To study the adsorption rate and model the adsorption kinetic data, pseudo-first- and pseudo-second-order equations were used, as they are the most common kinetic models for adsorption. The pseudo-first-order model, according to Lagergren (1898), assumes that the adsorption rate is proportional to the difference between the amount of adsorbate adsorbed at equilibrium (qe) and at time t (qt), as shown by Eq. (6): dqt/dt = k1(qe − qt), where k1 is the pseudo-first-order kinetic rate constant. Taking the logarithm of each side, Eq. (6) can be linearized as Eq. (7): ln(qe − qt) = ln qe − k1 t. To fit the data to Eq. (7), ln(qe − qt) was plotted against time, which gives a slope of −k1 and an intercept of ln qe (Fig. 14). The R2 value is 0.7844, suggesting that the data are not well fitted by the pseudo-first-order model. The intercept is 0.8224, which represents ln qe and gives a theoretical qe value of about 2.3 mg g−1 from the pseudo-first-order model; this value does not match the qe value of 22.87 mg g−1 from the experiment. However, compared with the GAC in the study of Salvestrini et al. (2016), which gave a k1 value of 0.45 h−1 with an R2 value of 0.87, the adsorption rate constant of the PAC here, k1 = 0.0161 min−1 (0.966 h−1), is more than twice as high, suggesting that PAC is more efficient for the adsorption of metaldehyde. Since the data do not fit the pseudo-first-order model well, they were then fitted to the pseudo-second-order model. Equation (8) was given by Ho and McKay (1998) in differential form, dqt/dt = k2(qe − qt)^2, where k2 is the pseudo-second-order kinetic rate constant. It can be integrated to Eq. (9), 1/(qe − qt) = 1/qe + k2 t, which can be rearranged into Eq. (10): t/qt = 1/(k2 qe^2) + t/qe. To fit the data to Eq. (10), t/qt was plotted against time, from which qe and k2 can be calculated (Fig. 15). The R2 value is 0.9994, suggesting that the data are very well fitted by the pseudo-second-order model. The slope, 1/qe, is 0.0434, which gives a theoretical qe value of 23.04 mg g−1. This value is very close to the value obtained in the experiment, again confirming that the data are well fitted. From the intercept, k2 can be calculated to be 0.023 g mg−1 min−1. Compared to the k2 value of the GAC studied by Salvestrini et al. (2016), 4.8 × 10−6 g μg−1 h−1 (8 × 10−5 g mg−1 min−1), the PAC is approximately 288 times more efficient than the GAC.
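The pseudo-second-order fit is again a single linear regression of t/qt against t (Eq. 10), with slope 1/qe and intercept 1/(k2·qe^2). The sketch below shows this calculation; the (t, qt) series is illustrative, loosely following the shape of the reported 5 ppm PAC run, not the study's raw data.

```python
# Sketch of the pseudo-second-order fit described above: regress t/qt on t
# (Eq. 10); slope = 1/qe and intercept = 1/(k2*qe^2).
# The (t, qt) series below is illustrative, not the study's raw data.
import numpy as np

t = np.array([5, 10, 15, 20, 30, 40, 50, 60, 90, 120], dtype=float)  # min
qt = np.array([18.0, 19.5, 20.3, 20.9, 21.6, 22.0,
               22.3, 22.5, 22.8, 22.9])                               # mg g-1

slope, intercept = np.polyfit(t, t / qt, 1)
qe = 1.0 / slope               # mg g-1
k2 = slope**2 / intercept      # g mg-1 min-1, since intercept = 1/(k2*qe^2)

print(f"qe = {qe:.2f} mg g-1, k2 = {k2:.4f} g mg-1 min-1")
```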
Table 4 compares the key characterization and experimental data of the PAC obtained in this project for metaldehyde removal with those of other studies.

Table 4  Key characterization and experimental data for metaldehyde removal by different adsorbents
  Study                      Adsorbent                               Capacity (mg g−1)   Surface area (m2 g−1)   Rate constant
  This work                  PAC                                     32                  962                     k2 = 0.023 g mg−1 min−1
  Salvestrini et al. (2016)  GAC                                     320                 774                     k2 = 8 × 10−5 g mg−1 min−1
  Tao and Fletcher (2013)    GAC                                     71                  560                     k2 = 5.8 × 10−4 g mg−1 min−1
  Tao and Fletcher (2016)    Macronet (for metaldehyde)              200                 402                     k1 = 11.6 × 10−3 min−1
  Tao and Fletcher (2016)    Ion-exchange resin (for acetaldehyde)   441                 N/A                     k2 = 0.17 g mg−1 min−1

It indicates that metaldehyde adsorption is a complex mechanism, and that the effectiveness and efficiency of metaldehyde adsorption depend very much on the adsorbent. For example, Tao and Fletcher (2013) stated that the GAC they used had an adsorption capacity of 71 mg g−1, almost five times higher than the 15 mg g−1 capacity of the GAC used by Busquets et al. (2014), while the surface areas of the two GACs do not differ that much (560 and 500 m2 g−1, respectively). This suggests that adsorption capacity is not strictly related to surface area; more factors, such as pore size distribution, need to be taken into consideration. Moreover, the adsorption capacity of the PAC used in this project was 32 mg g−1, which is not as high as those of the GACs used by Tao and Fletcher (2013) and Salvestrini et al. (2016), but it is effective and much more efficient for metaldehyde removal, with a reaction rate 288 times faster than that of Salvestrini et al. (2016) and 40 times faster than that of Tao and Fletcher (2013). This shows that a high adsorption capacity does not necessarily mean a high adsorption rate, and that the adsorption rate is not related to the surface area either: the GAC used by Salvestrini et al. (2016) has a high surface area of 774 m2 g−1, but its adsorption rate is more than seven times slower than that of the GAC used by Tao and Fletcher (2013). Therefore, linking the characteristics of an adsorbent to its adsorption capacity and adsorption rate for metaldehyde removal would require further studies of various adsorbent characteristics, such as particle size, pore size distribution, surface analysis, and point of zero charge. Fitting adsorption isotherm models for the C-doped TiO2 (C-1.5) nanocatalyst The isotherm of metaldehyde removal by the C-doped TiO2 (C-1.5) nanocatalyst within 2 h of reaction under UV-C light is shown in Fig. 16. The adsorption capacity at equilibrium (qe), calculated from Eq. (1), has a maximum value of 6.16 mg g−1 (Kumar et al. 2008). The isotherm shows that as Ce increases, the adsorption capacity increases until it reaches its maximum, which corresponds to the highest removal of metaldehyde, at 7.5 ppm. The data were also fitted to the Freundlich and Langmuir models. The Freundlich model is plotted in Fig. 17, with the data points fitted by a linear trend line; the log-log plot gives an intercept of log KF and a slope of 1/n (Kumar et al. 2008). It can be seen from Fig. 17 that metaldehyde adsorption by C-doped TiO2 (C-1.5) obeys the Freundlich model (R2 = 0.9661). However, the KF value obtained is −0.431 L g−1 and the n value obtained is 0.915. KF, the indicator of adsorption capacity, is negative here, which indicates that the material is not suitable for adsorption (Radjenovic and Medunic 2015). This may be explained by the small amount of carbon present in the system. Conclusions Among the four studied materials, PAC and the C-doped TiO2 (C-1.5), C-40, and C-80 nanocatalysts, PAC was the most effective material for metaldehyde removal. Within the studied concentration range of 0.1, 0.5, 1, 5, and 10 ppm and a reaction time of 2 h, PAC at a
loading concentration of 0.2 g L−1 showed more significant removal of metaldehyde at low concentrations than at higher concentrations. Increasing the initial concentration of the metaldehyde solution did not result in more effective removal; the removal of metaldehyde by PAC decreased as the prepared initial concentration increased. Removal of metaldehyde by PAC adsorption fits the Langmuir isotherm model well, giving a qm value of 32.258 mg g−1 and a KL value of 2.013 L mg−1 and suggesting that adsorption is favourable for removing metaldehyde. The adsorption of metaldehyde by PAC also fits pseudo-second-order kinetics well, giving a k2 value of 0.023 g mg−1 min−1 and indicating that PAC can remove metaldehyde efficiently within a short period. Compared to PAC, the C-doped TiO2 (C-1.5, C-40, and C-80) nanocatalysts were not effective at removing metaldehyde from solution by photocatalysis within the studied concentration range, catalyst loading concentration, light intensity, and reaction time. The analysis of the effect of UV-C light and of the increasing carbon content of the nanocatalysts suggests that (1) UV-C light alone has no effect on the removal of metaldehyde, and (2) increasing the carbon content of the nanocatalysts only slightly promotes the removal of metaldehyde (by about 4%). PAC alone under dark conditions removed 77% of the metaldehyde, while under UV-C light it removed more than 81%. It is therefore considered that metaldehyde is mainly removed by adsorption onto the powdered activated carbon, although removal works somewhat more effectively under UV-C light. From this study, it is suggested that further parameters, such as UV-C light intensity, the pH of the metaldehyde solution, reaction time, and material loading concentration, should be varied and tested to find the optimum conditions for metaldehyde removal. 9. The vial was then removed from the SPE vacuum manifold, and the 3-mL DCM eluate was reduced to 1 mL by evaporation with nitrogen gas. 10. The 1 mL of DCM was then transferred to a suitable Perkin Elmer GC-MS vial with a glass micropipette, ready to be analysed. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
7,936.6
2017-06-14T00:00:00.000
[ "Engineering", "Chemistry" ]
3Y-TZP/Ta Biocermet as a Dental Material: An Analysis of the In Vitro Adherence of Streptococcus Oralis Biofilm and an In Vivo Pilot Study in Dogs The surface adhesion of bacterial cells and the in vivo biocompatibility of a new ceramic-metal composite made of zirconium dioxide and tantalum were evaluated. In an in vitro study using the crystal violet staining and colony counting methods, relatively similar adhesion of Streptococcus oralis to the 3Y-TZP/Ta biocermet (roughness Ra = 0.12 ± 0.04 µm) and to Ti-Al6-V4 titanium alloy (Ra = 0.04 ± 0.01 µm) was found. In addition, in an in vivo preliminary study focused on the histological analysis of a series of rods implanted in the jaws of beagle dogs for a six-month period, no fibrous tissue or inflammatory reaction was found at the interface between the implanted 3Y-TZP/Ta biocermets and the new bone. Thus, it can be concluded that the developed ceramic-metal biocomposite may be a promising new material for use in dentistry. Introduction The design of dental implant systems is the most active and dynamic area of research in the field of oral implantation. The evaluation of the quality of an implant mainly concentrates on three aspects: the materials, the surface properties, and the design. The main material for dental implants remains titanium and its alloys, which have high biocompatibility, mechanical strength, and corrosion resistance. On the other hand, ceramic materials have been used as an alternative to titanium dental materials, as they offer better esthetic qualities. Many dense ceramics have been tested as promising candidates for odontology, but only a few of them have reached human clinical application. Yttria-stabilized tetragonal zirconia (3Y-TZP) is one of them, owing to its high mechanical properties resulting from the tetragonal-to-monoclinic phase transformation effect [1,2]. Nevertheless, the presence of any minor defects that may occur during the manufacture or clinical handling of ceramic products can lead to unpredictable final fracture. In addition, 3Y-TZP may spontaneously transform to its stable monoclinic form under in vivo conditions, an effect known as low-temperature hydrothermal degradation or aging [3,4]. Therefore, the development of new materials that combine the heterogeneous positive properties of ceramics and metals (biocermets) is a relevant task [5,6]. As in any multiphase material, the most significant challenge in biocermets is to attain superior performance through the subtle management of the individual properties of the components [7,8]. In this work, zirconium oxide (3Y-TZP) and tantalum were chosen as the starting materials. Tantalum is one of the best metallic bioinert materials owing to the thin but very strong and chemically resistant tantalum pentoxide (Ta2O5) film that self-forms on its surface. Due to its high adhesion rates compared to traditional titanium or cobalt-chromium components, facilitating and accelerating the fusion of the implant with living tissue, tantalum implants show a low rejection rate and an absence of inflammatory reactions [9-17]. Recently, we developed a new zirconia ceramic matrix composite reinforced with 20 vol.% tantalum particles (3Y-TZP/Ta). This biocermet showed outstanding mechanical characteristics, such as good flexural strength, hardness, and fracture toughness [18-21]. The high values of its mechanical
properties have been achieved through crack bridging by the elastic-plastic deformation of the ductile metal particles, combined with the transformation toughening mechanism of the zirconia matrix. In addition to its excellent properties under monotonic loading, this material has also shown exceptional resistance to fatigue loading [22]. The 3Y-TZP/Ta composite also showed a higher wear resistance and a lower friction coefficient, both related to its high toughness and to the presence on the surface of an interfacial (self-lubricating) layer of plastically deformed metal grains [23,24]. Besides their exceptional mechanical and tribological properties, 3Y-TZP/Ta composites also exhibit high resistance to low-temperature degradation (LTD) due to a reduction in the number of oxygen vacancies in the zirconia matrix in the presence of a solid solution of Ta2O5 [22,23]. It should also be noted that manufacturing complex-shaped implants, instead of traditional ones, requires modern processing methods, which have a number of advantages, including lower processing costs, less waste, high accuracy, versatility, and a degree of automation [25,26]. Electrical discharge machining [27,28] is such a method; however, its application requires the material to have an electrical resistivity below 100-300 Ω·cm [29]. The developed 3Y-TZP/Ta biocermet possesses an electrical conductivity suitable for electrical discharge machining and is hence appropriate for producing complex-shaped parts to the required tolerance with reduced machining costs [30]. In addition, the developed material has shown in vitro biocompatibility and effective prevention of implant-associated infections (antibacterial properties) [31,32]. Notably, the biocompatible character of 3Y-TZP/Ta biocermets, alongside their mechanical and tribological properties, bridges the gap between functional and structural materials [33]. All of this makes them a potential material for future hard-tissue replacement. Bacterial colonization is another factor that affects the clinical performance of dental materials, besides their conventional material properties. When an implant is exposed in the oral cavity, it provides a unique surface that can interact with native host bacteria, leading to plaque formation. The adherence of oral microorganisms and the subsequent formation of pathogenic biofilms on the surface of dental implants cause infections of the peri-implant tissues and eventually implant failure. Therefore, preventing adherence has been considered an effective strategy for preventing infectious diseases. The oral cavity is characterized by a large diversity of microorganisms, which colonize dental implants as early as 30 min after insertion [34]. Most of them are commensals, such as Streptococcus oralis [35,36]. Primary colonizers change the surface not only by their physical presence but also by exhibiting a different "surface-attached" phenotype with a distinct metabolic activity and surface properties, thus changing their environment and creating new niches for other bacteria to colonize [37]. S. oralis serves as an anchor for intermediate and late pathogenic colonizers [38,39], which contributes to the formation of biofilm [40]. Dental plaque as a biofilm plays a crucial role in the etiology and progression of the most common infections affecting humans, such as dental caries and periodontal diseases [41,42], as well as in the likelihood of subsequent development of endocarditis [43].
S. oralis can also enhance the pathogenicity of bacteria [44] and the virulence of Candida albicans [45-48]. At the same time, it has been shown that S. oralis can counteract bacterial pathogens and hence facilitate homeostasis [49,50]. The purpose of this study was to evaluate the in vitro adhesion of S. oralis to the 3Y-TZP/Ta biocermet and compare it to that on Ti-Al6-V4 (90 wt.% titanium, 6 wt.% aluminum, 4 wt.% vanadium) titanium alloy, the gold standard for endo-osseous dental implant production, and to characterize the factors associated with biofilm formation on the surfaces of the samples. Besides the in vitro microbial adhesion characterization of the 3Y-TZP/Ta composites, their in vivo osteointegration performance and inflammatory response were evaluated with a series of rods implanted in the jaws of beagle dogs for a six-month period, using the zirconia matrix composition as a control material. Results and Discussion Figure 1 shows representative SEM micrographs of the polished 3Y-TZP/Ta and titanium alloy samples. The dark and grey phases in Figure 1A correspond to the 3Y-TZP and Ta grains, respectively. The tantalum particles were evenly distributed in the ceramic matrix, and no residual porosity was observed.
Figure 2 shows the 3D surface topographies of the 3Y-TZP/Ta biocermet and titanium disks. The 3Y-TZP/Ta biocermet disks had a much higher average Ra value (0.12 ± 0.04 µm) than the titanium disks (0.04 ± 0.01 µm). The specific surface area (roughness ratio) followed the same pattern: it was always greater than one, but it was higher for the 3Y-TZP/Ta biocermet disks (1.2 ± 0.2) than for the titanium disks (1.01 ± 0.01). These results indicate that the 3Y-TZP/Ta biocermet disks had a rougher surface and a higher surface area than the titanium alloy disks. Figure 3 shows the total biofilm mass for all the S. oralis strains, determined by crystal violet staining. No significant differences were observed between the absorbance values of the 3Y-TZP/Ta composite and titanium alloy disks. Figure 5 shows the number of viable suspended cells (log10 CFU/mL) of the S. oralis strains in THY-glucose after 24 h of incubation on the different material surfaces; no statistically significant difference was found in the number of cell colonies on the 3Y-TZP/Ta composite and titanium alloy disks.
Figure 6 shows the SEM images of the 3Y-TZP/Ta biocermet and titanium alloy disk surfaces after 24 h of incubation with CI-1. The images reveal that both materials were colonized by bacterial cells that formed dense biofilms on their surfaces. Some material surfaces have physico-chemical properties that affect how bacteria adhere to them [51]. Surface roughness is one of these properties [52,53]: bacteria tend to adhere and form biofilms more on rough surfaces than on ultra-smooth ones, because rough surfaces have a larger surface area and uneven pits that offer more sites for bacterial growth. An Ra value of 0.2 µm is usually considered the average roughness limit below which bacteria cannot adhere [54]. Previous studies have found that smoother surfaces accumulate less plaque. However, a recent study showed that titanium surfaces with hard coatings, such as zirconium nitride or titanium nitride, carried fewer bacterial colonies than polished titanium, even though they had the same roughness (similar Ra values) [55]. In our study, we used two methods (crystal violet staining and colony counting) to measure the microbial load of S. oralis. We found that the 3Y-TZP/Ta biocermet surface and the titanium alloy surface carried similar microbial loads, even though the biocermet surface had a higher Ra value and a larger specific surface area than the titanium alloy surface. We were not able to quantify the impact of surface roughness on bacterial colonization in our study; however, some studies suggest that surface pores can shield cells from shear stress and help them remain on the surface. Based on these results, we can infer that 3Y-TZP/Ta biocermet surfaces may show less biofilm formation and accumulation than titanium alloy surfaces with the same roughness value. This finding is not yet well understood. One possible explanation is that the biocermet surfaces have fewer oxygen defects and a more non-polar surface structure. Titanium surfaces are known to be very reactive [56,57]: they have a thin oxide layer (mainly titanium dioxide) that can adsorb both cations and anions, and this layer also binds biopolymers in saliva, creating a highly reactive surface [58]. On the other hand, zirconia implants have shown promising results in reducing biofilm formation and bacterial adhesion compared to titanium implants. A study using an anaerobic flow chamber model tested biofilm formation on zirconia or titanium
disks with either a three-species biofilm or human plaque samples [59]. The results showed that zirconia had a lower biofilm thickness and mass than titanium but a similar biofilm metabolism. This implies that zirconia implants may show less plaque accumulation and peri-implant inflammation than titanium implants. Another study measured bacterial adhesion on zirconia or titanium disks and found that the zirconia disks had lower bacterial counts than the titanium ones [60]; this finding was confirmed by other studies that also reported lower bacterial adhesion on zirconia surfaces than on pure titanium surfaces [61,62]. Furthermore, an animal model comparing zirconia and titanium implants in dogs with ligature-induced peri-implantitis showed that the zirconia implants had less crestal peri-implant bone loss and no implant failure, while the titanium implants had one implant loss due to peri-implantitis [63]. A recent systematic review with a meta-analysis also supported the advantage of zirconia over titanium in terms of oral biofilm parameters and surface roughness [64]. However, this effect may vary for ceramic-metal biocomposites and needs more research. Moreover, surface free energy differences are reduced by the adsorption of salivary proteins [65], and other oral bacteria may have different patterns of colony formation. Therefore, it is necessary to extend this research to other types of materials, like zirconia, in order to gain broader knowledge of all the materials used in dental applications. More experimental validation is necessary before applying the current findings to clinical situations.
The osteointegration performance and inflammatory response of the biocermet implanted in dogs' mandibles were also tested in an in vivo pilot study, with a series of zirconia rods implanted as a reference group. Radiographic examination showed that the 3Y-TZP/Ta rods were well integrated with the surrounding bone six months after surgery (Figure 7). Furthermore, no visual signs of gingival inflammation, such as redness and swelling, were detected during this period. The histological analysis of the zirconia (Figure 8A) and 3Y-TZP/Ta composite (Figure 8B) implanted rods showed good biocompatibility and osteointegration in the LB (lingual bone) and BB (buccal bone) regions after six months of implantation. The formation of new immature osteoid in direct contact with the implants, without any fibrous tissue interposition, was observed. No signs of inflammation, such as foreign-body giant cells or inflammatory cell infiltration, were detected at the implanted sites. It is important to point out that the amount of bone and the bone-to-implant contact area appeared to be higher for the biocermet than for the zirconia ceramic rod.
The results of the present in vivo study were largely consistent with previous investigations of the osteointegration of ZrO2-Nb biocermets implanted in the tibiae of New Zealand white rabbits [5]. Bartolome et al.
showed that biocermets have excellent biocompatibility due to the coexistence of ceramic and metal grains and to a composite microstructure that can act synergistically to enhance the osteointegration process. In any aqueous electrolyte, an oxide forms spontaneously on the metal. This surface oxide is hydroxylated and has an amphoteric, or bipolar, character; the chemisorption ability of such surfaces is well known [66]. Peptides and amino acids are ligands that bind to metal oxide surfaces by replacing the hydroxyl groups, and the terminal carboxyl and amino groups of amino acids and proteins bind to the surface hydroxyls. We reported in our previous study [22] that the zirconia and tantalum grains are in direct contact at the biocermet interfaces without any extra phases. A solid solution of tantalum oxide occurs, and the oxygen may be dissolved and randomly distributed in the metal. The ceramic-metal bonds create highly reactive polar oxide surfaces; thus, ZrO2-Ta interfaces have a high activity to form Ta-OH groups. In summary, this paper demonstrates that the ZrO2-Ta biocermet has a moderate level of bacterial adhesion on its surface, is fully compatible with biological tissues, and can bond with bone. The bone formation and plaque accumulation on this ceramic-metal composite seem to be influenced by its specific microstructure, but more research is needed to elucidate the exact mechanism. Materials Processing For the fabrication of the ceramic-metal mixture, yttria-stabilized tetragonal zirconia (3Y-TZP, 3 mol% Y2O3; TZ-3YE, Tosoh Corp., Tokyo, Japan) and tantalum (99.97% purity, Alfa Aesar, Karlsruhe, Germany) powders with average particle sizes of 0.26 and 44 µm, respectively, were used as the initial materials. The tantalum powder was first milled in an attritor and then wet-mixed with 80 vol.% of the ceramic powder. More detailed information on the starting materials and the powder mixing technique is presented in previous works [20]. After homogenization, the suspension was dried for 24 h at 75 °C and then passed through a sieve with a mesh size of 32 µm. The resulting mixture was consolidated by spark plasma sintering (SPS) at 1400 °C (heating rate 100 °C/min) and 80 MPa in a vacuum; the holding time at the maximum temperature was 3 min. After sintering, the furnace was cooled naturally to 150 °C, with argon additionally supplied to accelerate the process. The vacuum was then broken, the chamber door was opened, and the sintered specimens were extracted. The reference samples for the in vivo studies were produced from monolithic zirconia using the same sintering cycle. The obtained samples had a diameter of 20 mm and a thickness of 7 mm. Disks (diameter ~20 mm, thickness ~3 mm) were saw-cut from a Ti-Al6-V4 (Ti017950, 99.0% purity, Goodfellow, Huntingdon, England) titanium alloy rod. The titanium alloy and sintered 3Y-TZP/Ta disks were then polished with diamond suspensions from 9 to 1 µm and used in the in vitro studies. Sintered zirconia and 3Y-TZP/Ta disks were machined using a diamond tool to produce cylinders (diameter ~2 mm, height ~5 ± 1 mm) for the in vivo research.
Microstructure and Surface Characterization The microstructure of the polished 3Y-TZP/Ta and titanium alloy disk specimens was studied using a Nova NANOSEM 230 scanning electron microscope (SEM, FEI, Hillsboro, OR, USA). A Talysurf CLI 500 3D surface profilometer (Taylor Hobson, Leicester, UK) was used to measure the roughness ratio or specific surface area (the ratio between the actual surface area and the projected area, Sdr) and the average surface roughness (Ra) of the samples by scanning the surface with a stylus. Data are presented as mean values with the corresponding errors from ten independent measurements. The stylus arm had a 90° diamond tip with a nominal radius of 2 µm. The data sampling intervals in X and Y were 0.5 and 2.5 µm, respectively, and the Z-scale resolution was 32 nm. The profilometer generated a 3D map of the surface topography. After biofilm formation, the disks were rinsed with sterile saline to remove loosely attached cells, air-dried, and examined by SEM. Bacterial Strains and Culture Conditions In this study, a standard Streptococcus oralis strain (ATCC 35037) and two clinical strains (CI-1 and CI-2) isolated from the human mouth were used [67]. Todd-Hewitt broth supplemented with 5% yeast extract (Difco; BD Diagnostics, Sparks, MD, USA) and 50 mM glucose (Panreac, Barcelona, Spain) (THY-glucose) was used. Biofilm Formation Assays To assess biofilm formation, the S. oralis strains were grown overnight on Columbia sheep blood agar plates (Difco, BD Diagnostic Systems, Sparks, MD, USA) and cultured in THY-glucose medium with 5% CO2 at 37 °C. The bacterial density was adjusted to 0.5-1 × 10^8 CFU/mL using a UV-visible spectrophotometer (GBC, Model Cintra 101, Keysborough, Australia), and 100 µL of the bacterial suspension was added to 900 µL of THY-glucose medium (1:10 dilution) in 24-well plates. The plates were incubated for 24 h at 37 °C in a wet chamber with 5% CO2. Biofilm formation was quantified by two methods: (i) crystal violet staining with absorbance measurement and (ii) viable cell counting. To measure the total biofilm amount, we used the crystal violet assay. We washed the disks three times with sterile saline to remove non-adherent cells, added 300 µL of methanol to fix the biofilm, and waited 20 min. We then removed the supernatant and let the disks dry. Next, we stained the biofilm with 300 µL of 1% crystal violet solution (Química Clínica Aplicada, Tarragona, Spain) for 20 min at room temperature and washed away the excess dye with water. The biofilm amount was quantified by releasing the bound crystal violet with 200 µL of ethanol and measuring the absorbance at 570 nm with a spectrophotometer. The negative control value was subtracted from the absorbance to correct for background staining. Each experiment was repeated three times. The number of bacteria in the biofilm and in the supernatant was determined by viable counts. The supernatant (200 µL) was collected and plated to estimate the CFU/mL of planktonic bacteria. The disks were washed three times with sterile saline to remove non-adherent cells and transferred to tubes with 5 mL of sterile saline. To release the bacteria attached to the disk surface, the tubes were shaken vigorously for 2 min and sonicated twice for 10 s each (Microson XL ultrasonic cell disruptor, Misonix, Inc., Farmingdale, NY, USA). Then, 200 µL of the sonicated biofilm sample was mixed with 0.9% saline, serially diluted, and plated on Columbia sheep blood agar medium to count the viable bacteria in CFU/mm2. All experiments were conducted three times using one disk per composition. The lowest number of bacteria that could be detected was 2.5 × 10^2 CFU. Preliminary tests showed that the sonication did not kill the bacteria. Statistical Analysis All statistical analyses were performed by ANOVA with the Tukey test for multiple comparisons. A p-value of <0.01 was considered statistically significant.
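As a minimal sketch of the "ANOVA with Tukey test" analysis named above, the code below applies a pairwise Tukey HSD comparison to hypothetical log10(CFU) counts for the two materials and a control group; the numbers are illustrative, not the study's measurements, and statsmodels is used here as one common implementation of the Tukey test.

```python
# Minimal sketch of an ANOVA-style Tukey HSD comparison, as named in the
# Statistical Analysis section. The log10(CFU) values below are hypothetical.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

log_cfu = np.array([6.2, 6.3, 6.1,    # 3Y-TZP/Ta biocermet, three replicates
                    6.3, 6.4, 6.2,    # Ti-Al6-V4 titanium alloy
                    6.8, 6.9, 6.7])   # hypothetical positive control
groups = ["biocermet"] * 3 + ["titanium"] * 3 + ["control"] * 3

result = pairwise_tukeyhsd(log_cfu, groups, alpha=0.01)  # p < 0.01 threshold
print(result)  # prints a summary table with pairwise mean differences
```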
bacteria in CFU/mm². All experiments were conducted three times using one disk per composition. The detection limit was 2.5 × 10² CFU. Preliminary tests showed that sonication did not affect bacterial viability.

Statistical Analysis
All statistical analyses were performed by ANOVA with the Tukey test for multiple comparisons. A p-value of <0.01 was considered statistically significant.

In Vivo Studies
Five four-year-old Beagle dogs were used. The sample size was calculated taking into account ethical considerations and the sample sizes used in similar studies. A controlled clinical trial was conducted in accordance with the ethical principles of the ARRIVE guidelines, the UK Animals (Scientific Procedures) Act 1986 and its associated guidelines, and EU Directive 2010/63/EU for animal experiments. The study protocol was approved by the Ethics Committee for Animal Research and Welfare to be carried out at the Minimally Invasive Surgery Center in Cáceres (Spain). Veterinary assistance was provided throughout the study. General anesthesia was induced by intravenous injection of 10 mg/kg propofol (Propofol Hospira, Hospira Productos Farmacéuticos y Hospitalarios, Madrid, Spain). The dogs underwent endotracheal intubation with a No. 7 cuffed tube and were connected to a Leon Plus anesthesia machine (Heinen & Löwenstein, Bad Ems, Germany). Sevoflurane (Sevorane, Abbott Laboratories, Madrid, Spain) was used to maintain anesthesia. The dogs received ketorolac 1 mg/kg (Toradol 30 mg, Roche, Basel, Switzerland), tramadol 1.7 mg/kg (Grünenthal, Aachen, Germany), and buprenorphine 0.01 mg/kg (Buprex, Reckitt Benckiser Pharmaceuticals Limited, Berkshire, United Kingdom) for analgesia. The premolars and first molars of the lower jaw were extracted and the sites were allowed to heal for 3 months. Then, 10 cylinders per dog (3 of zirconia and 7 of biocermet) were inserted randomly into the gaps on both sides of the jaw. The dogs were fed a soft diet during this period. After 6 months, the animals were euthanized under premedication with intravenous dexmedetomidine (5 µg/kg), followed by an intravenous overdose of propofol (15 mg/kg) and potassium chloride (2 mEq/kg). Digital radiographs were obtained from all implant sections at the end of the experiment.
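Returning briefly to the in vitro viable-count protocol above: the conversion from colony counts to the reported CFU/mm² is simple arithmetic. The sketch below illustrates it; all numeric values are hypothetical examples (not measured data), and the disk area is taken as a single 20-mm face.

```python
# Illustrative calculation of adherent-biofilm density (CFU/mm^2) from
# serial-dilution plate counts, following the viable-count protocol above.
# All numeric values are hypothetical examples, not measured data.
import math

def cfu_per_mm2(colonies: int, dilution: float, plated_ml: float,
                resuspension_ml: float, disk_diameter_mm: float) -> float:
    """Convert a plate count to adherent bacteria per mm^2 of disk surface.

    colonies         -- colonies counted on the plate
    dilution         -- dilution factor of the plated sample (e.g. 1e-4)
    plated_ml        -- volume plated, in mL (0.2 mL = 200 uL here)
    resuspension_ml  -- saline volume the disk was sonicated in (5 mL)
    disk_diameter_mm -- disk diameter, in mm (area of one face is used)
    """
    cfu_per_ml = colonies / (dilution * plated_ml)    # CFU/mL in the tube
    total_cfu = cfu_per_ml * resuspension_ml          # CFU released from the disk
    area_mm2 = math.pi * (disk_diameter_mm / 2) ** 2  # one face of the disk
    return total_cfu / area_mm2

# Example: 85 colonies from a 10^-4 dilution, 200 uL plated,
# 5 mL resuspension volume, 20 mm disk diameter.
print(f"{cfu_per_mm2(85, 1e-4, 0.2, 5.0, 20.0):.2e} CFU/mm^2")
```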
Histological Preparation and Examination
The mandibular block containing each implant was removed from the mandible using an oscillating autopsy saw (Exakt, Kulzer, Germany) and stored in a 5% formaldehyde solution (pH 7). The position of the implants was confirmed by radiographic examination (Figure 9). The specimens were immediately fixed in 4% formaldehyde and 1% calcium solution and prepared for ground sections following the protocol of Donath and Breuner [68]. Each implant block was embedded in methyl methacrylate and stained with a mixture of Harris and Wheatley hematoxylin (Leica, Wetzlar, Germany). Two central buccolingual ground sections of about 25 µm were obtained from each implant. Histologic analysis was conducted using a light microscope (Optiphot, Nikon, Japan) with a digital camera (DP-12, Olympus, Japan).

Conclusions
The in vitro surface adhesion of bacterial cells and the in vivo biocompatibility of a new ceramic-metal composite (biocermet) made of zirconium dioxide and tantalum were investigated. The crystal violet staining and colony counting methods showed comparable numbers of S. oralis microorganisms despite the higher roughness value (Ra = 0.12 µm) and larger specific surface area of the 3Y-TZP/Ta composite compared to the surface of the Ti-6Al-4V titanium alloy control sample (Ra = 0.04 µm). Based on these in vitro results, lower plaque formation can be expected on the biocermet than on a Ti-6Al-4V surface with the same roughness values. In addition, preliminary in vivo research has shown that the surface of the 3Y-TZP/Ta ceramic-metal composite favors bone formation after 6 months in the mandible of Beagle dogs. No fibrous tissue or inflammatory reaction was observed at the interface between the 3Y-TZP/Ta implants and the new bone. This in vivo pilot study confirmed their biocompatibility and the effectiveness of osseointegration within the specified timeframe. This work revealed the clinical potential of 3Y-TZP/Ta ceramic-metal composites for dental applications.
Figure 3. Comparison of total biofilm mass for the studied S. oralis strains determined by crystal violet staining on 3Y-TZP/Ta and titanium alloy disks.

Figure 4 exhibits the adherence of the viable S. oralis strains (log10 CFU/mm²) to the 3Y-TZP/Ta biocermet and titanium alloy surfaces. The 3Y-TZP/Ta biocermet and Ti-6Al-4V disks had no major differences in terms of the colony counts.

Figure 5. Comparison of the viable planktonic bacteria (CFU/mL) for the studied S. oralis strains on the surfaces of the 3Y-TZP/Ta and titanium alloy disks.

Figure 9. Digital radiograph of a mandibular block with implanted cylinders.

Author Contributions: Conceptualization, A.S. and J.F.B.; methodology, A.S., J.F.B., D.S., L.A., R.L.-P., F.G., P.P., S.G. and N.W.S.P.; validation, J.F.B., S.G. and N.K.; investigation, A.S., J.F.B., D.S., L.A., R.L.-P. and F.G.; data curation, D.S., L.A. and F.G.; writing-original draft preparation, A.S. and J.F.B.; project administration, O.Y., S.G. and A.S. All authors have read and agreed to the published version of the manuscript.

Funding: Part of this work was supported by the Ministry of Science and Higher Education of the Russian Federation under project 056-00041-23-00.

Institutional Review Board Statement: The animal study protocol was approved by the Ethics Committee for Animal Research and Welfare of the Center for Minimally Invasive Surgery (Cáceres, Spain) with permit number 031/12. This controlled clinical trial was conducted in accordance with the ARRIVE guidelines, the UK Animals (Scientific Procedures) Act 1986, and EU Directive 2010/63/EU.
8,621.2
2024-02-01T00:00:00.000
[ "Medicine", "Materials Science" ]
Entanglement in the Quantum Phase Transition of the Half-Integer Spin One-Dimensional Heisenberg Model

We use the Bethe ansatz method to study the entanglement of spinons in the quantum phase transition of half-integer spin one-dimensional magnetic chains known as quantum wires. We calculate the entanglement in the limit of the number of particles N → ∞. We obtain an abrupt change in the entanglement near the quantum phase transition point of the anisotropy parameter, Δ = 1, from the gapped phase Δ > 1 to the gapless phase Δ < 1.

Introduction
The study of entanglement in quantum spin chains has been the subject of intense research recently. In this field of knowledge, the theory of quantum information and condensed matter theory intertwine. In particular, the study of the properties of entanglement in many-particle systems and the analysis of its behavior near quantum phase transitions deserve much attention [1] [2]. In this work, we deal with the entanglement of low-lying magnetic excitations in the spin-1/2 one-dimensional Heisenberg model (HM). It is well known that one-half spin chains are different from integer spin chains: integer spin chains present a gap in the spectrum, known as the Haldane gap [3] [4], whereas this gap is absent in half-integer spin Heisenberg chains according to the Lieb, Schultz and Mattis theorem [5]. Besides, the low-lying excitations are different for integer and half-integer spin chains. While in integer spin chains the excitations are magnons, in half-integer spin chains the excitations are spinons, which are particles without charge but with spin one-half. It is very important to understand the entanglement of these quasi-particles in the neighborhood of the quantum phase transition, which is well known to be dominated by strong quantum fluctuations.

One-half spin chains present a quantum phase transition as a function of the anisotropy parameter Δ. In the range −1 < Δ < 1 the system does not present a gap in the spectrum. When Δ > 1 or Δ < −1, a gap opens in the spectrum. It is important to know the influence of these quantum phase transitions on the entanglement.

The spin one-half Heisenberg model was solved exactly for the first time by Bethe in 1931; the solution became known as the Bethe ansatz [6]. The initial solution proposed by Bethe is known as the coordinate Bethe ansatz. However, the Bethe ansatz was modified over the years, and today one uses a modified version of the initial Bethe ansatz known as the algebraic Bethe ansatz.

The quantum spin-1/2 HM has been studied extensively in the literature using the Jordan-Wigner transformation and Abelian and non-Abelian bosonization. The thermodynamic properties of this model were studied by Klümper in Refs. [7] [8]. Dynamical properties such as spin and thermal transport have also been studied extensively [9]-[14].

For integer spin Heisenberg chains, the thermodynamic and dynamical properties, such as spin transport, have been much studied in the literature using different methods. The non-linear sigma model was used by Haldane [3] [4]; he verified that integer spin chains differ from half-integer spin chains by the opening of a gap in the spectrum. These systems have also been studied using spin wave approximations [15]-[18], Schwinger boson theory [19]-[23], and so on.
In general, entanglement is a property at the heart of quantum mechanics [24], which was first brought to attention by the intriguing questions posed by Einstein, Podolsky and Rosen [25]. Entanglement appears to involve some kind of instantaneous correlation, seemingly contrary to the relativistic principle that all interaction is possible only at a velocity less than that of light [26]. The entanglement in quantum critical phenomena in the one-dimensional spin-1/2 XX and XY models was studied by Vidal et al. [27], in the non-critical and critical regimes. They calculated the entropy for a block of L contiguous spins. The entanglement for the 1D spin-1/2 XY model in a transverse field was calculated for a lattice with N sites in Refs. [28]-[30].

The aim of this paper is to verify the influence of the quantum phase transition on the entanglement of the quantum one-half spin Heisenberg model. This work is organized in the following way. In Section 2, we discuss the properties of the model. In Section 3, we develop the analytical tools to calculate the entanglement of the system. In Section 4 we present the analytical results, and in the last section, Section 5, we present the conclusions and final remarks.

The Model
The model is defined by the following Hamiltonian (the anisotropic XXZ chain),

H = J Σ_{i=1}^{L} [ S_i^x S_{i+1}^x + S_i^y S_{i+1}^y + Δ S_i^z S_{i+1}^z ],    (1)

with periodic boundary conditions on a chain of length L. When Δ = +1 the system is the isotropic Heisenberg antiferromagnet (AFM). For Δ = −1, the system reduces to the isotropic Heisenberg ferromagnet (FM). The case Δ = 0 corresponds to the XY model. The anisotropy parameter is conveniently parameterized by Δ = cos γ, where 0 ≤ γ < π. We restrict ourselves to the critical regime −1 < Δ ≤ 1, where the system displays correlation functions decaying algebraically at zero temperature [9]. For Δ > 1, the easy-axis case, the system is Ising-like, which is the simplest quantum lattice system to exhibit a quantum phase transition [31]. The dependence of the ground state on Δ is quite complicated. However, it is possible to investigate the Δ → ∞ limit exactly [28].

Algebraic Bethe Ansatz
We search for a pseudo-vacuum state |Ω⟩ that is a simple eigenstate of the diagonal operator-valued entries A and D of the monodromy matrix. The lower-left entry C of the monodromy matrix applied to |Ω⟩ yields zero, while the upper-right entry B yields new non-vanishing states. Hence C and B play the roles of annihilation and creation operators. The reference state is given by the tensor product of local states, |Ω⟩ = ⊗_i |ω_i⟩, where the |ω_i⟩ are the local states. The monodromy matrix applied to |Ω⟩ yields an upper-triangular 2 × 2 matrix of operators. Therefore, |Ω⟩ is an eigenstate of the transfer matrix. We intend to use the operators B as creation operators for excitations, i.e., we demand that the new one-particle state B(v)|Ω⟩ be an eigenstate of the transfer matrix. The exchange algebra of B(v) with A(u) and D(u) can be obtained from the Yang-Baxter equation. For any N-particle state, we consider states of the form

|Ψ⟩ = B(v_1) B(v_2) ⋯ B(v_N) |Ω⟩,

where the numbers v_i are the Bethe ansatz roots and Λ is the corresponding eigenvalue. The Bethe ansatz equations are the basis of an efficient analytical and numerical treatment of the thermodynamics of the Heisenberg chain. There are, however, variants in the form of integral equations that are somewhat more convenient for the analysis in the case where the external magnetic field h is close to zero, h → 0. The alternative integral expression for the eigenvalue Λ is given by Eq. (9) below, in terms of auxiliary functions, with β = 1/T.
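For orientation, the Bethe ansatz equations for the XXZ chain in the critical regime, Δ = cos γ, can be written in a standard textbook form (the conventions, e.g. normalizations of the rapidities, may differ from those of Refs. [6]-[9] by rescalings):

$$\left[\frac{\sinh(\lambda_j + i\gamma/2)}{\sinh(\lambda_j - i\gamma/2)}\right]^{L} \;=\; \prod_{\substack{k=1\\ k\neq j}}^{N} \frac{\sinh(\lambda_j - \lambda_k + i\gamma)}{\sinh(\lambda_j - \lambda_k - i\gamma)}, \qquad j = 1, \dots, N,$$

where the λ_j are the Bethe roots parameterizing the N-particle state B(v_1)⋯B(v_N)|Ω⟩ and the energy of the state follows by summing the single-root contributions.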
The ground state energy e_0 is given in [7], and u(x) and ū(x) are complex-valued auxiliary functions with integration paths along the real axis. These functions are determined from a set of non-linear integral equations, in which the symbol * denotes the convolution product; Equation (11) can be simplified in the limit N → ∞.

Entanglement and Quantum Phase Transitions
A measure of the degree of entanglement of a quantum state is the von Neumann entanglement entropy. Consider a partition of a physical system Σ into two disjoint subsystems, which we label A and B, where Σ = A ∪ B and A ∩ B = ∅. The Hilbert space of states on Σ is H_Σ = H_A ⊗ H_B. Let |Ω⟩ be a pure quantum state of the system on A ∪ B; as such, it can be decomposed as [1] [2]

|Ω⟩ = Σ_{i,j} M_{i,j} |i⟩_A ⊗ |j⟩_B,

where {|i⟩_A} and {|j⟩_B} are states of H_A and H_B, respectively, and M_{i,j} are the matrix elements of an (in general) rectangular matrix M. Using the singular-value-decomposition theorem, we can write M = UDV†, where U is a unitary matrix and D is a diagonal matrix, with d_A and d_B being the dimensions of the Hilbert spaces H_A and H_B. If the state vector |Ω⟩ is normalized to unity, ⟨Ω|Ω⟩ = 1, then the squared moduli of the diagonal entries of D (the Schmidt coefficients) sum to one.

The model of Equation (1) has a unique ground state |Ω⟩. In the ground state, the entropy of the whole system vanishes, but the entropy of a sub-system can be positive. We treat the whole chain as a bipartite system A ⊗ B, where we denote a block of L neighbouring spins as sub-system A and the rest of the chain as sub-system B [29]. The density matrix of the pure state |Ω⟩ of the total system A ∪ B is ρ_{AB} = |Ω⟩⟨Ω|. We define the reduced density matrix for subsystem A as the partial trace of ρ_{AB} over the degrees of freedom in B,

ρ_A = Tr_B ρ_{AB},

and similarly for the reduced density matrix ρ_B. The von Neumann entanglement entropy S_A for subsystem A, when the total system is in the state |Ω⟩, is defined to be the entropy of the reduced density matrix,

S_A = −Tr(ρ_A ln ρ_A).

It also follows that S_A = S_B, i.e., the entanglement entropy is symmetric in the two (entangled) subsystems. This symmetry property is a consequence of our assumption that the total system A ∪ B is in a pure state |Ω⟩. In quantum field theory, the Gibbs density matrix of the system is ρ = e^{−H/T}/Z, where H is the quantum Hamiltonian and the partition function is Z = Tr e^{−H/T} [32] [33]. It is well known that in the critical regime the entropy diverges logarithmically with the size of a block of L spins [27] [33] [34]. As derived in Ref. [35], in 1+1 conformal field theory the entropy of a subregion of length L reads

S_L ≃ [(c + c̄)/6] ln L,

with a coefficient given by the holomorphic and anti-holomorphic central charges c and c̄ of the theory.

Results and Discussion
In the thermodynamic limit we have [7] the largest eigenvalue Λ_max of the quantum transfer matrix; the integral expression for Λ is given by Equation (9). The Helmholtz free energy is

f = −T ln Λ_max,

where Λ_max is the largest eigenvalue of the quantum transfer matrix [9]. The entropy is consequently given as S = −∂f/∂T. It must satisfy S → 0 as T → 0, as predicted by Nernst's law, and it grows for high T; at low temperature it is dominated by the quantum fluctuations near the quantum phase transition, where the correlation length ζ diverges, ζ → ∞. The von Neumann entropy provides a good quantifier for the entanglement in the thermodynamic limit, which is also equivalent to the entanglement of distinguished particles. We can define the entanglement E_p for a number N of particles as in [1] [2]. When N → ∞, the entropy of entanglement is simply the von Neumann entropy of the reduced density matrix of one particle and does not contain the factor log₂ N; it is obtained from ln Λ_max.

In the limit N → ∞, the first term of Equation (9) turns into a rather irrelevant contribution, as it is linear in λ_1 and λ_n, so that its second derivatives with respect to λ_1 and λ_n vanish. The functions u(x) and ū(x) are given by Equations (11) and (12), and the summation in Equation (11) can be simplified in the limit N → ∞.
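As a minimal numerical illustration of the definitions above (not part of the Bethe ansatz calculation itself), the following snippet computes S_A = −Tr(ρ_A ln ρ_A) for the two-spin singlet state by performing the partial trace explicitly:

```python
# Minimal numerical illustration: the von Neumann entropy S_A for the
# two-site singlet (|01> - |10>)/sqrt(2), via an explicit partial trace.
import numpy as np

# Singlet state in the basis {|00>, |01>, |10>, |11>}.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)

# Full density matrix rho = |psi><psi|, reshaped to indices (a, b, a', b').
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)

# Partial trace over subsystem B: rho_A[a, a'] = sum_b rho[a, b, a', b].
rho_A = np.einsum('abcb->ac', rho)

# Entropy from the eigenvalues of rho_A (with 0 ln 0 := 0).
evals = np.linalg.eigvalsh(rho_A)
S_A = -sum(p * np.log(p) for p in evals if p > 1e-12)

print(S_A, np.log(2))  # maximal entanglement: S_A = ln 2
```

For the singlet, ρ_A is the maximally mixed single-spin state I/2, so the script prints S_A = ln 2 ≈ 0.693, the maximal value for a single spin-1/2.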
The dependence on λ_1 and λ_n is given in [9]. From Equation (9) we finally obtain the entanglement as a function of the parameter γ in the thermodynamic limit. The value γ = π/2, for which sin γ = 1 (while sin 0 = sin π = 0), corresponds to the XY model. In general the integral equations (11) do not admit an analytic solution [9]; consequently we cannot evaluate the integral (35) directly. However, an iterative procedure can be conceived to solve Equation (11). Performing a saddle point integration, we can find an expression for the entanglement in the low-temperature limit in terms of ε(k), the energy dispersion of the lowest bound states; in this limit the model is critical, the correlation length diverges like ζ ~ 1/T, and the expression for the entanglement reduces to a simpler form involving π. In the high-temperature limit we have E_p ~ ln 2, consistent with the high-temperature entropy ln 2 expected for a model with two states per site. For the critical XXZ chain, the dispersion relation of the free states is [36] ε(k) = v |sin k|, where v = π sin γ/(2γ) is the spin wave velocity.

Conclusion
In summary, we have calculated the entanglement in a quantum wire described by the quantum spin-1/2 anisotropic one-dimensional Heisenberg antiferromagnet. We verified the influence of the quantum phase transitions at the points Δ = −1 and Δ = 1, which correspond to γ = π and γ = 0, on the entanglement. We used the Bethe ansatz method to calculate the entanglement E_p, since it is an exact method for one-dimensional spin-1/2 Heisenberg chains. Our calculations show that the entanglement is maximum at the point Δ = 1 and minimum at Δ = −1. Consequently, there is a large influence of the quantum critical region on the entanglement. The influence of the quantum phase transition obtained for this system is as large as that obtained in Reference [1] for the extended Hubbard model with a finite number of particles N.
3,217.2
2015-12-04T00:00:00.000
[ "Physics" ]
Mutational spectrum of the APC and MUTYH genes and genotype–phenotype correlations in Brazilian FAP, AFAP, and MAP patients

Background
Patients with multiple colorectal adenomas are currently screened for germline mutations in two genes, APC and MUTYH. APC-mutated patients present classic or attenuated familial adenomatous polyposis (FAP/AFAP), while patients carrying biallelic MUTYH mutations exhibit MUTYH-associated polyposis (MAP). The spectrum of mutations, as well as the genotype-phenotype correlations, in polyposis syndromes have clinical impact and can be population-specific, making it important to obtain genetic and clinical data from different populations.

Methods
DNA sequencing of the complete coding region of the APC and MUTYH genes was performed in 23 unrelated Brazilian polyposis patients. In addition, mutation-negative patients were screened for large genomic rearrangements by multiplex ligation-dependent probe amplification, array-comparative genomic hybridization, and duplex quantitative PCR. Biallelic MUTYH mutations were confirmed by allele-specific PCR. Clinical data of the index cases and their affected relatives were used to assess genotype–phenotype correlations.

Results
Pathogenic mutations were identified in 20 of the 23 probands (87%): 14 in the APC gene and six in the MUTYH gene; six of them (30%) were described for the first time in this series. Genotype-phenotype correlations revealed divergent results compared with those described in other studies, particularly regarding the extent of polyposis and the occurrence of desmoid tumors in families with mutations before codon 1444 (6/8 families with desmoids).

Conclusions
This first comprehensive investigation of the APC and MUTYH mutation spectrum in Brazilian polyposis patients showed a high detection rate and identified novel pathogenic mutations. Notably, a significant number of APC-positive families were not consistent with the predicted genotype-phenotype correlations from other populations.

Background
Patients with multiple colorectal adenomas are screened for germline mutations in two distinct genes, APC and MUTYH. According to the polyp number and age of onset, the phenotype of APC-mutated patients can be classified as classical familial adenomatous polyposis (FAP: more than 100 polyps, early onset) or attenuated FAP (AFAP: fewer than 100 polyps, with later onset) [1][2][3]. MUTYH biallelic mutation carriers usually present 10 to 100 polyps and are categorized as having MUTYH-associated polyposis (MAP) [4]. FAP/AFAP (OMIM #175100) is a dominantly inherited colorectal cancer (CRC) predisposing syndrome [1,2] caused by mutations in the tumor suppressor gene adenomatous polyposis coli (APC). The encoded APC protein controls β-catenin turnover in the Wnt pathway [5,6]. Besides colonic polyposis and colorectal cancer, individuals with FAP can present a number of benign extracolonic features, including multiple osteomas, epidermoid cysts, desmoid tumors, and congenital hypertrophy of the retinal pigment epithelium [2]. Over 1100 different pathogenic APC mutations have been reported to date in the Leiden Open Variation Database (http://www.lovd.nl/2.0/), the majority of them being nonsense mutations or small insertions or deletions that lead to a truncated protein. Mutations causing AFAP have been reported to occur mainly in three regions of APC: at the 5′ end (the first five exons), in the alternatively spliced region of exon 9, or at the 3′ end (after codon 1580) [7][8][9].
MAP (OMIM #608456) is a recessively inherited syndrome caused by biallelic mutations in the mutY homolog (MUTYH) gene, which maps to chromosome 1p34.1 [4]. MUTYH encodes a DNA glycosylase that plays a key role in the base excision repair pathway by removing mispaired bases caused by the oxidation product 8-oxoG [4]. Nearly 300 different sequence variants have been identified in this gene (LOVD Mutation Database), including about 80 pathogenic mutations distributed throughout the gene at positions corresponding to different functional domains of the encoded protein [10]. In contrast to APC pathogenic variants, which mostly result in a truncated or absent protein, most MUTYH pathogenic variants are missense substitutions and only a minority are splice site or truncating mutations [11]. With regard to clinical features, most MUTYH-mutated patients present 10 to 100 colorectal adenomas, usually with later onset compared with FAP patients [4,12,13]. MUTYH mutation carriers represent approximately 7.5% of patients with more than 100 adenomas without an APC mutation, 40% of all patients with 10-100 polyps, and 0.3-1.7% of patients with fewer than 10 polyps and early-onset CRC with no family history [12,14-16]. Furthermore, MAP patients with no polyps at the time of CRC diagnosis have also been reported [17,18]. Because of the observed overlap between the clinical phenotypes of the FAP/AFAP and MAP syndromes, the identification of the causative mutation has important implications for family management, allowing effective clinical surveillance and accurate genetic counseling. Moreover, the spectrum of mutations and the genotype-phenotype correlations may have clinical impact and can be population-specific. Therefore, it is important to obtain genetic and clinical data from FAP/AFAP and MAP families in different populations. The aim of this study was to conduct a comprehensive molecular analysis to determine the spectrum of point mutations and large genomic rearrangements (LGR) in the APC and MUTYH genes in a series of 23 Brazilian polyposis patients. This paper summarizes the mutation screening data and outlines the most cost-effective approach to detect APC and MUTYH mutations in Brazilian polyposis patients. In addition, we discuss the genotype-phenotype associations found in these families in the context of previously described data from the literature.

Patients
The study examined the Hereditary Colorectal Cancer Registry of A. C. Camargo Hospital (São Paulo, Brazil) [19] for families clinically suspected of FAP (> 100 colorectal adenomas) or AFAP/MAP (10-100 colorectal adenomas), enrolled between January 1998 and July 2011. Between 2010 and 2011, genetic testing was offered to forty registered unrelated polyposis families, of which 23 were available and willing to undergo testing. Index patients were interviewed after providing informed consent, and the family history was obtained through verbal report and, whenever possible, confirmed with clinical or pathological reports. This study was performed in compliance with the Declaration of Helsinki and was approved by the ethics committee of A. C. Camargo Hospital (approval number: 1169/08-B). Once a mutation was identified in the index case, genetic counseling and molecular testing were offered to relatives.

PCR and sequence analysis
Mutation screening was performed by capillary sequencing of all coding exons of the APC [GenBank:NM_000038.5] and MUTYH [GenBank:NM_001128425.1] genes, including the intron-exon boundaries.
Patients clinically suspected of FAP (> 100 polyps) were first screened for APC mutations, while patients with attenuated polyposis (< 100 polyps) were first screened for MUTYH mutations. Patients negative for the first screened gene were then screened for the remaining one. Genomic DNA was obtained from leukocytes using a Puregene Genomic DNA Isolation Kit (Gentra Systems, Minneapolis, MN, USA) according to the manufacturer's instructions. PCR reactions used 25 ng of template and 500 nM of each primer in a final volume of 20 μl with GoTaq Green Master Mix (Promega, Madison, WI, USA). Approximately 200 ng of PCR-amplified fragments were purified with ExoSAP-IT (USB Corporation, Cleveland, OH, USA) and sequenced in both directions. Products were analyzed using an ABI 3130xl DNA sequencer (Applied Biosystems, Foster City, CA, USA) and the resulting sequences were aligned using CLC Bio Genomics Workbench software (Muehltal, Germany). The sequences of primers used for these analyses are available upon request. All mutations were confirmed in a second DNA sample. Mutations were recorded and referenced with respect to the cDNA sequence, using the nomenclature guidelines proposed by the Human Genome Variation Society (www.hgvs.org/mutnomen).

Allele-specific PCR
Allele-specific PCR [20] was performed for MUTYH mutations to confirm the presence of two heterozygous alleles (compound heterozygosity). Primers were designed to be specific for the wild-type or mutated nucleotide of one of the MUTYH mutations. Sequencing of allele-specific PCR amplicons was performed to reveal the haplotype phase of the second mutation. PCR conditions and primers are available upon request.

LGR screening
Seven patients were selected for LGR screening using multiplex ligation-dependent probe amplification (MLPA), array-comparative genomic hybridization (aCGH), and duplex quantitative PCR (qPCR): five negative for APC or MUTYH point mutations, one with a novel APC missense variant, and one with a monoallelic MUTYH mutation. All experiments were performed in duplicate. MLPA was performed using the SALSA P043-C1 APC Probemix kit (MRC Holland, Amsterdam, The Netherlands) following the manufacturer's protocol. PCR products were analyzed using an ABI 3130xl DNA sequencer (Applied Biosystems, Foster City, CA, USA), and gene dosage was calculated using Coffalyser V9.4 software (MRC Holland, Amsterdam, The Netherlands). The aCGH platform used in this study was the SurePrint G3 Human CGH Microarray Kit 4 × 180 k (G4449A; Agilent Technologies, Santa Clara, CA, USA), which has an average resolution of 18 kb, with 13 and 3 probes located within APC and MUTYH, respectively. Briefly, samples were labeled with Cy3- or Cy5-dCTPs by random priming. Purification, hybridization, and washing were performed as recommended by the manufacturer. Data extraction was conducted using Feature Extraction software (Agilent Technologies, Santa Clara, CA, USA). Genomic Workbench software (Agilent Technologies, Santa Clara, CA, USA) was applied to identify constitutive genomic imbalances using the statistical algorithm ADM-2, with a sensitivity threshold of 6.7 and threshold log2 ratios of 0.4 for duplication and −0.4 for deletion. Genomic alterations identified by MLPA and aCGH were validated using the duplex qPCR method previously established by our group [21].
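The validated duplex qPCR assay itself is described in ref. [21] and is not reproduced here. As a rough illustration of the arithmetic underlying qPCR-based gene dosage, a relative-quantification (2^-ddCt) calculation might look as follows; the Ct values and the dosage cut-offs below are hypothetical, chosen only for illustration.

```python
# Hedged sketch of relative gene-dosage estimation from duplex qPCR Ct
# values (2^-ddCt method). Ct values and thresholds are hypothetical
# illustrations, not the validated assay of ref. [21].

def relative_dosage(ct_target_sample: float, ct_ref_sample: float,
                    ct_target_control: float, ct_ref_control: float) -> float:
    """Return target-gene copy number relative to a normal (2-copy) control."""
    d_ct_sample = ct_target_sample - ct_ref_sample    # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)  # ~1.0 = 2 copies, ~0.5 = deletion, ~1.5 = duplication

ratio = relative_dosage(26.8, 25.0, 25.9, 25.1)
if ratio < 0.7:
    call = "deletion"
elif ratio > 1.3:
    call = "duplication"
else:
    call = "normal dosage"
print(f"dosage ratio = {ratio:.2f} -> {call}")
```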
Variant analysis
Mutations in the APC or MUTYH genes were considered deleterious if they: a) were classified as pathogenic in the LOVD database; b) introduced a premature stop codon in the protein sequence (nonsense or frameshift mutation); c) occurred at donor or acceptor splice sites; or d) were whole-exon deletions or duplications. To establish the pathogenicity of one novel missense variant, web-based programs that predict the effect of an amino acid substitution were applied (SIFT, PolyPhen, and MutationTaster). In addition, the frequency of this variant was assessed in 95 healthy Brazilian individuals.

Clinical features and genotype-phenotype correlations
The following clinical and pathological data were obtained for all families from the Hereditary Colorectal Cancer Registry of A. C. Camargo Hospital [19]: number of affected individuals, age at diagnosis, number of patients with extracolonic features, and primary sites of extracolonic tumors. The extent of the polyposis burden (number of adenomas) was assessed for the index cases through colonoscopy records and/or pathological reports from surgical specimens. For most family members this information was unavailable. Patients and their families were grouped according to the affected gene and the index case polyposis burden into five categories: group 1, APC-mutated families with fewer than 100 colorectal adenomas (attenuated polyposis); group 2, APC-mutated families with 100-1000 adenomas (intermediate polyposis); group 3, APC-mutated families with more than 1000 adenomas (severe polyposis); group 4, MUTYH-mutated families; and group 5, mutation-negative families. Genotype-phenotype correlations in the three APC-mutated groups were compared with those previously described, as reviewed by Nieuwenhuis and Vasen (2007) [22]. This review evaluated a large number of studies in FAP patients and proposed a categorization of the phenotypes according to the severity of the polyposis and the associated site of the APC mutation. Statistical evaluation was performed using Student's t-test in Prism 5 software (GraphPad, San Diego, CA, USA). Statistical significance was set at a p-value < 0.05.

Results
Twenty-three Brazilian families with a clinical diagnosis of classical or attenuated polyposis were included in this study. The majority of the index cases (15) presented an intermediate or severe FAP phenotype (> 100 polyps); 13 of them harbored an APC pathogenic mutation, while one patient was mutation-negative and one had a monoallelic MUTYH mutation. The remaining eight patients presented an attenuated polyposis burden (< 100 polyps), among whom five carried biallelic mutations in the MUTYH gene, one carried a novel APC duplication of exons 1-3, one presented a novel APC missense variant, and one was mutation-negative. Seven novel germline mutations (six pathogenic and one variant of unknown significance) were detected in this cohort, two of which have been recently published by our group [21,23]. The APC and MUTYH mutation spectrum, including information about previous reports of the detected mutations, is summarized in Table 1.

APC mutations
Fourteen pathogenic APC mutations were identified in this series: three small duplications, five small deletions, four nonsense mutations, one multiple-exon duplication, and one whole-gene deletion. Six of them were novel mutations (Table 1).
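The pathogenicity criteria (a)-(d) above amount to a simple decision rule. A minimal sketch of that logic follows; the field names of the Variant record are hypothetical, chosen for illustration only.

```python
# Minimal sketch of the deleteriousness rules (a)-(d) described above.
# The Variant fields are hypothetical names chosen for illustration.
from dataclasses import dataclass

@dataclass
class Variant:
    lovd_class: str          # e.g. "pathogenic", "benign", "unknown"
    effect: str              # "nonsense", "frameshift", "missense", ...
    at_splice_site: bool     # located at a donor/acceptor splice site
    exon_copy_change: bool   # whole-exon deletion or duplication

def is_deleterious(v: Variant) -> bool:
    return (
        v.lovd_class == "pathogenic"               # (a) pathogenic in LOVD
        or v.effect in ("nonsense", "frameshift")  # (b) premature stop codon
        or v.at_splice_site                        # (c) splice-site position
        or v.exon_copy_change                      # (d) whole-exon del/dup
    )

# A missense change failing all four rules stays a VUS and goes on to
# in silico prediction and control-population screening, as in the text.
print(is_deleterious(Variant("unknown", "missense", False, False)))  # False
```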
All patients presented distinct mutations, except for two unrelated probands who shared the hotspot mutation at codon 1309 (c.3927-3931delAAAGA; p.Glu1309Aspfs*4). One patient (ID13) presented a novel APC missense variant of unknown significance: c.5365G > C (p.Val1789Leu). This patient was diagnosed with attenuated polyposis at the age of 56 years, presenting around 20 polyps at the time of clinical diagnosis. In silico analyses using three different functional prediction programs (PolyPhen, SIFT, and MutationTaster) all classified the p.Val1789Leu variant as having minimal or no effect on protein function, with the following scores: 0 (PolyPhen); 0.30 (SIFT); 0.87 (P: 0.99, MutationTaster). Because the proband was the only affected member of the family, it was not possible to perform co-segregation analysis of the variant with the disease within the family; nevertheless, this variant was not detected in a control population of 95 healthy individuals. In this series, two patients presented APC LGRs, identified by MLPA and/or aCGH and confirmed by gene dosage qPCR. Patient ID02 presented a 5.2-Mb deletion at 5q21.3-q22.3 that encompassed the entire APC gene and 19 additional genes, which has been previously published by our group [21]. The second patient (ID17) presented a duplication of APC exons 1-3 that was identified by MLPA (Figure 1A) and validated by duplex qPCR [21] (Figure 1B).

MUTYH mutations
Biallelic germline mutations in the MUTYH gene were identified in five patients, among whom two were homozygous for the causative mutation and the remaining three were compound heterozygotes for two distinct pathogenic variants. A monoallelic mutation was identified in one patient. One patient (ID19) and her brother were homozygous for the p.Arg241Trp missense mutation, owing to a consanguineous marriage between the parents. The second homozygous patient presented a deletion of exons 4-16 (c.348 + 33_*64 + 146del4285insTA) on both alleles, and reported no known consanguinity in her family. This 4,285-bp deletion was the first LGR to be described in MUTYH, recently published by us [23] and by an independent group that found the same deletion in a French patient [33]. For the three patients harboring two distinct pathogenic variants (ID16, ID18, and ID447), we used allele-specific PCR to confirm the biallelic nature of the mutations. All three cases presented the hotspot missense mutation p.Tyr179Cys on one allele, accompanied by a second truncating mutation on the remaining allele (one deletion, one duplication, and one splice site mutation). One patient (ID24) was a monoallelic carrier of the hotspot missense mutation p.Gly396Asp, and no other mutation could be identified.

Clinical features
Clinical records and verbal reports obtained from the 23 index patients and their relatives revealed 113 affected individuals among all families; their summarized clinical data are described in Table 2.

Polyposis/CRC age at diagnosis
Across the entire series, the average age at diagnosis and first symptoms of CRC and/or polyposis was 32.6 years (range 7-67 years). The mean age of onset in APC-positive families was 46.3 years (range 35-56) for group 1 (attenuated FAP), 35.7 (range 18-67) for group 2 (intermediate FAP), and 29.2 (range 7-58) for group 3 (severe FAP). MAP families presented a mean age of 37.9 years (range 27-53 years), while the average age of onset in families with no identified mutation was 27.5 years (range 26-29 years).
Comparison among the five groups revealed that the APC-positive group 1, group 2, and MAP patients demonstrated a later age of onset compared with the severe FAP patients (group 3) (Figure 2).

Extracolonic manifestations
Extracolonic manifestations were reported in all APC-mutated families (Table 2). Gastric and duodenal polyps (upper gastrointestinal polyps) were the most common extracolonic manifestations observed in these patients, occurring in 11/14 families (79%) and across all three APC groups. Osteomas were observed most often in the severe FAP patients (6/9), and epidermoid cysts/lipomas occurred in the intermediate and severe FAP patients (5/13). Desmoid tumors were observed in 8/14 APC-positive families (57%) and were associated with different mutation sites, with only two of them occurring after codon 1444. Five families had more than one individual affected by desmoid tumors. MAP families had fewer extracolonic manifestations: only one family presented upper gastrointestinal polyps and none presented desmoid tumors. Regarding other tumor sites, papillary thyroid carcinoma appeared in one family of APC group 3; liver and breast cancers were reported in two families each (one from group 2 and one from group 3); uterine cancer was reported in one group 1 and one group 4 (MAP) family. Finally, lung, hematologic, brain, and skin cancer and melanoma were each reported in one family.

Comparison with described APC genotype-phenotype correlations
Genotype-phenotype correlations in polyposis syndromes have been evaluated in several studies, and a general association between the location of the mutation and the clinical manifestation has been observed, albeit with some inconsistencies [7,21,34-36]. Recently, Nieuwenhuis and Vasen (2007) [22] performed a meta-analysis and proposed a categorization of the phenotypes into three degrees of polyposis severity according to the associated site of the APC mutation. Attenuated FAP was associated with mutations before codon 157, after codon 1595, and in the alternatively spliced region of exon 9; severe polyposis was related to mutations between codons 1250 and 1464; and an intermediate phenotype was associated with APC mutations located in the remaining sequence of the gene (Figure 3). We compared the clinical and genetic data from our cohort with the APC codon limits defined by Nieuwenhuis and Vasen (2007) [22]. Figures 3A and 3B show the distribution of the polyposis phenotype of the index cases according to the location of their APC mutation and compare it with the genotype-phenotype correlations previously proposed. Nine (64%) of the 14 FAP families with an APC pathogenic mutation presented the expected polyposis severity according to the location of the APC mutation, while the remaining five families exhibited results discordant from the anticipated phenotype. Nine families presented a profuse polyposis burden (> 1000 adenomas) (Figure 3A and B).

Particular phenotypes
Two of the mutations identified here presented a remarkably more aggressive phenotype than expected given their location and the phenotypes reported in the literature. The mutation c.447dupC (p.Lys150GlnfsX18), identified in family ID05, is located in exon 4; the 5′ region of the APC gene is generally associated with attenuated polyposis and a later age of onset.
While the mean age of onset in this family (40 years old) was within the predicted range, the polyposis phenotype caused by this novel mutation was more aggressive than expected, since several members of the family presented more than 100 adenomatous polyps. Furthermore, three relatives had developed a desmoid tumor, usually not observed in patients with a mutation at the 5′ end of the APC gene. An aggressive phenotypic expression was also observed for the mutation c.3050-3053delATGA (p.Asn1017MetfsX4), identified in family ID10 (Figure 4). Although the mutation is located in the region associated with an intermediate FAP phenotype, the proband presented a high number of polyps (> 1000) at the age of 15, a desmoid tumor at the age of 20, and thyroid carcinoma and jaw keratocysts at 21 years. Her brother and seven cousins also developed polyps at early ages (7, 14, 15, 19, 17, 22, and two at 29 years old). Desmoid tumors were described in another three relatives.

Discussion
This is the first report of a comprehensive mutational analysis and genotype-phenotype correlation in Brazilian polyposis families. Through direct sequencing of the APC and MUTYH genes, MLPA, aCGH, and duplex qPCR, we were able to identify pathogenic mutations in 20 of 23 index cases, a detection rate of 87%. Of the remaining three patients, two were mutation-negative and one harbored a novel APC missense variant (p.Val1789Leu). Because of the lack of affected relatives for co-segregation analysis and the inconclusive results given by in silico analysis and control population screening, the clinical significance of this alteration is yet to be determined. Interestingly, a parallel study from our institution, performed in high-risk cancer patients, revealed that this patient also carries a rare germline microdeletion of the PIP gene possibly associated with an increased cancer risk (Silva 2013, unpublished observations), suggesting that these two alterations may be acting in synergy. The detection rate in polyposis patients from other populations varies markedly, ranging from 39 to 90%; the variation reflects different selection criteria for testing and the diverse sensitivity of screening strategies [4,27,37-40]. Our data reinforce the need to apply a combination of mutation-screening methods to detect the disease-causing mutation in polyposis patients efficiently. Most APC mutations previously described were identified with conventional methods, for instance denaturing high performance liquid chromatography or the protein truncation test, which can have a relatively low detection rate. Nowadays, the gold standard detection method for polyposis patients is direct DNA sequencing of all APC and MUTYH coding exons (including intron-exon boundaries), accompanied by screening for LGR, as was performed here. The majority of mutations identified in our cohort were distinct, except for two families who shared the codon 1309 hotspot APC mutation and three families who presented the p.Tyr179Cys hotspot mutation in one of the MUTYH alleles. The absence of the commonly reported APC mutation at codon 1061, and the relatively low frequency of the hotspot mutations APC 1309, MUTYH p.Tyr179Cys, and p.Gly396Asp, are consistent with the fact that Brazilian patients represent an admixed population, probably lacking a founder APC or MUTYH mutation.

Figure 4. Family tree of family ID10. This family harbored a truncating mutation at codon 1017, located in a region usually associated with an intermediate FAP phenotype.
This mutation displayed an aggressive phenotypic expression: the proband (individual III:1, indicated by an arrow) presented her first polyposis symptoms at the age of 15, a desmoid tumor at age 20, and a thyroid carcinoma at age 21. When available, the ages of onset are presented under each individual. Her brother (III:2) and six first cousins (III:5, 6, 7, 8, 15, and 22) also developed polyps at early ages (14 to 29 years old). The most prematurely affected was a second cousin (individual IV:3), who was diagnosed with polyps at the age of 7. Desmoid tumors were described in another three relatives: two uncles at the age of 40 (II:7 and II:9) and one cousin at 32 years old (III:7). The p.Tyr179Cys and p.Gly396Asp MUTYH mutations were identified in three families and one family, respectively, and corresponded to 44% of all mutated alleles identified in this gene (4/9). These are also the most prevalent mutations in populations of European origin, probably because of a founder effect, and account for approximately 80% of all reported mutant alleles [12]. A recent report described a screen for these two variants in 30 Brazilian patients with clinical phenotypes of MAP and FAP; 5/30 patients were identified as carrying one of these two hotspot mutations, four of them in a biallelic state [41]. However, because the entire coding sequence of MUTYH was not evaluated in all patients in that study, we cannot make comparisons with the frequency found in our patients. In our series, we could not identify a causative mutation in two index cases: one with attenuated polyposis and one with an intermediate polyposis phenotype. A possible explanation for the polyposis phenotype in these patients is the presence of unusual mutations in the APC or MUTYH genes, such as intronic or promoter point mutations, epimutations, or genetic mosaicism. In this sense, a recent study demonstrated that up to 8% of APC/MUTYH-negative polyposis patients presented a deep intronic APC variant that led to an aberrant transcript [42]. A second possibility is the existence of other susceptibility genes, and with the current possibility of screening all coding genes by next-generation exome sequencing, it can be anticipated that novel polyposis-predisposing genes will be identified. Considering that this study is the first comprehensive analysis of APC and MUTYH mutations in Brazilian polyposis patients, we attempted to determine the most cost-effective approach to detect the causative mutation in this population. Among patients presenting fewer than 100 polyps (N = 8), 62% carried a biallelic mutation in the MUTYH gene. Among patients with more than 100 polyps (15 cases), three cases presented a mutation in APC exon 8 and eight cases (53%) exhibited a mutation between codons 1017 and 1650 of APC exon 15. Interestingly, this initial region of exon 15 comprises only 16% of the coding sequence of APC, yet presented a high mutation rate in our cohort. Therefore, based on our results, an optimized scheme for the molecular diagnosis of APC and MUTYH mutations in the Brazilian population might proceed as follows: for patients presenting > 100 polyps, codons 1017-1650 of APC exon 15 should be sequenced first, followed by exon 8, and then the remaining APC exons; for patients presenting fewer than 100 polyps, MUTYH should be screened first.
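The proposed cost-effective screening order can be summarized as a small decision procedure. The sketch below simply encodes the strategy stated above; the step labels are illustrative, not a validated laboratory pipeline.

```python
# Sketch of the screening strategy proposed above for Brazilian polyposis
# patients. Step names are illustrative labels, not a validated pipeline.

def screening_order(polyp_count: int) -> list[str]:
    if polyp_count > 100:
        # Classical/intermediate polyposis: APC first, highest-yield regions first.
        return [
            "sequence APC exon 15, codons 1017-1650",
            "sequence APC exon 8",
            "sequence remaining APC exons",
            "sequence MUTYH (all coding exons)",
            "screen for large rearrangements (MLPA/aCGH/duplex qPCR)",
        ]
    # Attenuated polyposis (10-100 polyps): whole-gene MUTYH first.
    return [
        "sequence MUTYH (all coding exons)",
        "sequence APC (all coding exons)",
        "screen for large rearrangements (MLPA/aCGH/duplex qPCR)",
    ]

for step in screening_order(polyp_count=35):
    print(step)
```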
Because none of the studied MUTYH patients presented only the hotspot mutations p.Tyr179Cys or p.Gly396Asp, the whole gene should be sequenced, instead of undertaking an initial search for these variants, as recommended for other populations [18]. Genotype-phenotype correlations in polyposis syndromes are of great clinical interest, because they can contribute to better genetic counseling and simplify mutation screening. In several studies, an association between the location of the mutation and the clinical manifestations has been observed [7,20,27,34-36]. However, since several contradictions have also been reported [22], it remains unclear whether the genetic information should guide clinical decision-making, such as the extent of prophylactic colectomy or the protocol for clinical surveillance [7,43-47]. Regarding the age at which clinical surveillance should begin, the established guidelines suggest that classical FAP patients should start endoscopic surveillance in their early teens, while AFAP and MAP families could start surveillance at age 18-20 [46]. Similar to the literature, our results demonstrated that severe FAP patients had an earlier age of onset (on average 10 years younger than AFAP or MAP patients). However, the most premature case in our series was a 7-year-old patient from a family with an APC mutation at codon 1017, a region usually associated with an intermediate phenotype. A particularly aggressive phenotypic expression of this mutation was observed in this family; several relatives presented a high number of polyps (> 1000), an early age of onset, and desmoid tumors. This case demonstrates the importance of considering the family clinical history when planning the surveillance of other family members. One of the most clinically important discrepancies observed in our study concerns desmoid tumors, which, even if histologically benign, can lead to life-threatening complications through their size and impingement on vital structures. Indeed, desmoid tumors represent the second leading cause of death in FAP patients [46,48] and were identified at high frequency in our cohort, occurring in 57% (8/14) of APC-mutated families. Although they are described as usually associated with mutations after codon 1444 [22], only two of the eight families affected by desmoid tumors in our study harbored mutations after this codon. In concordance with our findings, previous studies with large cohorts have also failed to confirm this association [49], or have described different boundaries for the increased risk of these tumors, such as codon 1310 [50] or 1395 [35]. Furthermore, besides the location of APC mutations, several other risk factors are suggested to be related to desmoid tumor development, such as surgical trauma [51], pregnancy [52], and especially a positive family history of desmoid tumors [49]. In this regard, the latter appears to be the most important risk factor in our population, since five of the eight families with desmoid tumors presented more than one relative affected by this tumor. The differences observed in certain phenotypic features between our series and those of others may be because of our relatively small number of FAP and MAP families, may reflect some selection or data collection bias, or may be related to phenotypic peculiarities of this specific population.
In this sense, the majority of FAP genotype-phenotype studies were performed in European cohorts [34-40,43,47,49], and the self-declared ethnic origin of most families in our study was also European (mainly Portuguese, Italian, and Spanish), except for two Japanese families and one Arabian family. However, it is important to highlight that Brazilians represent an extremely admixed population, with most individuals presenting some degree of African and Amerindian ancestry [53]. Furthermore, intra- and interfamilial variations in the FAP phenotype are also well documented in other populations, and it is likely that modifying genes and environmental factors, as well as functional polymorphisms of the normal APC allele, play a crucial role in determining the clinical course of the disease [54,55]. In this regard, the different genetic background and/or environmental factors of our population could be responsible for the phenotypic differences observed in our study. For instance, in our series, desmoid tumors were much more prevalent in APC-mutated patients (57%) than in previous studies (10-15%) [22,49,54], indicating that perhaps our set of patients represents a distinct group regarding this extracolonic feature. Finally, the lack of a clear phenotypic expression of the mutations identified in our study complicates clinical predictions based on knowledge of the mutation site, and as a result, we can make no specific surveillance and management recommendations for the Brazilian population. In order to accomplish that, larger studies need to be carried out. Instead, we recommend that clinical decisions regarding an individual patient should not be based strictly on the genotype, but mainly on the colonic phenotype and the family clinical history.

Conclusions
In this comprehensive investigation of the APC and MUTYH mutation spectrum in Brazilian polyposis patients, we identified a high frequency of germline mutations, allowing the identification of several novel pathogenic variants and the proposal of a cost-effective screening approach for this population. Notably, a significant number of APC mutation-positive families were not consistent with predicted genotype-phenotype correlations, and this should be taken into consideration in genetic counseling and patient management for our population.
6,903.4
2013-04-05T00:00:00.000
[ "Medicine", "Biology" ]
Why space could be quantised on a different scale to matter

The scale of quantum mechanical effects in matter is set by Planck's constant, $\hbar$. This represents the quantisation scale for material objects. In this article, we present a simple argument why the quantisation scale for space, and hence for gravity, may not be equal to $\hbar$. Indeed, assuming a single quantisation scale for both matter and geometry leads to the 'worst prediction in physics', namely, the huge difference between the observed and predicted vacuum energies. Conversely, assuming a different quantum of action for geometry, $\beta \ll \hbar$, allows us to recover the observed density of the Universe. Thus, by measuring its present-day expansion, we may in principle determine, empirically, the scale at which the geometric degrees of freedom should be quantised.

Wave-particle duality and ℏ
Classical mechanics is deterministic [1]. If its initial conditions are known, the probability of finding a particle at a given point on its trajectory, at the appropriate time t, is 100%. The corresponding state is described by a delta function, δ³(x − x′), with dimensions of (length)⁻³. This is the probability density of the particle located at x = x′. In quantum mechanics (QM), probability amplitudes are fundamental. Position eigenstates, |x⟩, are the rigged basis vectors of an abstract Hilbert space, where ⟨x|x′⟩ = δ³(x − x′). These have dimensions of (length)^(−3/2), and more general states may be constructed by the principle of quantum superposition [2]. The resulting wave function, ψ(x), represents the probability amplitude for finding the particle at each point in space, and the corresponding probability density is |ψ(x)|² [3]. Since ψ(x) can also be decomposed as a superposition of plane waves, e^{ik·x}, an immediate consequence is the uncertainty principle Δ_ψx^i Δ_ψk_j ≥ (1/2)δ^i_j, where i, j ∈ {1, 2, 3} label orthogonal Cartesian axes. This is a purely mathematical property of ψ that follows from elementary results of functional analysis [4]. In canonical QM, we relate the particle momentum p to the wave number k via Planck's constant, following the proposal of de Broglie, p = ℏk. It follows that

$\Delta_\psi x^i \, \Delta_\psi p_j \geq (\hbar/2)\,\delta^i_j \, . \quad (1.1)$

This is the familiar Heisenberg uncertainty principle (HUP). We stress that the HUP is a consequence of two distinct physical assumptions: 1. the principle of quantum superposition, and 2. the assumption p = ℏk, which determines the scale of wave-particle duality.

Let us also clarify the meaning of the word 'particle'. We stress that canonical QM treats all particles as point-like, so that eigenstates with zero position uncertainty may be realised, at least formally. However, gravitational effects are expected to modify the HUP by introducing a minimal length, Δx > 0 [6,7]. Next, we discuss how this relates to theoretical predictions of the vacuum energy.

Minimal length and the vacuum energy
In canonical QM, the background space is fixed and classical. Individual points are sharply defined and the distances between them can be determined with arbitrary precision [8]. By contrast, thought experiments in the quantum gravity regime suggest the existence of a minimum resolvable length scale of the order of the Planck length, Δx ≃ l_Pl, where l_Pl = √(ℏG/c³) ≃ 10⁻³³ cm [6]. Below this, the classical concept of length loses meaning, so that perfectly sharp spacetime points cannot exist [7].
This motivates us to take $l_{\rm Pl}$ as the UV cut off for vacuum field modes, but doing so yields the so-called 'worst prediction in physics' [9], namely, the prediction of a Planck-scale vacuum density, Eq. (2.1). Unfortunately, the observed vacuum density is more than 120 orders of magnitude lower. In Eq. (2.1), the mass scale $m \ll m_{\rm Pl} = \hbar/(l_{\rm Pl} c) \simeq 10^{-5}$ g is set by the Standard Model of particle physics [10] and the limits of integration are $k_{\rm Pl} = 2\pi/l_{\rm Pl}$ and $k_{\rm dS} = 2\pi/l_{\rm dS}$, where $l_{\rm dS} = \sqrt{3/\Lambda}$ is the de Sitter length. This is comparable to the present day radius of the Universe, $r_{\rm U} \simeq 10^{28}$ cm, which may be expressed in terms of the cosmological constant, $\Lambda \simeq 10^{-56}\,{\rm cm^{-2}}$ [11]. More detailed calculations alleviate this discrepancy [12], but our naive calculation highlights the problem of treating $l_{\rm Pl}$ and $m_{\rm Pl}$ as interchangeable cutoffs. We now discuss an alternative way to obtain a minimum length of order $l_{\rm Pl}$ without generating unfeasibly high energies. Wave-point duality and $\beta \neq \hbar$ Clearly, one way to implement a minimum length is to discretise the geometry, as in loop quantum gravity and related approaches [13]. However, in general, quantisation is not discretisation [14]. The key feature of quantum gravity is that it must allow us to assign a quantum state to the background, giving rise to geometric superpositions, and, therefore, superposed gravitational field states [15]. The associated spectrum may be discrete or continuous, finite or infinite. But how to assign a quantum state to space itself? One possible, but simple, answer is that we must begin by assigning a quantum state to each point in the classical background. Individual points can then be mapped to superpositions of points, which results in the unique classical geometry being mapped to a superposition of geometries, as required [16]. In effect, we may apply the quantisation procedure point-wise, and, in the process, eliminate the concept of a classical point from our description of physical reality. This can be achieved by first associating a rigged basis vector, i.e., a ket $|\mathbf{x}\rangle$, with each coordinate '$\mathbf{x}$'. We then note that $\langle\mathbf{x}|\mathbf{x}'\rangle = \delta^3(\mathbf{x} - \mathbf{x}')$ is obtained as the zero-width limit of a Gaussian probability distribution, $|g(\mathbf{x} - \mathbf{x}')|^2$, with standard deviation $\Delta_g x$. Taking $\Delta_g x > 0$ therefore 'smears' sharp spatial points over volumes of order $\sim (\Delta_g x)^3$, giving rise to a minimum observable length scale [16]. Motivated by thought experiments [6], we set $\Delta_g x \simeq l_{\rm Pl}$. Since $g$ may also be expressed as a superposition of plane waves, an immediate consequence is the wave-point uncertainty relation, $\Delta_g x^i \, \Delta_g k_j \geq (1/2)\,\delta^i{}_j$. (3.1) This is an uncertainty relation for delocalised 'points', not point-particles in the classical background of canonical QM [16]. A key question we must then address is: what is the momentum of a quantum geometry wave? For matter waves, $\mathbf{p} = \hbar\mathbf{k}$, but we have no a priori reason to believe that space must be quantised on the same scale as material bodies. In fact, setting $\Delta_g x \simeq l_{\rm Pl}$ and $\mathbf{p} = \hbar\mathbf{k}$ yields $\Delta_g p \simeq m_{\rm Pl} c$, giving a vacuum density of order $\rho_{\rm vac} \simeq (\Delta_g p)/[(\Delta_g x)^3 c] \simeq c^5/(\hbar G^2)$. This is essentially the same calculation as that given in Eq. (2.1), which results from the same physical assumptions. Hence, we set $\mathbf{p} = \beta\mathbf{k}$ for waves in the quantum geometry, where $\beta \neq \hbar$ is the fundamental quantum of action for geometry. Smearing each point in the background convolves the canonical probability density with a Planck-width Gaussian.
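The size of the mismatch quoted above can be checked by running the naive numbers directly. The sketch below is an order-of-magnitude check only, using the CGS values given in the text ($l_{\rm Pl} \simeq 10^{-33}$ cm, $\Lambda \simeq 10^{-56}$ cm$^{-2}$); the conversion $\rho_\Lambda = \Lambda c^2/(8\pi G)$ is a standard assumption on my part, not the paper's Eq. (2.1).

```python
# Back-of-the-envelope reproduction of the ~120-orders-of-magnitude gap.
import math

hbar = 1.05e-27        # erg s
G    = 6.67e-8         # cm^3 g^-1 s^-2
c    = 3.0e10          # cm/s
Lam  = 1.0e-56         # cm^-2, cosmological constant (value from the text)

l_pl   = math.sqrt(hbar * G / c**3)          # Planck length, ~1.6e-33 cm
rho_pl = c**5 / (hbar * G**2)                # Planck mass density, g/cm^3
rho_L  = Lam * c**2 / (8 * math.pi * G)      # observed vacuum density (assumed formula)

print(f"l_Pl        ~ {l_pl:.1e} cm")
print(f"rho_Planck  ~ {rho_pl:.1e} g/cm^3")
print(f"rho_Lambda  ~ {rho_L:.1e} g/cm^3")
print(f"discrepancy ~ 10^{math.log10(rho_pl / rho_L):.0f}")   # roughly 10^123
```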
The resulting total uncertainties are $(\Delta_\Psi X^i)^2 = (\Delta_\psi x^i)^2 + (\Delta_g x^i)^2$ and $(\Delta_\Psi P_j)^2 = (\Delta_\psi p_j)^2 + (\Delta_g p_j)^2$, (3.2) for each $i, j \in \{1, 2, 3\}$, where $\Psi := \psi g$ denotes the composite wave function of a particle in smeared space [16,17,18,19]. (In the relativistic regime, the tensor nature of gravitational waves must also be accounted for, but this may be neglected in the non-relativistic limit in which Eq. (3.1) remains valid [16].) In this model, a function is associated to each spatial point by doubling the degrees of freedom in the classical phase space, and the classical point labelled by $\mathbf{x}$ is associated with the quantum probability amplitude $g(\mathbf{x} - \mathbf{x}')$. This is the mathematical representation of a delocalised 'point' in the quantum nonlocal geometry. For each $\mathbf{x}$, the additional variable $\mathbf{x}'$ may take any value in $\mathbb{R}^3$. Together, $\mathbf{x}$ and $\mathbf{x}'$ cover $\mathbb{R}^3 \times \mathbb{R}^3$, which is interpreted as a superposition of 3D Euclidean spaces [16]. The process of 'smearing' points is easiest to visualise in the case of a toy one-dimensional universe. In this case, the original classical geometry is the x-axis and the $(x, x')$ plane on which $g(x - x')$ is defined represents the smeared superposition of geometries. These issues are considered in detail in refs. [16,17,18,19] (see, in particular, Fig. 1 of ref. [16]), but are not discussed at length in the present article for want of space. Note also that classical points are defined, where necessary, as in standard differential geometry. However, the model considered here is not based on classical points or on the fixed manifolds that form the mathematical basis of classical spacetimes. Instead, we associate each point in the classical background, labelled by $\mathbf{x}$, with a vector in a quantum Hilbert space, $|g_{\mathbf{x}}\rangle$. The associated wave function, $\langle\mathbf{x}'|g_{\mathbf{x}}\rangle = g(\mathbf{x} - \mathbf{x}')$, may be regarded as a Gaussian of width $\sigma_g \simeq l_{\rm Pl}$. This represents the quantum state of a delocalised 'point' in the quantum geometry, but this term is used here in an imprecise sense, only for illustration (hence the inverted commas). Finally, we note that the existence of a finite cosmological horizon implies a corresponding limit on the particle momentum, which may be satisfied by setting $\Delta_g p \simeq \hbar\sqrt{\Lambda/3}$. The resulting quantum of action for geometry is $\beta \simeq 2\,\Delta_g x\,\Delta_g p \simeq \hbar \times 10^{-61}$. The new constant $\beta$ sets the Fourier transform scale for $g(\mathbf{x} - \mathbf{x}')$, whereas the matter component $\psi(\mathbf{x})$ transforms at the canonical scale $\hbar$ [16,19]. However, this does not violate the existing no-go theorems for the existence of multiple quantisation constants. These apply only to species of material particles [25], and still hold in the smeared-space theory, undisturbed by the quantisation of the background [19]. The vacuum energy, revisited The introduction of a new quantisation scale for space radically alters our picture of the vacuum, including our naive estimate of the vacuum energy. This must be consistent with the generalised uncertainty relations (3.2). Expanding $\Delta_\Psi X^i$ with $\Delta_g x^i \simeq l_{\rm Pl}$ gives the generalised uncertainty principle (GUP), and expanding $\Delta_\Psi P_j$ with $\Delta_g p_j \simeq \hbar\sqrt{\Lambda/3}$ yields the extended uncertainty principle (EUP), previously considered in the quantum gravity literature [26,27]. Equations (3.2) may also be combined with the HUP, which holds independently for $\psi$ [16,19], to give two new uncertainty relations of the form $\Delta_\Psi X^i \, \Delta_\Psi P_j \geq \cdots \geq [(\hbar + \beta)/2]\,\delta^i{}_j$. The central terms in each relation depend on either $\Delta_\psi x^i$ or $\Delta_\psi p_j$, exclusively.
Minimising the product of the generalised uncertainties, $\Delta_\Psi X^i\,\Delta_\Psi P_j$, we obtain the following length and momentum scales: $l_\Lambda \simeq \sqrt{l_{\rm Pl}\, l_{\rm dS}} \simeq 0.1$ mm and $m_\Lambda c$, with $m_\Lambda \simeq \sqrt{m_{\rm Pl}\, m_{\rm dS}} \simeq 10^{-3}\,{\rm eV}/c^2$, (4.1) where $m_{\rm dS} = \hbar/(l_{\rm dS} c) \simeq 10^{-66}$ g is the de Sitter mass. This gives a vacuum energy of order the observed value, $\rho_{\rm vac} \simeq \rho_\Lambda$, as required. Taking $k_\Lambda = 2\pi/l_\Lambda$ as the UV cut off in Eq. (2.1), with $m = m_\Lambda$, also gives the correct order of magnitude value, $\rho_{\rm vac} \simeq \rho_\Lambda$ [16]. Note that, here, space is 'smeared', not in the sense implied by non-commutative geometry [20,21,22,23], but in the way that a quantum reference frame is smeared with respect to its classical counterpart [24]. More specifically, the model presented in [16,17,18,19] represents a nontrivial two-parameter generalisation (including both $\hbar$ and $\beta$) of the QRF formalism of canonical quantum mechanics. This corresponds to the modified de Broglie relation [16], whose noncanonical term may be interpreted, heuristically, as the additional momentum 'kick' induced by quantum fluctuations of the nonlocal geometry. As stressed later in the main body of the text, this kind of generalisation evades the well known no-go theorems for multiple quantisation constants [25], which apply only to species of material particles. The term 'quantum geometry wave', introduced above Eq. (3.1), therefore has a precise meaning. It refers to the plane wave components of $\tilde{g}_\beta(\mathbf{p} - \mathbf{p}')$, which is the $\beta$-scaled Fourier transform of $g(\mathbf{x} - \mathbf{x}')$. If $\sigma_g \simeq l_{\rm Pl}$ is the width of $g(\mathbf{x} - \mathbf{x}')$, the corresponding width of a delocalised point in momentum space is $\tilde{\sigma}_g \simeq \hbar\sqrt{\Lambda}$. The predictions of canonical quantum theory, in which quantum matter propagates on a sharp classical space(time) background, are recovered by taking the limits $\sigma_g \to 0$ and $\tilde{\sigma}_g \to 0$, simultaneously. Together, these yield $\beta \to 0$ [16]. In this model, vacuum modes seek to optimise the generalised uncertainty relations induced by both $\hbar$ and $\beta$, yielding the observed vacuum energy. Any attempt to excite higher-order modes leads to increased pair-production of neutral dark energy particles, of mass $m_\Lambda \simeq 10^{-3}\,{\rm eV}/c^2$, together with the concomitant expansion of space required to accommodate them, rather than an increase in energy density [19]. The vacuum energy remains approximately constant over large distances, but exhibits granularity on scales of order $l_\Lambda \simeq 0.1$ mm [16,28,29]. It is therefore intriguing that tentative evidence for small oscillations in the gravitational force, with approximately this wavelength, has already been observed [30,31]. Summary The simple analysis above shows that, if space-time points are delocalised at the Planck length, $\Delta x \simeq l_{\rm Pl}$, the associated momentum uncertainty cannot be of the order of the Planck momentum, $\Delta p = \hbar/\Delta x \simeq m_{\rm Pl} c$. We are then prompted to ask: is it reasonable to assume that quantised waves of space-time carry the same quanta of momentum as matter waves with the same frequency? Though this is a common assumption, underlying virtually all attempts to quantise gravity that utilise a single action scale, $\hbar$, we note that it has, a priori, no theoretical justification. We have shown that relaxing this stringent requirement by introducing a new quantum of action for geometry, $\beta \neq \hbar$, leads to interesting possibilities, with the potential to open up brand new avenues in quantum gravity research [19,32]. These include the proposal that the observed vacuum energy, and the present-day accelerated expansion of the universe that it drives, are related to the quantum properties of space-time [17,18].
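The scales appearing in Eq. (4.1) and the value of $\beta$ fixed above can be reproduced to order of magnitude from the Planck and de Sitter scales alone. The geometric-mean combinations used below are my own assumption, chosen to be consistent with the quoted values ($l_\Lambda \sim 0.1$ mm, $m_\Lambda \sim 10^{-3}$ eV/$c^2$, $\beta \sim \hbar \times 10^{-61}$); they are not copied from the paper's exact equations.

```python
# Order-of-magnitude estimate of the smeared-space scales (CGS inputs from the text).
import math

hbar, G, c = 1.05e-27, 6.67e-8, 3.0e10    # erg s, cm^3 g^-1 s^-2, cm/s
Lam = 1.0e-56                              # cm^-2

l_pl = math.sqrt(hbar * G / c**3)          # ~1.6e-33 cm
m_pl = math.sqrt(hbar * c / G)             # ~2.2e-5  g
l_ds = math.sqrt(3.0 / Lam)                # ~1.7e28  cm
m_ds = hbar / (l_ds * c)                   # ~2e-66   g

l_lambda = math.sqrt(l_pl * l_ds)          # assumed geometric mean
m_lambda = math.sqrt(m_pl * m_ds)          # assumed geometric mean
beta_over_hbar = l_pl * math.sqrt(Lam / 3) # Delta_g x * Delta_g p / hbar

erg_to_eV = 6.24e11
print(f"l_Lambda   ~ {l_lambda * 10:.2f} mm")                      # ~0.05 mm
print(f"m_Lambda   ~ {m_lambda * c**2 * erg_to_eV:.1e} eV/c^2")    # ~1e-3 eV
print(f"beta/hbar  ~ {beta_over_hbar:.1e}")                        # ~1e-61
```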
In this model, a measurement of the dark energy density constitutes a de facto measurement of the geometry quantisation scale, $\beta$, fixing its value to $\beta \simeq \hbar \times 10^{-61}$. This essay was written as a non-technical introduction to the smeared-space model, whose formalism was developed over a series of published works [16,17,18,19,32]. It is based on the material presented at the 4th International Conference on Holography, Hanoi, Vietnam (August 2020), and designed to be accessible to a wide and diverse audience. Interested readers are referred to the previous works [16,18], in which the formalism was derived from the physical assumptions introduced above, and [19], which contains the most comprehensive summary of existing results, for full mathematical details. However, since these papers are long and complicated, a more technical, but still brief, introduction to the smeared-space theory is given in the Appendix. A Details of the model In [16], the smeared-space model of quantum geometry was proposed, in which each point $\mathbf{x}$ in the classical background is associated with a vector in a Hilbert space, $|g_{\mathbf{x}}\rangle = \int g(\mathbf{x}' - \mathbf{x})\,|\mathbf{x}'\rangle\,{\rm d}^3\mathbf{x}'$, (A.1) where $\langle g_{\mathbf{x}}|g_{\mathbf{x}}\rangle = 1$. This describes a form of nonlocal geometry that is intrinsically quantum in nature, and the width of $|g(\mathbf{x}' - \mathbf{x})|^2$ is assumed to be of the order of the Planck length [16,18,19], in accordance with our expectations from gedanken experiment arguments [33,34]. It has long been known that classical nonlocal geometries, such as those introduced in [35], can be generated by first identifying each point in the classical manifold with a Dirac delta, $\delta^3(\mathbf{x} - \mathbf{x}')$. Nonlocality is then introduced by smearing each delta into a finite-width probability distribution $P(\mathbf{x} - \mathbf{x}')$, for example, a normalised Gaussian [36]. In this case, no new degrees of freedom are introduced, beyond those present in canonical quantum mechanics, since $\mathbf{x}'$ is simply a parameter that determines the position of $P$. The smeared space model introduced in [16,18] is different in that it first associates each point $\mathbf{x}'$ with a rigged basis vector of a Hilbert space, $|\mathbf{x}'\rangle$. The latter is then smeared to produce the normalised state (A.1). In this case, $\langle\mathbf{x}'|g_{\mathbf{x}}\rangle = g(\mathbf{x}' - \mathbf{x})$ is a genuine quantum mechanical amplitude, not a probability distribution. It has dimensions of (length)$^{-3/2}$, not (length)$^{-3}$, and, in principle, may contain nontrivial phase information. In this model, $|g_{\mathbf{x}}\rangle$ represents the state of a Planck-scale localised 'point' in the quantum geometry. Each Planck-scale localised point is then smeared into a superposition of all points in the background space by imposing the map $S: |\mathbf{x}\rangle \mapsto |\mathbf{x}\rangle \otimes |g_{\mathbf{x}}\rangle$. (A.2) The smearing map (A.2) may be visualised as follows: for each point $\mathbf{x} \in \mathbb{R}^3$ in the classical geometry it generates one whole 'copy' of $\mathbb{R}^3$, thereby doubling the size of the classical phase space. The resulting smeared geometry is represented by a 6D volume in which each point $(\mathbf{x}, \mathbf{x}')$ is associated with a quantum probability amplitude, $g(\mathbf{x}' - \mathbf{x})$. This is interpreted as the amplitude for the coherent transition $\mathbf{x} \leftrightarrow \mathbf{x}'$ and the 6D phase space is interpreted as a superposition of 3D geometries [16,18,19]. In the nonrelativistic limit, each geometry in the smeared superposition is Euclidean, but differs from all others by the pair-wise exchange of two points [18]. It is assumed that the interchange $\mathbf{x} \leftrightarrow \mathbf{x}'$ exchanges the canonical amplitudes, $\psi(\mathbf{x}) \leftrightarrow \psi(\mathbf{x}')$, which leads to additional fluctuations in the observed position of the particle, over and above those present in canonical quantum theory.
We now review, briefly, how these fluctuations give rise to generalised uncertainty relations (GURs), including the GUP and EUP previously considered in the quantum gravity literature [26,27]. For simplicity, we may imagine $|g(\mathbf{x}' - \mathbf{x})|^2$ as a normalised Gaussian centred on $\mathbf{x}' = \mathbf{x}$, but, here, $\mathbf{x}'$ is no longer a parameter. By introducing the tensor product structure (A.2) we have doubled the number of degrees of freedom of the theory, vis-à-vis canonical quantum mechanics. Those in the left-hand subspace, labelled by $\mathbf{x}$, represent the degrees of freedom of a canonical quantum particle, whereas those in the right-hand subspace, labelled by $\mathbf{x}'$, determine the influence of the background geometry. The action of $S$ (A.2) on $|\mathbf{x}\rangle$ then induces a map on the canonical quantum state, $|\psi\rangle = \int \psi(\mathbf{x})\,|\mathbf{x}\rangle\,{\rm d}^3\mathbf{x}$, such that $|\psi\rangle \mapsto |\Psi\rangle = \iint \psi(\mathbf{x})\,g(\mathbf{x}' - \mathbf{x})\,|\mathbf{x}, \mathbf{x}'\rangle\,{\rm d}^3\mathbf{x}\,{\rm d}^3\mathbf{x}'$. The square of the smeared-state wave function, $|\Psi(\mathbf{x}, \mathbf{x}')|^2 = |\psi(\mathbf{x})|^2\,|g(\mathbf{x}' - \mathbf{x})|^2$, represents the probability distribution associated with a quantum particle propagating in the quantum geometry. Since $|\psi(\mathbf{x})|^2$ represents the probability of finding the particle at the fixed classical point $\mathbf{x}$ in canonical quantum mechanics, $|\psi(\mathbf{x})|^2\,|g(\mathbf{x}' - \mathbf{x})|^2$ represents the probability that it will now be found, instead, at a new point $\mathbf{x}'$. If $g(\mathbf{x})$ is a Gaussian centred on the origin, $\mathbf{x}' = \mathbf{x}$ remains the most likely value, but fluctuations within a volume of order $\sim \sigma_g^3$, where $\sigma_g$ is the standard deviation of $|g(\mathbf{x})|^2$, remain relatively likely [16,18,19]. Furthermore, since an observed position '$\mathbf{x}'$' cannot determine which point(s) underwent the transition $\mathbf{x} \leftrightarrow \mathbf{x}'$ in the smeared superposition of geometries, we must sum over all possibilities by integrating the joint probability distribution $|\Psi(\mathbf{x}, \mathbf{x}')|^2$ over ${\rm d}^3\mathbf{x}$, yielding ${\rm d}^3 P(\mathbf{x}'|\Psi)/{\rm d}\mathbf{x}'^3 = \left(|\psi|^2 * |g|^2\right)(\mathbf{x}')$, (A.5) where the star denotes a convolution. In this formalism, only primed degrees of freedom represent measurable quantities, whereas unprimed degrees of freedom are physically inaccessible [16,18,19]. The variance of a convolution is equal to the sum of the variances of the individual functions, so that the probability distribution (A.5) gives rise to the GUR $(\Delta_\Psi X^i)^2 = (\Delta_\psi x'^i)^2 + (\Delta_g x'^i)^2$. (A.6) This is the detailed derivation of the first of Eqs. (3.2), given in the main text. However, note that the primes on the physically measurable variables were omitted in Eqs. (3.2), for the sake of notational simplicity. It is straightforward to verify that (A.6) is obtained from the standard braket construction $(\Delta_\Psi X^i)^2 = \langle\Psi|(\hat{X}^i)^2|\Psi\rangle - \langle\Psi|\hat{X}^i|\Psi\rangle^2$, where $\hat{X}^i$ is the generalised position-measurement operator. Next, we note that the HUP, expressed here in terms of the physically accessible primed variables, $\Delta_\psi x'^i\,\Delta_\psi p'_j \geq (\hbar/2)\,\delta^i{}_j$, (A.8) holds independently of Eq. (A.6). Combining the two and identifying the standard deviation of $|g(\mathbf{x})|^2$ with the Planck length according to $\Delta_g x'^i = \sigma_g^i \simeq l_{\rm Pl}$ (A.9) [16], then yields the GUP (A.10), where $\alpha = 4(m_{\rm Pl} c)^{-2}$, to first order in the expansion [16]. For $\Delta_\psi x'^i \gg \sigma_g^i \simeq l_{\rm Pl}$, we have that $\Delta_\Psi X^i \simeq \Delta_\psi x'^i$, and, in this limit, Eq. (A.10) reduces to the standard expression for the GUP [26]. In the momentum space picture, the composite matter-plus-geometry state $|\Psi\rangle$ is expanded in the entangled basis $|\mathbf{p}\,\mathbf{p}'\rangle$, (A.11) with the matter part transformed at the scale $\hbar$, as in canonical QM, and the geometric part, $\tilde{g}_\beta(\mathbf{p}' - \mathbf{p})$, transformed at the scale $\beta \neq \hbar$, the fundamental quantum of action for geometry [16,18,19]. Note that, in Eq. (A.11), the basis $|\mathbf{p}\,\mathbf{p}'\rangle$ is entangled and cannot be separated into a simple tensor product state, i.e., $|\mathbf{p}\,\mathbf{p}'\rangle \neq |\mathbf{p}\rangle \otimes |\mathbf{p}'\rangle$. We emphasise this by not writing a comma between $\mathbf{p}$ and $\mathbf{p}'$, by contrast with the position space basis, $|\mathbf{x}, \mathbf{x}'\rangle = |\mathbf{x}\rangle \otimes |\mathbf{x}'\rangle$.
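The key step in Eq. (A.6), namely that smearing convolves $|\psi|^2$ with $|g|^2$ and that the variances then add, is easy to verify numerically. The sketch below uses arbitrary, non-Planckian widths purely for illustration.

```python
# 1-D illustration: the variance of a convolution is the sum of the variances.
import numpy as np

x = np.linspace(-30.0, 30.0, 6001)
dx = x[1] - x[0]

def norm_gauss(xv, mu, s):
    return np.exp(-(xv - mu)**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

p_psi = norm_gauss(x, 1.0, 2.0)   # |psi(x)|^2 : canonical matter distribution
p_g   = norm_gauss(x, 0.0, 0.5)   # |g(x)|^2   : smearing distribution (illustrative width)

# probability of observing x', as in Eq. (A.5): a convolution of the two densities
p_obs = np.convolve(p_psi, p_g, mode="same") * dx

def variance(p):
    norm = np.sum(p) * dx
    mu = np.sum(x * p) * dx / norm
    return np.sum((x - mu)**2 * p) * dx / norm

print(variance(p_psi) + variance(p_g))   # 4.25  (= 2.0**2 + 0.5**2)
print(variance(p_obs))                   # ~4.25: variances add, reproducing Eq. (A.6)
```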
Nonetheless, $\tilde{g}_\beta(\mathbf{p}' - \mathbf{p})$ can be interpreted as the probability amplitude for the transition $\mathbf{p} \leftrightarrow \mathbf{p}'$ in smeared momentum space, by analogy with the position space representation [16]. A unitarily equivalent formalism, which is akin to a quantum reference frame transformation [24] of the formalism sketched here, but with $\hbar \leftrightarrow \beta$, and in which the position and momentum space bases are symmetrised, is presented in [18,19]. The momentum space expansion is equivalent to the modified de Broglie relation $\mathbf{p}' = \hbar\mathbf{k} + \beta(\mathbf{k}' - \mathbf{k})$, which holds for particles propagating in the smeared background; the non-canonical term may be interpreted, heuristically, as an additional momentum 'kick' induced by quantum fluctuations of the spacetime [16,18,19]. We now fix $\beta$ from physical considerations and show how it is related to the minimum length and momentum scales of the GUP and EUP. The general properties of the Fourier transform [4] ensure that the 'wave-point' uncertainty relation, $\Delta_g x'^i\,\Delta_g k'_j \geq (1/2)\,\delta^i{}_j$, holds in addition to Eq. (A.6) and the HUP (A.8), and that the inequality is saturated for Gaussian distributions. This is simply Eq. (3.1) from the main text, expressed more rigorously in terms of the requisite primed variables. Next, we identify the standard deviation of $|\tilde{g}_\beta(\mathbf{p})|^2$ with the de Sitter momentum, $\Delta_g p'_j = \tilde{\sigma}_{g\,j} \simeq m_{\rm dS} c$, (A.16) which represents the minimum momentum of a particle whose de Broglie wavelength is of the order of the radius of the Universe, $r_{\rm U} \simeq l_{\rm dS} = \sqrt{3/\Lambda}$. By analogous reasoning to that presented above, the probability of obtaining the observed value '$\mathbf{p}'$' from a smeared momentum measurement is the convolution $\left(|\tilde{\psi}_\hbar|^2 * |\tilde{g}_\beta|^2\right)(\mathbf{p}')$, which gives rise to the momentum space GUR $(\Delta_\Psi P_j)^2 = (\Delta_\psi p'_j)^2 + (\Delta_g p'_j)^2$. (A.19) This is the second of Eqs. (3.2) from the main text, expressed in terms of primed variables, and can be obtained from the standard braket construction for the generalised momentum-measurement operator. Substituting the HUP (A.8) into Eq. (A.19) and Taylor expanding to first order then yields the EUP (A.21), where $\eta = (1/2)\,l_{\rm dS}^{-2}$ [16]. For $\Delta_\psi p'_j \gg \Delta_g p'_j \simeq m_{\rm dS} c$, we have $\Delta_\Psi P_j \simeq \Delta_\psi p'_j$ from (A.19) and, in this limit, Eq. (A.21) reduces to the standard expression for the EUP. Having obtained both the GUP and EUP from the smeared space formalism, we now show how they can be combined to give the so-called extended generalised uncertainty principle (EGUP). This incorporates the effects of both canonical gravitational attraction and the presence of a constant background dark energy density on the microscopic dynamics of quantum particles [26,27]. Combining Eqs. (A.6), (A.8) and (A.19) directly, substituting for $\Delta_g x'^i$ and $\Delta_g p'_j$ from Eqs. (A.9) and (A.16), taking the square root and expanding to first order, then ignoring the subdominant term of order $\sim l_{\rm Pl}\, m_{\rm dS} c$, yields $\Delta_\Psi X^i\,\Delta_\Psi P_j \geq \frac{\hbar}{2}\,\delta^i{}_j\left[1 + \alpha(\Delta_\Psi P_j)^2 + \eta(\Delta_\Psi X^i)^2\right]$. (A.23) This is equivalent to the heuristic EGUP obtained in [27], but with $\Delta x^i$ and $\Delta p_j$ replaced by well defined standard deviations, $\Delta_\Psi X^i$ and $\Delta_\Psi P_j$. These represent the width of the composite matter-plus-geometry state $|\Psi\rangle$ in the position and momentum space representations, respectively [16,18,19]. Furthermore, it is possible to show that the product of generalised uncertainties, $\Delta_\Psi X^i\,\Delta_\Psi P_j$, is minimised when $\Delta_\psi x'^i$ and $\Delta_\psi p'_j$ take the values given in Eqs. (A.24). These results show that GURs, including the GUP, EUP and EGUP, may be obtained without non-canonical modifications of the Heisenberg algebra [16,18,19]. (See also [37] for a similar result.)
This allows the smeared space model to evade the problems that plague existing modified commutator models, including violation of the equivalence principle, the velocity-dependence of the minimum length, and the soccer ball problem [26,38]. Finally, we note that substituting $\Delta_g x'^i = \sigma_g^i \simeq l_{\rm Pl}$ and $\Delta_g p'_j = \tilde{\sigma}_{g\,j} \simeq m_{\rm dS} c$ into Eqs. (A.24) yields the length and momentum scales given in Eqs. (4.1) of the main text, and, hence, the observed dark energy density over macroscopic distances. By contrast, using the standard HUP for the geometric part of the composite quantum wave function $\Psi = \psi g$, which is equivalent to taking the limit $m_{\rm dS} \to m_{\rm Pl}$, $\beta \to \hbar$ in the smeared-space model, yields the familiar 'worst prediction in theoretical physics', i.e., a vacuum energy of the order of the Planck density, $\rho_{\rm vac} \simeq \rho_{\rm Pl}$.
6,165.6
2020-05-26T00:00:00.000
[ "Physics" ]
Low-threshold optical bistability with multilayer graphene-covering Otto configuration In this paper, we propose a modified Otto configuration to realize tunable and low-threshold optical bistability at terahertz frequencies by attaching multilayer graphene sheets to a nonlinear substrate interface. Our work demonstrates that the threshold of optical bistability can be markedly reduced (by three orders of magnitude) by covering the nonlinear substrate with multilayer graphene sheets, due to strong local field enhancement with the excitation of surface plasmons. We present the influences of the Fermi energy of graphene, the incident angle, the thickness of the air gap and the relaxation time of graphene on the hysteresis phenomenon, and give a way to optimize the surface plasmon resonance, which will enable us to further lower the minimal power requirements for realizing optical bistability due to the strong interaction of light with graphene sheets. These results are promising for the realization of terahertz optical switches, optical modulators and logical devices. Introduction Due to its unique and useful optical and electronic properties, graphene has attracted considerable attention from the photonics and optics community. As a next-generation material with the potential to replace traditional materials in electrical and optical devices, graphene has many fascinating optical properties [1][2][3], such as strong light-graphene interaction, high mobility, broadband and high-speed operation, etc. It seems to be an ideal optical material candidate for designing controllable optical devices that operate in terahertz and other optical frequency ranges, due to the tunability of the charge carrier density and conductivity of graphene [4]. Surface plasmons (SPs) are coherent delocalized electron oscillations that exist at a metal surface. In the past few decades, the subwavelength optics of SPs have been extensively studied by a wide range of scientists [5]. However, even precious metals, which are widely studied and treated as the best effective plasmonic materials, are hardly tunable and exhibit large losses that restrict their flexibility in optical communication and signal processing devices. Naik et al provided some alternative plasmonic materials beyond gold and silver [6]. Moreover, graphene SPs are an appropriate alternative to metal SPs because the former provide strong light-matter interaction and controllability, with the advantage of being highly tunable by several methods. Optical bistability (OB) is a method of controlling light with another light source [7,8]. Bistability in nonlinear optical systems refers to an optical effect where one input intensity can induce two steady transmission light intensities, depending on the history of the input. The phenomenon of OB offers a pragmatic route to optical memory [9], optical information processing [10], optical transistors [11] and so on. In general, OB can be realized at the interface between a linear and a nonlinear material, with the reflected light intensity showing hysteresis [12]. However, the possibility of realizing the hysteresis effect with one-atom-thick materials such as graphene seems to be emerging. In the past few decades, there has been extensive research on SPs in the fields of biochemistry [13], physics [14], communication [15] and so on, but few reports about graphene SPs with the phenomenon of OB at terahertz frequencies.
The investigation of SP modes in air-graphene-SiO2-Si structures showed that graphene-based transverse magnetic (TM) (or transverse electric (TE)) SPs are bound modes, which decay into the air over a range of tens of micrometers [16]. By depositing graphene on a silicon photonic crystal cavity, Gu et al achieved an effective nonlinear optical device that could enable ultra-low-power resonant OB, self-induced regenerative oscillations and coherent four-wave mixing [17,18]. Bao et al demonstrated that graphene nanobubbles offer a new and promising type of optical nonlinear medium to overcome the optical path length limitation of atomically thin 2D films, so that OB and all-optical switching are obtained in such graphene nanobubbles [19]. He et al demonstrated an actively tunable device in which the resonance of the transmitted or reflected curves can be tuned over a wide range [20]. We have reported a tunable OB at the graphene-covered nonlinear interface, but the threshold for OB is high due to the shortage of effective feedback mechanisms [4]. Peres et al found that a single layer of graphene suspended in air shows OB in the terahertz range [21]. He et al showed that for graphene MM structures, the resonant transmission curves can be tuned over a wide range by controlling applied electric fields. As the Fermi level of the graphene layer increases, the resonant transmission becomes stronger, and the resonant dips shift significantly to higher frequency [22]. By considering the nonlinear conductivity of graphene, we established the theoretical relation of the nonlinear optical response for dielectric/nonlinear graphene/dielectric heterostructures and further demonstrated tunable OB at terahertz frequencies; however, the threshold for OB is still high where SPs cannot be excited [23]. Recently, we proposed a modified Kretschmann-Raether configuration to realize low-threshold optical bistable devices at terahertz frequencies [24]. It is found that the switching-up and switching-down intensities required to observe optical bistable behavior are lowered markedly due to the excitation of the graphene SPs, but the influence of the layer number of graphene on the hysteresis effect is not significant. In this paper, we propose a modified Otto configuration to realize tunable and low-threshold OB at terahertz frequencies by attaching multilayer graphene sheets to the nonlinear substrate interface. Our work demonstrates that the threshold of OB can be markedly reduced (three orders of magnitude compared to monolayer graphene) by covering the nonlinear substrate with multiple layers of graphene sheets, due to the strong local field enhancement with the excitation of the SPs. Model and method Recent research has shown that graphene can also support well-confined SP modes in the mid-infrared and terahertz ranges [25,26]. As their electronic transport properties can be readily tuned via a gate voltage, graphene structures present an attractive alternative material for supporting SPs. However, due to the relatively large momentum mismatch between the SPs and light photons, their excitation on graphene still remains a challenging issue. We investigate the excitation of SPs on highly doped graphene sheets with attenuated total reflection (ATR) via the modified Otto configuration.
The geometrical setup is shown in figure 1: graphene is attached to a nonlinear substrate of refractive index n_2 = 1.6, which is separated from the germanium prism (n_p = 4) by an air gap of small thickness d (n_1 = 1). In the following, the substrate is taken to be nonpolar so that effects of remote phonon scattering may be neglected. The graphene sheets sandwiched between two dielectric media support a single bound SP mode, which is different from metallic thin films, where the SP dispersion splits into two branches. The applied electrode is also shown in figure 1; here we choose a p-type transparent conducting oxide CuAlO2 thin film as the electrode due to its good conductivity and transparency in the terahertz range. The refractive index of CuAlO2 is about 2.0 in the mid-infrared and terahertz spectra [27]. If the nonlinear substrate is thick enough, the insertion of the transparent electrode will have no impact on the behavior of OB. Moreover, the refractive index of CuAlO2 is very close to the linear refractive index of the nonlinear substrate. Graphene is characterized by a complex surface conductivity σ, which is a function of the angular frequency ω = (2πc)/λ, the Fermi energy E_F, the electron-phonon relaxation time τ and the absolute temperature T of the environment. The complex surface conductivity σ comprises intraband and interband contributions, σ = σ_intra + σ_inter, which can be expressed according to the Kubo formula [4,28], where e and ħ are universal constants related to the electron charge and the reduced Planck constant respectively, E_F is the Fermi energy (or chemical potential) and k_B is the Boltzmann constant. In the following studies, we find that the graphene electron-phonon relaxation time can modify the hysteresis behavior markedly, and OB cannot be excited if the value of the relaxation time is too low. But if the structure is fixed, it is not easy to change the relaxation time τ. In the present work, as an example, we choose τ = 0.5 ps. Moreover, we use the wavelength λ = 100 μm in the terahertz range. For short wavelengths, such as near-infrared light (for example, λ = 1.0 μm), there is no OB phenomenon as a result of the lack of SPs at this wavelength. For mid-infrared light (for example, λ = 10 μm), OB can occur; however, the graphene would be broken down due to the very high power requirements for exciting OB. Now, let us model each graphene monolayer as a surface conducting sheet with T = 300 K. Under the random-phase approximation, intraband scattering dominates in highly doped graphene, and its conductivity takes on a Drude-like form [29], with E_F = ħv_F(πn_2D)^(1/2), where v_F = 10^6 m s^-1 is the Fermi velocity of electrons and n_2D is the carrier density. Obviously, σ_intra is highly dependent on the working frequency and the Fermi energy. The carrier density n_2D can be electrically controlled by an applied gate voltage, thereby leading to a voltage-controlled Fermi energy E_F and hence a voltage-controlled surface conductivity σ; this could provide an effective route to achieving controlled OB at graphene-covered substrate interfaces. Considering the monolayer graphene and the boundary conditions, we can obtain the total reflection coefficient and total transmission coefficient for the light reflected and transmitted from the substrate interface covered by the graphene, where r_p1 and t_p1 are the reflection coefficient and transmission coefficient at the interface of the germanium prism and air gap, and ε_1 is the dielectric constant of the air gap.
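Since the paper's explicit expressions for σ are not reproduced in the text above, the sketch below uses the standard Drude-like intraband form for highly doped graphene, σ_intra(ω) = (e²E_F/πħ²)·i/(ω + i/τ), which is the form the random-phase-approximation limit described here is usually written in; treat it as an assumption rather than the paper's equation. The numerical parameters follow the text (λ = 100 μm, E_F = 0.9 eV, τ = 0.5 ps), and the sign of Im σ depends on the chosen time convention.

```python
# Sketch: intraband (Drude-like) sheet conductivity of highly doped graphene.
import numpy as np

e    = 1.602e-19          # electron charge, C
hbar = 1.055e-34          # reduced Planck constant, J s
eV   = 1.602e-19          # 1 eV in joules

def sigma_intra(omega, E_F_eV, tau):
    """Drude-like intraband sheet conductivity (siemens), assumed standard form."""
    E_F = E_F_eV * eV
    return (e**2 * E_F / (np.pi * hbar**2)) * 1j / (omega + 1j / tau)

lam   = 100e-6                        # wavelength: 100 um (terahertz), as in the text
omega = 2 * np.pi * 3e8 / lam         # angular frequency
print(sigma_intra(omega, E_F_eV=0.9, tau=0.5e-12))   # complex sheet conductivity
```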
r_p1 and t_p1 are independent of the graphene and the nonlinear response, and hence have the same form as the Fresnel formulae at the interface of conventional materials. However, r_12 and t_12, the reflection coefficient and transmission coefficient at the interface of the air gap and the substrate, are completely different from the ordinary Fresnel formulae due to the presence of the graphene sheet and the nonlinear effect in the substrate, and follow from our previous work [4]. It should be noted that the optical conductivity of graphene σ appears in (5) and (6), where r (t) is the total reflection (transmission) coefficient. Results and discussions The excitation of SPs with monolayer or multilayer graphene in the modified Otto configuration under lower incident power (linear substrate) is demonstrated with the reflection spectra in figure 2(a). In the absence of graphene on the substrate surface, SPs cannot be excited due to the lack of negative permittivity at the interface of the air gap and dielectric substrate for the TM polarization. But with the addition of graphene sheets, the conditions for SPs can be satisfied owing to the negative imaginary part of the optical conductivity of graphene [30]. In order to excite SPs, we have adopted an Otto configuration by adding a high-index prism above the air gap. The properties of TM-polarized SPs in the Otto configuration have been investigated in detail in [30]. What is most interesting is that the reflectance dips move rapidly to smaller incident angles and the angular widths of the SP resonance (SPR) become very narrow with increasing layer number of the graphene sheets, which is very favourable for realizing low-threshold OB or optical switching, as we discuss in the next section. Moreover, the SPR is very sensitive to the thickness of the air gap and the Fermi energy of graphene. The dependences of the reflectance on the incident angle for different thicknesses of the air gap and Fermi energies of graphene are shown in figures 2(b) and (c), respectively. From these figures, we find that it is very important to optimize the SPR conditions to obtain the minimum reflectance and the narrowest angular width of the SPR (in order to acquire the maximal local field enhancement), which is advantageous for obtaining low-threshold OB. It should also be noted that large values of the Fermi level, as large as E_F = 0.9 eV, have been chosen in order to effectively excite SPs; here the reflection dip approaches but does not equal zero, as shown in figure 2(c). Moreover, Fermi energies as high as E_F = 1 eV have been achieved experimentally [31]. The Fermi energies can be further increased if patterned graphene is used, such as ribbons or plates. Now we discuss the optimization of the SPR by changing the thickness of the air gap for different layer numbers of graphene, as shown in figure 3. For N = 1 in figure 3(a), the optimal conditions for obtaining the minimum reflectance dips are situated around d = 1.5 μm and the resonant angle is about θ = 57.5°. However, the angular width is very wide and a minimum reflectance of less than 5% is hard to achieve, which leads to weak local field enhancement. For N = 3 in figure 3(b), the optimal conditions for obtaining the minimum reflectance dips are situated around d = 4.5 μm and the resonant angle is about θ = 25.5°. In this case, the angular width becomes very narrow and the minimum reflectance is less than 1%, which can give rise to strong local field enhancement.
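A minimal way to reproduce the linear (low-intensity) SPR dips described above is a standard two-interface (Airy) construction for the prism/air-gap/graphene-on-substrate stack, with the N graphene layers treated as a single sheet of conductivity Nσ. The sketch below is an assumption-laden illustration rather than the paper's Eqs. (5) and (6): the Fresnel forms with a conducting sheet, the Nσ approximation and all numerical values other than those quoted in the text (n_p = 4, n_1 = 1, n_2 = 1.6, λ = 100 μm, E_F = 0.9 eV, τ = 0.5 ps) are my own choices.

```python
# Sketch: TM reflectance of the Otto stack with a graphene-loaded lower interface.
import numpy as np

eps0, c = 8.854e-12, 3e8
e, hbar, eVJ = 1.602e-19, 1.055e-34, 1.602e-19

def sigma_intra(omega, E_F_eV=0.9, tau=0.5e-12):
    # assumed Drude-like intraband conductivity (parameters from the text)
    return (e**2 * E_F_eV * eVJ / (np.pi * hbar**2)) * 1j / (omega + 1j / tau)

def kz(eps, k0, kx):
    return np.sqrt(eps * k0**2 - kx**2 + 0j)

def otto_reflectance(theta_deg, d, N, lam=100e-6, n_p=4.0, n_1=1.0, n_2=1.6):
    k0, w = 2 * np.pi / lam, 2 * np.pi * c / lam
    kx = n_p * k0 * np.sin(np.radians(theta_deg))
    kzp, kz1, kz2 = kz(n_p**2, k0, kx), kz(n_1**2, k0, kx), kz(n_2**2, k0, kx)
    r_p1 = (n_1**2 * kzp - n_p**2 * kz1) / (n_1**2 * kzp + n_p**2 * kz1)  # TM, prism->gap
    s = N * sigma_intra(w)                    # N layers modelled as one sheet, N*sigma
    g = s * kz1 * kz2 / (w * eps0)            # sheet-conductivity term in the TM coefficient
    r_12 = (n_2**2 * kz1 - n_1**2 * kz2 + g) / (n_2**2 * kz1 + n_1**2 * kz2 + g)
    ph = np.exp(2j * kz1 * d)                 # round trip across the air gap
    r = (r_p1 + r_12 * ph) / (1 + r_p1 * r_12 * ph)
    return np.abs(r)**2

# example scan: N = 3, d = 4.5 um (the text reports a resonance near 25.5 degrees)
angles = np.linspace(20.0, 40.0, 401)
R = np.array([otto_reflectance(t, d=4.5e-6, N=3) for t in angles])
print(angles[np.argmin(R)], R.min())
```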
If the number of graphene layers is increased further, the angular width of the SPR can be narrowed further and the reflectance dip can reach almost perfect zero reflectance, as shown in figure 3(c). For N = 5, the optimal conditions for obtaining the minimum reflectance dips are situated around d = 4.0 μm and the resonant angle is about θ = 24.1°. Next, we discuss the optical bistable behavior in the modified Otto configuration and demonstrate possible methods to lower the threshold of the optical hysteresis effect. In order to obtain a larger nonlinear refractive index, as much as 1 × 10^-10 m^2 W^-1, we choose a semiconductor, such as GaAs or InSb, as the nonlinear substrate [8]. In order to simplify the discussion, we also assume that this semiconductor is non-dispersive and that the linear optical refractive index is n_20 = 1.6. Moreover, compared with common graphene substrates, e.g. Si or SiC, GaAs is a black-gray solid and is stable in air. Further, the electron mobility of GaAs is 5-6 times larger than that of silicon. Hence, graphene on a GaAs substrate is characterized by a rather high carrier density due to doping, so that we can tune the carrier density in this construction by adding a gate electrode to the surface of the GaAs substrate. At moderate levels of carrier density, the plasmon approaches the surface of the graphene and GaAs, leading to the effects of graphene SPs. Without graphene on the nonlinear substrate, the hysteresis effect is hard to observe at the incident light powers considered here, due to the lack of effective feedback mechanisms in the prism-air-nonlinear substrate structure. However, the optical bistable behavior can be observed immediately if we cover the nonlinear substrate with a single-layer graphene sheet. As shown in figure 4(a), the typical S-shaped curve of the relationship between the incident intensity and the reflected intensity is found, and the hysteresis indicates bistability, where two stable output powers can be obtained for a given input power. That is to say, OB can be realized by the insertion of graphene in this modified Otto configuration. In order to further understand the optical bistable behavior, we also show the dependence of the reflectance on the incident light intensity. We choose the incident angle θ = 80° (greater than the resonant angle θ_R = 57.5°), hence it works in the total internal reflection (TIR) mode, where the reflectance is high, as shown in figure 2(a) or figure 3(a). However, as the incident light intensity increases, the reflectance reduces gradually, and it switches to the low reflectance state (attenuated total internal reflection, ATIR) when the incident light intensity approaches the threshold value at I_In ≈ 4410 kW cm^-2. Conversely, the reflectance decreases gradually if we decrease the incident intensity, and it can switch from the low reflectance state (ATIR mode) back to the high reflectance state (TIR mode) at the threshold value I_In ≈ 3105 kW cm^-2. Apparently, this provides us with a method to realize all-optical switching by using this novel optical bistable behavior. But there is a huge barrier due to the requirement of very high incident light intensity. Based on the previous discussion, we know that we can realize optimal SPR conditions, with a very narrow angular width and a very low reflectance dip, by covering the nonlinear substrate with multilayer graphene sheets.
Here, in addition to the influence of the layer number of graphene sheets on the hysteresis effect, the Fermi energy of graphene, the incident angle, the thickness of the air gap and the relaxation time of graphene also exert an important influence on the optical bistable behavior, as shown in figure 5. Figure 5(a) shows the effect of the Fermi energy E_F on the reflectance bistability. Due to the dependence of the optical conductivity of graphene on the Fermi energy E_F, changing E_F will have a great influence on the hysteresis effect. Moreover, the Fermi energy changes as E_F = ħv_F(πn_2D)^(1/2), where the carrier density is n_2D = C_g(V_g - V_Dirac)/e, and C_g, V_g and V_Dirac are the gate capacitance, the gate voltage, and the gate voltage corresponding to the charge-neutral Dirac point. Hence, the Fermi energy can be tuned by an externally applied voltage, so the hysteresis effect can be electric-field controlled. By decreasing the Fermi energy, the imaginary part of the optical conductivity is decreased; hence less incident intensity is needed to maintain OB, leading to lower switch-on and switch-off intensities. However, as the Fermi energy E_F increases, both the switch-up and switch-down threshold light intensities shift rapidly to higher incident light intensity, and the width of the hysteresis loop is also enhanced markedly. This might be useful for controlling the threshold value and the hysteresis cycle width of the bistable curve simply by adjusting the Fermi energy of the graphene sheet. Consequently, it provides an effective way of manipulating OB with an externally applied voltage. Besides the dependence of the reflectance on the Fermi energy, it is also influenced by the parameters of the incident light, as shown in figure 5(b). When the incident angle increases, more light intensity is needed to switch the reflectance from the TIR mode to the ATIR mode; hence the bistability becomes more obvious and the switch-on and switch-off intensities are enhanced. What is more, OB is also strongly dependent on the thickness of the air gap, as shown in figure 5(c). It is clear that both the switch-on and switch-off intensities shift to higher values with increasing thickness, because the thickness deviates from the optimal value and hence greater incident light intensity is needed. Furthermore, it is worth studying the influence of the losses in graphene (the electron relaxation time and the losses have an inverse relationship) on the bistable behavior in the THz range. Figure 5(d) displays the curves of OB at different relaxation times of graphene. It can be seen that the influence of the losses in graphene on the hysteresis is obvious and significant. As the electron-phonon relaxation time decreases, the loss is increased and more incident light intensity is needed to realize the switching from the TIR mode to the ATIR mode; hence both the switch-on and switch-off thresholds shift to higher values, showing that the incident intensity must be increased to give rise to OB. Moreover, if the relaxation time is small enough, the hysteresis loop may disappear and OB does not exist. Conclusion In summary, we analyzed the hysteresis response of a modified Otto configuration with graphene sheets covering a nonlinear dielectric substrate.
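The gate-voltage control described above can be made concrete using the two relations quoted in the text, n_2D = C_g(V_g - V_Dirac)/e and E_F = ħv_F(πn_2D)^(1/2). In the sketch below, the gate capacitance is an assumed value (roughly electrolyte-gating territory, chosen so that a 10 V swing reaches E_F ≈ 0.9 eV); it is not taken from the paper.

```python
# Sketch: gate voltage -> carrier density -> Fermi energy, per the relations in the text.
import numpy as np

e, hbar, vF = 1.602e-19, 1.055e-34, 1.0e6     # SI units; v_F = 1e6 m/s as in the text

def fermi_energy_eV(Vg, V_dirac=0.0, C_g=9.5e-3):
    """C_g in F/m^2 (~0.95 uF/cm^2 here; an assumed, electrolyte-gating-scale value)."""
    n2d = C_g * (Vg - V_dirac) / e            # carrier density per m^2
    return hbar * vF * np.sqrt(np.pi * n2d) / e

print(fermi_energy_eV(10.0))                  # ~0.9 eV for the assumed gate capacitance
```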
Our work demonstrated that the threshold of OB can be markedly reduced by covering the nonlinear substrate with multilayer graphene sheets, due to strong local field enhancement with the excitation of SPs, where the threshold value can be reduced by three orders of magnitude by increasing the layer number of graphene to five. Moreover, we also showed that the OB depends on the properties of graphene (such as the Fermi energy and relaxation time), the thickness of the air gap, the angle of the incident light and the incident wavelength. The tunability and the low threshold of OB with graphene could pave the way for optical logic, optical memory, optical transistors, all-optical switching, etc.
4,839.8
2016-05-25T00:00:00.000
[ "Physics" ]
Analysis of the Application of Service Innovation to the Study of Public Service Quality in the "House of the Sky" in Sidenreng Rappang Regency Service innovation is adopted not only as a measure to deal with changes in the organizational environment. More than that, the purpose of service innovation is to improve the quality of public services. Public service innovation continues to be carried out by the government in various sectors, one of which is the education sector. The education sector has an important role and can make a major contribution to national development. Quality education produces superior human resources for carrying out nation building. This study aims to find out how the application of the "Rumah Langit" service innovation affects the quality of public services felt by the people of Sidenreng Rappang Regency. This research was conducted from May to November 2023. The research locations include four villages in Sidenreng Rappang Regency. The research method used is a quantitative method, with the research population being the people of Sidenreng Rappang Regency who are recipients of the "Rumah Langit" service innovation. The research sample was taken using a simple random sampling technique. The tests used in this research are validity tests, reliability tests, descriptive statistical analysis and inferential statistical analysis. The research results show that the indicators used for each variable are in the good category. The results of the significance test show that service innovation has a significant effect, where the significance value is smaller than the alpha value and the calculated t value is greater than the t table value, so it can be concluded that there is a significant influence of service innovation on the quality of public services in the "Rumah Langit" service in Sidenreng Rappang Regency. INTRODUCTION Entering the era of the industrial revolution and technological acceleration has resulted in global changes in all aspects of life. Innovations across various sectors are needed to keep up with the flow of these changes (Alami et al., 2022; Joda et al., 2021; Ottenbacher & Gnoth, 2005; Sheikh et al., 2021). One of the important issues that continues to change is the issue of public services. Important changes in public services in Indonesia respond to the conditions of increasingly advanced science and technology and increasingly fierce competition. Service is the process of fulfilling needs directly through the activities of others, a concept that ideally should always remain current in various institutional aspects (Khan & Ghalib, 2012; Mishra et al., 2021; Windrum & Koch, 2008). Not only in business institutions, innovation is also a very important aspect for government institutions. As a public service-oriented institution, the state can use various innovations to interact with non-governmental institutions more effectively and efficiently.
Public service performance can be improved if there are "exit" and "voice" mechanisms. The "exit" mechanism means that if public services are not of high quality, the public must have the opportunity to choose another public service provider institution that they prefer. Meanwhile, the "voice" mechanism means that there is an opportunity to express dissatisfaction with public service delivery institutions (Lai et al., 2021; Li et al., 2016; Pariyasiri, 2022). The "exit" and "voice" mechanisms can be input for public service delivery agencies in creating service innovation and novelty. These mechanisms can be used by the government to find out what services are needed. In Indonesia, public service providers are regulated by the government through the Ministry of Administrative Reform and Bureaucratic Reform (PANRB). The Ministry of PANRB has the task of organising affairs in the field of state apparatus empowerment and bureaucratic reform to assist the President in organising state government in various fields. One of the areas that this institution is responsible for is the education sector. Various efforts have been made by the government to encourage and capture new innovations in the field of education from various groups. Looking at current conditions, the quality of education in Indonesia is still relatively low. The education gap is still very much felt in various regions in Indonesia. Education Statistics 2022, as one portrait of Indonesian education, describes the condition of Indonesian education based on the results of Susenas March 2022. Merdeka Belajar is one of the steps to transform education for the realisation of Indonesia's superior human resources (HR) (Buralkiyeva et al., 2022). The programme is an effort to support the achievement of the 9 Development Priority Agenda. This programme is expected to be able to create high-quality education for all Indonesians, characterised by high participation rates at every level of education, quality learning outcomes, and equitable quality of education both geographically and in terms of socio-economic status. One of the indicators related to education development is the dropout rate (Senshaw & Twinomurinzi, 2022). Based on the 2022 education statistics report, it can be seen that the higher the level of education, the higher the dropout rate. In general, 1 in 1,000 people drop out of school at the primary school level. This percentage is smaller than the dropout rate at the junior secondary and senior secondary levels. Out of 1,000 people who attended junior secondary school, 10 dropped out of school. Meanwhile, at the senior high school level, 13 out of 1,000 people who received a senior high school education dropped out. In Sidenreng Rappang district alone, 6,429 children are out of school across 106 villages. When compared to national figures, the number of dropouts in 2021 was around 83.7 thousand people. For South Sulawesi Province, the number of children dropping out of elementary, junior high, high school, and vocational schools was 6,107 in 2021. Looking at the data above, the problem of out-of-school children is still a problem that needs an immediate solution, so that these children do not become a burden in the future.
One of the challenges of regional development is the high number of out-of-school children, caused by various factors such as economic problems, child marriage, or because the child no longer wants to go to school. Starting from this problem, an idea was born to make an effort based on increasing community awareness and involving local government and community leaders in dealing with this out-of-school children (ATS) problem. The innovation of the Special Community Room for Integrated Free Non-Formal Education Services (RUMAH LANGIT) was formed specifically to handle out-of-school children and is closely related to increasing productivity. The purpose of implementing the RUMAH LANGIT programme is to reduce the number of out-of-school children, especially children who are potentially in conflict with the law. The targets of this innovation are all children who do not go to school or have dropped out of school in 4 (four) locus villages, namely Mojong, Otting, Compong and Mattirotasi Villages. Rumah Langit facilitates these out-of-school children to get lessons like students in formal schools, at a flexible place and time, so as not to interfere with their daily activities. The Rumah Langit innovation has been implemented since 2021, and Mattirotasi Village was chosen as the pilot project. The first step was to identify data on out-of-school children through independent data collection by the Rumah Langit team, accompanied by community leaders. At that time, 15 children were found not attending school. After persuasive approaches, 9 children were found willing to join Rumah Langit; this was then followed up by enrolling them in non-formal schools. Even though the children have been enrolled in non-formal schools, they are still being nurtured by "Rumah Langit" and given additional training in skills and the arts. In 2022, the Ministry of Villages, Disadvantaged Regions and Transmigration, together with UNICEF, developed a data collection model called the Out-of-School Children Community-Based Development Information System (SIPBM-ATS). This data collection system is attached to the Ministry of Villages, PDT and Transmigration portal so that districts that need it can access it easily. This data collection model can provide information related to out-of-school children and children at risk of dropping out of school, which serves as primary data to complement existing secondary data and sectoral data belonging to villages and districts. SIPBM contains data, by name and by address, on children who are out of school and those at risk of dropping out of school, as well as the reasons why each child is not in school. The data in SIPBM is then used as a reference in handling out-of-school children through the Rumah Langit innovation, so that independent data collection is no longer carried out.
In addition to involving community leaders, "Rumah Langit" also collaborates with village officials and the Education Office to return children to school. In 2022, three additional villages were added to the locus, namely Otting Village, Compong Village, and Mojong Village. These three villages were chosen as the locus because of the high rate of child marriage caused by local customs and because the potential for children to come into conflict with the law (online fraud and drug abuse) is quite high in these villages, so it is feared that, if not immediately addressed, this will have a wider impact and spread to other areas. After the programme was successfully implemented in four villages in Sidenreng Rappang District, the output of this activity was 7 students who returned to school in Otting Village, 14 students in Mojong Village, 9 students in Mattirotasi Village, and 21 students in Compong Village. This research was conducted to answer the question of whether or not there is an effect of service innovation on the quality of public services perceived after the implementation of the Rumah Langit programme. The purpose of this research is to answer the problems as stated, as well as to measure the success of the implementation of the Rumah Langit innovation. Therefore, the researchers were interested in conducting this research, entitled "Analysis of the Application of Service Innovation to the Quality of Public Services - Case Study: the "Rumah Langit" Innovation in Sidenreng Rappang Regency". The results of this study can be taken into consideration as to whether the Rumah Langit innovation has the potential for re-application in other areas, and can become input for Rumah Langit organisers to provide services that improve the quality of the public services produced. This research was conducted to answer the questions of how the Rumah Langit service innovation is implemented, how the quality of the public services carried out in the Rumah Langit programme is perceived, and how service innovation affects the quality of public services in Sidenreng Rappang Regency, South Sulawesi. METHOD Type of Research The type of research used in this study is quantitative research. According to Creswell and Creswell (2017), quantitative research methods are defined as research methods based on the philosophy of positivism, because they are used to analyse certain populations or samples, using research instruments for data collection, with a quantitative or statistical data analysis process, which aims to test predetermined hypotheses. The quantitative research method is a type of research whose specifications are systematic, well structured and clearly organised from the beginning to the development of the research design.
Population and Sample In this study, the population is the out-of-school community who are the object of the implementation of the "Rumah Langit" service innovation in Sidenreng Rappang Regency, as well as stakeholders and related institutions that are the targets and implementers of the innovation, totalling 166. The sampling technique used in this research is probability sampling. According to Creswell and Clark (2017), "probability sampling is a sampling technique that provides equal opportunities for each member of the population to be selected as a sample member". The type of probability sampling used in this study is simple random sampling. This simple random sampling technique takes sample members from a population randomly, without regard to the strata (levels) in the population. In this study, the researchers used the Slovin technique. The Slovin formula, following Nomleni et al. (2019), used to determine the sample size is n = N / (1 + N e^2), where N is the population size and e is the margin of error. Based on this calculation, the sample of 117 respondents in this study was adjusted to the population and the number of institutions involved in the application of the Rumah Langit service innovation. Data Analysis Technique The data analysis techniques in this study are descriptive statistical analysis and inferential statistical analysis. RESULTS AND DISCUSSION Descriptive statistical analysis is used to analyse data by providing a description or overview of the data that has been collected, without the intention of drawing general conclusions or generalisations. The research instrument, in the form of a questionnaire, consists of 46 statement items using five alternative answers that have been assigned weighted values. The service innovation variable (X) consists of 5 indicators and 21 statements, while the service quality variable (Y) has 5 indicators and 25 statements. Service Innovation (X) Descriptive analysis of the innovation characteristics variable is measured based on 5 indicators, namely relative advantages, compatibility, complexity, trialability and observability, in order to find out the overall answer regarding the application of service innovation at Rumah Langit in Sidenreng Rappang Regency based on the service innovation indicators that have been put forward. The level of achievement of the percentage of innovation characteristics is in the good category (77.1 per cent). Judging from the results of measuring the characteristics of innovation through its indicators, all indicators support the quality of this variable. For each indicator used in measuring service innovation at "Rumah Langit" in Sidenreng Rappang Regency: relative advantages is in the good category with a percentage level of 76.7 percent, compatibility is in the good category with a percentage level of 78.3 percent, complexity is in the good category with a percentage level of 75.2 percent, trialability is in the good category with a percentage level of 76 percent, and observability is in the good category with a percentage level of 78.5 percent. Service Quality (Y) Descriptive analysis of the service quality variable is intended to determine respondents' assessment of each variable indicator, namely tangible, reliability, responsiveness, assurance and empathy, in order to find out the overall answer regarding the quality of public services carried out in the Rumah Langit programme in Sidenreng Rappang Regency based on the indicators that have been put forward.
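For reference, the sampling step described in the Population and Sample subsection above can be reproduced in a few lines; the 5% margin of error is an assumption (it is the conventional value and the one that makes Slovin's formula return 117 for N = 166).

```python
# Slovin's formula check for the reported sample size.
import math

def slovin(N, e=0.05):
    """Slovin's formula: n = N / (1 + N * e**2)."""
    return N / (1 + N * e**2)

print(math.floor(slovin(166)))   # -> 117 respondents, matching the reported sample
```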
The level of achievement for service quality is in the good category (74.4 per cent), and the measurement of each indicator shows that all indicators support the quality of the variable: tangible is in the good category at 75.4 per cent, reliability at 74 per cent, responsiveness at 74.2 per cent, assurance at 72.5 per cent, and empathy at 75.4 per cent.

The normality test produced a significance value of 0.200, which is greater than 0.05, so the data can be concluded to be normally distributed. The results also show that the linearity line slopes upward to the right, indicating a linear and positive relationship between the service innovation variable (X) and service quality (Y). Regarding the effect of service innovation (X) on service quality (Y), the significance value of 0.001 is smaller than 0.05 and the calculated t value of 9.505 is greater than the t table value of 1.658, meaning that variable X has a partially significant effect on variable Y. The significant effect is determined by comparing the sig. value with the reference error rate (5%), while the t table value is obtained from the degrees of freedom, df = n - k, which yields a t table value of 1.658.

Discussion
Based on the results obtained using the five indicators, the application of the Rumah Langit service innovation in Sidenreng Rappang Regency is in the good category. This shows that the implementation of the innovation has succeeded in increasing access to services for the people of Sidenreng Rappang, particularly in the education sector. Service innovation in the education sector in Sidenreng Rappang can help improve the quality of, and access to, education for the local community; in its application it must be tailored to local needs and conditions, and cooperation and coordination between the government, schools and the local community are needed to ensure successful implementation (Carter, 2020; Kristiyanthi & Dharmadiaksa, 2019).

Based on the research carried out, the quality of the Rumah Langit public services in Sidenreng Rappang Regency, seen from the five indicators, is in the good category, and the majority of respondents agreed with the statements that make up these indicators. It can therefore be expected that the public services provided by Rumah Langit will continue to focus on and improve the quality of education and assist the community in gaining access to better education (Carter, 2020; Kristiyanthi & Dharmadiaksa, 2019).
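Returning to the inferential test reported earlier in this section (a normality check followed by a simple regression t test of Y on X at df = n - k), the sketch below shows how such a test can be reproduced with standard statistical tooling. It is illustrative only: the scores are synthetic placeholders because the raw questionnaire data are not available, the Shapiro-Wilk test stands in for the paper's unnamed normality test, and the one-sided critical value of about 1.658 follows the t table value reported in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic placeholder scores for n = 117 respondents (not the study's actual data).
n = 117
innovation = rng.normal(loc=80, scale=8, size=n)             # X: service innovation total score
quality = 15 + 0.7 * innovation + rng.normal(0, 6, size=n)   # Y: service quality total score

# Simple linear regression of Y on X.
slope, intercept, r, p_value, stderr = stats.linregress(innovation, quality)

# Normality check on the residuals (the paper reports sig. = 0.200 > 0.05).
residuals = quality - (intercept + slope * innovation)
print("normality p-value:", round(stats.shapiro(residuals).pvalue, 3))

# Partial significance test: compare the calculated t with the critical t at df = n - k.
t_calculated = slope / stderr
k = 2                                       # estimated parameters: intercept and slope
t_table = stats.t.ppf(1 - 0.05, df=n - k)   # one-sided 5% critical value, about 1.658
print(f"t calculated = {t_calculated:.3f}, t table = {t_table:.3f}, p = {p_value:.4f}")
print("X has a significant effect on Y:", (p_value < 0.05) and (t_calculated > t_table))
```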
Based on the results of the study, service innovation can influence the quality of public services. From the data collected and the testing carried out to determine how much influence service innovation has on the quality of public services, the significance test shows that service innovation has a significant effect: the significance value is smaller than the alpha value, and the calculated t value is greater than the t table value. This fulfils the condition that, partially, the service innovation variable has a significant effect on the public service quality variable.

The discussion above also indicates that the hypothesis formulated by the researchers is consistent with the results obtained; in other words, (H1) there is a significant influence of service innovation on the quality of public services in the study of the "Rumah Langit" service in Sidenreng Rappang Regency.

CONCLUSION
The application of the Rumah Langit service innovation in Sidenreng Rappang Regency is in the good category, with the majority of respondents agreeing with the statements covering the 5 aspects of relative advantage, compatibility, complexity, trialability and observability. The quality of Rumah Langit public services in Sidenreng Rappang Regency is also in the good category, with the majority of respondents agreeing with the statements covering the 5 aspects of tangible, reliability, responsiveness, assurance, and empathy. There is a significant influence of service innovation on the quality of public services at Rumah Langit in Sidenreng Rappang Regency: the significance test shows that the calculated t value is greater than the t table value, meaning that service innovation has an effect on service quality.
4,362.4
2024-05-23T00:00:00.000
[ "Education", "Economics" ]
Glandular Trichomes and Essential Oils Variability in Species of the Genus Phlomis L.: A Review

The genus Phlomis is one of the largest genera in the Lamiaceae family and includes species used since ancient times in traditional medicine, as flavoring for food and as fragrance in cosmetics. The secretory structures (represented by glandular trichomes) as well as the essential oils produced by them constitute the subject of this review. While representatives of this genus are not typically regarded as large producers of essential oils compared to other species of the Lamiaceae family, the components identified in their essential oils and their biological properties necessitate more investigation of this genus. A comprehensive analysis of the specialized literature was conducted for each of the 93 currently accepted species to identify all the results obtained by researchers regarding the secretory structures and essential oils of this genus up to the present time. Glandular trichomes, still insufficiently studied, present morphological peculiarities that differentiate this genus within the family: they are of two categories, capitate (with a wide distribution in this genus) and dendroid. The peltate trichomes, characteristic of many species of this family, are absent. The essential oils from the species of the genus Phlomis have been much more widely studied than the secretory structures. They show considerable variability depending on the species and the environmental conditions.

Introduction
Glandular trichomes, which are found in almost one-third of plant species [1], play an important role in the life of plants by serving as specialized structures for the production and storage of various substances, including essential oils and other secondary metabolites [2]. These substances can act as a defense mechanism against biotic and abiotic stresses [3], thereby helping plants to survive and thrive in their natural habitats. Additionally, glandular trichomes can also attract pollinators [4] and deter pests [5], contributing to the reproductive success and overall fitness of the plant species.

The genus Phlomis (family Lamiaceae) includes 93 species (excluding hybrids and subspecies) accepted today and spread over three continents, Asia, Europe and Africa, in temperate or subtropical climates. This is according to the current data on Plants of the World Online (POWO) [6] (https://powo.science.kew.org; accessed on 18 February 2024), administered by the Royal Botanic Gardens, Kew, UK, and World Flora Online (WFO) [7] (https://about.worldfloraonline.org; accessed on 18 February 2024). These databases were considered in carrying out the investigations in this paper.
From a taxonomic point of view, the genus Phlomis belongs to the family Lamiaceae (Lamiales) [8], the subfamily Lamioideae and the tribe Phlomideae (which includes the genera Phlomis and Phlomoides) [6,9]. The genus is monophyletic [10], including perennial species [11]. The phylogenetic studies carried out in the last two decades led to the inclusion of a significant number of species from the genus Phlomis (which now contains only sub-shrubs or shrubs) in the genus Phlomoides (with herbaceous species); today, they are accepted as distinct genera [12], after numerous controversies that followed their separation by Moench in 1794 [11]. Azizian and Cutler [13] found anatomical and morphological affinities between the species belonging to the genera Phlomis and Eremostachys. But later, most of the species of this genus were included in the genus Phlomoides [11]. Relevant is the example of the species Phlomoides tuberosa Moench, removed by Moench in 1794 [11] from the genus Phlomis, reintroduced by Bunge in 1830 [14] in the genus Phlomis (P. tuberosa L.) and resettled by Mathisen et al. [11] in the genus Phlomoides.

The species of plants from the Lamiaceae family have been used since ancient times by people in medicine, food, hygiene, cosmetics and agriculture due to their secondary metabolites and, primarily, due to their essential oils [15,16]. They are found in large quantities and with varied compositions in most of the species belonging to this family [2]. The essential oils from the Phlomis species have antibacterial activity: for example, the oils of Phlomis lanata [17] and P. salicifolia [18] act especially on the Gram-negative bacteria Escherichia coli and Pseudomonas aeruginosa, the oils of P. fruticosa [19] and P. olivieri [20] showed an antibacterial effect against both Gram-positive and Gram-negative bacteria (Bacillus subtilis and E. coli) and the oil of P. rigida was active against the Gram-positive bacterium Staphylococcus aureus [21]. The antifungal effects of oils from various Phlomis species have also been investigated; those from P. cretica, P. samia [22], P. lanata [17] and P. rigida [21] have been proven to exert antifungal action on some pathogenic species of Candida sp.

Essential oils extracted from certain species of the Phlomis genus have demonstrated potent antioxidant properties. For instance, the oil derived from P. bourgaei [23] and P. pungens var. pungens [24] displayed notable metal chelation activity, while the oil of P. armeniaca exhibited a significant reducing capacity in the presence of ferric and cupric ions [24]. The inhibition of α-amylase enzyme activity by P. nissolii essential oil and of α-glucosidase by P. armeniaca oil [24] can be associated with the use of these species in the treatment of diabetes [25].

Glandular trichomes are considered true "natural biofactories" [2], not only of essential oils but also of other secondary metabolism compounds, their secretion being polymorphic, depending on the species and their structure [26]. Although there are numerous studies on the morphology and histochemistry of glandular trichomes in species of the Lamiaceae family [27][28][29][30][31][32], the number of research works that refer to the Phlomis genus remains quite limited.
According to the data available in the literature, trichomes from the species of the Lamiaceae family are of two major types: peltate and capitate [1,[33][34][35]. Peltate trichomes are formed by a basal cell, a foot cell and a variable number of secretory cells, arranged in a single plane [36]. They are usually specialized in the secretion of essential oils, which they store in the subcuticular space. Capitate trichomes have a basal cell, one or more stalk cells (of variable lengths) and one to four (rarely more) glandular cells [37]. Often, their secretion is mixed, with a variable structure. Sometimes, it can consist exclusively of hydrophilic compounds (as, for example, in some capitate trichomes from Salvia officinalis [28]), but in some cases (especially in species that do not present peltate trichomes), they can mainly secrete essential oils [38]. Besides the biological role of producing essential oils, glandular trichomes [39,40] (along with non-glandular ones) [41] have an important role in the taxonomic delimitation of species and genera from the Lamiaceae family.

The purpose of this review is to provide up-to-date information on the state of research on glandular trichomes and the composition of essential oils from all currently recognized Phlomis species. The research involved querying the Web of Science, Scopus and Google Scholar databases with keywords consisting of the scientific names of the 93 accepted Phlomis species (as well as their synonyms, used in the past), plus "glandular trichomes", "glandular hairs", "secretory hairs" and "essential oil", in order to identify papers that contain information about these aspects. In the case of essential oils, since the chemical composition of the product extracted from the leaves or aerial parts at anthesis is analyzed in most cases, this was considered in the present study; for each case, the first five compounds of the essential oil, in descending order of concentration, were mentioned in the synthetic table.

Glandular Trichomes in the Phlomis Genus
The secretory trichomes of species from the Lamiaceae family have been investigated from morphological, histochemical and ultrastructural points of view [27][28][29][30] in an attempt to understand as precisely as possible the mechanisms of synthesis of the secondary metabolites elaborated by them. But for the Phlomis genus, compared to the total number of species currently accepted, the number of species for which there are data (complete or partial) in the literature remains small (13 species out of 93 in total).

Structure of the Glandular Trichomes
The data available in the literature regarding the morphology of glandular trichomes in the Phlomis species are synthesized in Table 1 and Figure 1. Considering the morphological characters described by various authors [13,39,[42][43][44][45][46][47][48][49], we grouped the capitate trichomes into five categories (C1-C5) and the dendroid trichomes into six categories (D1-D6). There is still variability in relation to these categories; the classification was made to simplify the description. In establishing the subtypes, the number of secretory cells, the relative size of the stalk, the number of component cells and the positioning and density of the branches (in the case of dendroid trichomes) were taken into consideration.

Table 1. Types of glandular trichomes described in some species of the genus Phlomis.
Dendroid glandular trichomes are very rarely described in the literature, but they are not found only in Lamiaceae species; Gangaram et al. [50] describe a similar type of trichome in Barleria albostellata C.B. Clarke, and Ahmad [51] describes them in Dyschoriste vagans (Wight) Kuntze, from the Acanthaceae family, a family related to the Lamiaceae, both belonging to the Lamiales order [52]. The term "dendroid glandular trichome" was used by Nikolakaki and Christodoulakis [44], Yetişen [48], El-Banhawy and Al-Juhani [42] and Gostin [46]. These trichomes were also described as "compound glandular hairs" by Azizian and Cutler [13], "stellate type with glandular arms" by Çalı [49] and "branched stalked" by Giuliani [39]. We chose to use this terminology due to the fact that the non-secretory part of the trichome is clearly dendroid, an accepted term for non-glandular trichomes in the same category [53].

The following types have been described: capitate glandular trichomes - C1: 1 basal cell + 1 stalk cell + 1-2 glandular cells; C2: 1 basal cell + 2-3 stalk cells + 1 glandular cell; C3: 1 basal cell + 1 stalk cell + 4 glandular cells; C4: basal cell + 1-2 stalk cells + 2 glandular cells; C5: uni- or biseriate stalk with 4-5 cells + 2 glandular cells; dendroid glandular trichomes - D1: 4-10 branches; D2: 7-9 branches, long stalk cell; D3: branches inserted at the ...

Peltate glandular trichomes were reported in two species of the genus Phlomis, P. oliveri and P. russeliana. In the literature, peltate trichomes are defined as having a basal cell, a short leg and a secretory head consisting of 4-12 cells that are covered by a common cuticle [54]. In the absence of clear characters that differentiate the peltate trichomes from the capitate ones from a morphological point of view, their classification in one category or another by some authors remains controversial. Werker [33] leaves open the possibility that trichomes with four secretory cells, of different sizes, may be included in the category of capitate trichomes; peltate trichomes (also called "glandular scales") should have secretory cells flattened in a horizontal plane. Muravnik [35] describes the peltate trichomes, characteristic of the Lamiaceae species, as having "a disk-shaped head". The research of Azizian and Cutler [13] on the trichomes of Phlomis lanata and P. chimerae places trichomes with four secretory cells in the "capitate" category, not the "peltate" category. Notably, they also investigated the species P. russeliana and did not mention a "peltate trichomes" category for it.

Referring to these observations, and taking into account that the "peltate trichomes" category is not described in the other Phlomis species investigated for the presence of secretory trichomes, we consider that future investigations (micromorphological, anatomical and histochemical) are necessary to clarify the presence or absence of peltate trichomes in P. oliveri and P. russeliana.

One of the problems encountered in the inventory of the categories of glandular trichomes was the illustrations in some analyzed papers (which were insufficient or of poor quality); this did not always allow an objective evaluation of the type of trichome described by the authors, making it necessary to rely on the authors' descriptions. Without sufficient photo documentation, these data must be considered cautiously, as a reevaluation of the species is necessary from this point of view.
Secretion of Glandular Trichomes
Although they are the most widespread within the Lamiaceae family and responsible for the production of significant amounts of essential oils, peltate trichomes are absent in species of the Phlomis genus. Their role in the secretion of volatile oils has long been considered major within the family, some authors [33] considering species lacking peltate trichomes as not belonging to the category of aromatic plant species (for example, Prasium majus L.). However, progress in research on the histochemistry of glandular trichomes has indicated the presence of secretion consisting of essential oils in capitate and dendroid trichomes [39,46].

The genus Phlomis, along with other genera like Sideritis, Lycopus, Micromeria [55], Marrubium and Ballota [40], is not known for being among the most abundant producers of essential oils within the Lamiaceae family. However, despite their relatively lower quantity of essential oils compared to some other Lamiaceae species, the essential oils derived from these genera are recognized for their valuable bioactive compounds. Phlomis species also present other secondary metabolites (besides the essential oil components) for which these species are utilized for medicinal purposes, including iridoid glucosides, flavonoids, phenylethanoid glycosides [25,56], phenylpropanoids and phenolic acids [57]. These compounds have been utilized in traditional medicine practices for centuries, indicating their historical significance and therapeutic potential [58,59]. Despite their long history of use, the full range of their medicinal properties and applications has yet to be fully explored and exploited in modern medicine and pharmacology.

The classification made by Werker [33], depending on the timing of secretion, which groups the secretory trichomes into short-term glandular hairs (capitate) and long-term glandular hairs (peltate), cannot be applied to the species of the genus Phlomis. Since these species are devoid of peltate trichomes, the synthesis of essential oils is localized in the capitate and dendroid trichomes, which show continuous activity also on mature leaves [38,44,48] (where they are found also in the secretory phase and not only the post-secretory one).

Unlike the peltate trichomes, the capitate ones present much more varied secretion products. In Phlomis herba-venti, in C2-type capitate trichomes, visible drops of essential oils were identified (by staining with the NADI reagent and Sudan III) [46], while C1-type trichomes have a mixed secretion containing, besides lipids, phenolic compounds and polysaccharides (identified by staining with toluidine blue and Ruthenium red with the PAS reagent) [46]. The capitate trichomes of type C1 from P. fruticosa show only hydrophilic secretion (polysaccharides and mucopolysaccharides), being positive when stained with Ruthenium red and Alcian blue [39]. At the same time, C4-type trichomes accumulate terpenes, polyphenols and flavonoids, showing strong positive reactions to Fluoral Yellow-088, the NADI reagent and aluminum trichloride. A similar reaction was shown by the dendroid trichomes (D4 type) described in this species [39].
Positive reactions for terpene and phenolic compounds were also observed in type C1 and C4 trichomes from the Phlomis fruticosa species [44]. The essential oil accumulates as droplets in the space between the cuticle and the external wall of the glandular cells [44]; this space stores phytotoxic compounds, serving as a primary defense mechanism at the plant's surface [60]. The subcuticular space was observed primarily in capitate trichomes, while the extrusion of secretion products in dendroid trichomes typically occurs through the cell wall and the cuticle, into the external environment [46].

Dendroid trichomes also present mixed secretions, with lower amounts of essential oils than capitate ones; in Phlomis herba-venti, the glandular cells of these trichomes showed positive reactions to phenolic compounds, sesquiterpenes, polysaccharides and lipids [46]. Neck cells were observed in all dendroid glandular trichomes, often recording the same positive histochemical reactions as the secretory cells or secretory products [44]. They represent a special structure, with properties different from those of ordinary stalk cells, compared to which they are considerably shorter. The neck cells are involved in the secretion process, and there is communication with the neighboring cells, as shown by the ultrastructural studies performed on the capitate trichomes from Stachys heraclea All. [61]. Their lower transverse walls are cutinized [35], the structure being similar to Casparian strips from the root or stem. In this way, the flow of substances is controlled (especially secondary metabolites secreted by the glandular cell + neck cell complex), preventing reflux towards the rest of the stalk cells [46] and the dissipation of the active substances in the trichome body. This structural peculiarity was also observed in secretory trichomes from other species of Lamiaceae [62].

The branch cells of dendroid trichomes are alive at their full development; in confocal microscopy observations, viable chloroplasts were observed in all these cells in P. herba-venti [46]. Non-glandular trichomes from Lamiaceae do not represent inert structures with only a protective role against physical factors, but produce various categories of substances (proteins, lipids, terpenes, alkaloids, phenolic compounds and polysaccharides) that modulate the interactions between plants and other species in the ecosystem. They complement the active protective role that glandular trichomes have for the plant [41].

The main role of glandular trichomes is to protect plants against herbivores: as a result, species that present trichomes (both glandular and protective) are less consumed by herbivores or attacked by parasites, a fact observed by researchers in individuals of the same species that present polymorphism for trichome production [63]. Moreover, Phlomis species are known to be little consumed by herbivores or attacked by phytophagous insects [64]. Second, trichomes located in the reproductive sphere (on sepals, petals or even ovaries) can play a role in attracting pollinators through the volatile substances they release [65]; among the components found in the essential oil of Phlomis species, 1,8-cineole, linalool and (E,E)-α-farnesene have been proven to be attractive to various species of pollinating insects (Hymenoptera) [65], and bicyclogermacrene to some Diptera species [66].
Essential Oils from Phlomis Species
Species belonging to the Phlomis genus generally produce lower amounts of essential oils than other species from the Lamiaceae family [9]. However, the diversity of components with valuable therapeutic properties or with the potential to be used in agriculture and industry makes a more careful evaluation of them necessary. Biological activity was observed both at the level of the essential oils as a whole and in the case of their components investigated separately [16]; many times, their effect was manifested synergistically [5].

Table 2 presents the existing data in the specialized literature regarding the composition of essential oils from all 93 recognized species. Because the geographical origin of the analyzed species is important, the area to which they are native and the predominant biome were indicated for each species (cf. POWO) [6]. Among the 93 species, complete or partial information on the composition of the essential oils was found for 48 (51.61%). The identification of the components of the essential oils was achieved by gas chromatography coupled to mass spectrometry (GC-MS) techniques. The lack of information for the other species leaves open a significant area of research in this field to cover the "white spots". The increased variability of the composition of essential oils is well known not only in the representatives of the Lamiaceae family [16] but also in those of other botanical families such as Asteraceae [122] and Lauraceae [123]. This fact is due both to genotypic variations (between individuals of the same species that belong to different populations) and to environmental conditions or agrotechnical factors in the case of cultivated species [124,125]. Also, in the Phlomis genus, there is a high, well-known variability between the composition of the essential oils produced by the glandular trichomes on the leaves compared to those in the floral sphere [68,77], as well as between the oils produced in different stages of the ontogenetic development of plants [105].

The presence of different compounds in essential oils is part of a wider register of the modulation of interrelationships in ecosystems, between plants (immobile organisms) and various animal species (especially insects); mobile individuals have the role of performing various "services" for those in the first category, or, on the contrary, immobile plants must defend themselves against them, using biochemical signals because physical movement is impossible.

Analyzing the main compounds of the essential oils from 25 species of the genus Phlomis (of which 4 were reclassified in the genus Phlomoides: P. younghusbandii, P. szechuanensis, P. megalantha and P. umbrosa), Amor [56] classifies them into four chemotypes: (1) containing predominantly sesquiterpenes, (2) containing both monoterpenes and sesquiterpenes, (3) containing fatty acids, aliphatic compounds and alcohol and (4) containing terpenes, fatty acids, aliphatic compounds and alcohol. Chemotype 3 includes only three species of those currently found in the genus Phlomoides (P. younghusbandii, P. szechuanensis and P. umbrosa) [6].
Sesquiterpenes from Essential Oils
Among the components of the essential oils extracted from the leaves of species belonging to the genus Phlomis, sesquiterpenes comprise the largest share (Figure 2). Germacrene-D is one of the main components found in 31 out of the 47 species for which data are available in the literature, while β-caryophyllene is present in 25 species. β-caryophyllene and its oxidized form, caryophyllene oxide, are present among the first five components of essential oils from the majority of Phlomis species, with the exception of P. aurea [74], P. brachyodon [76], P. bucharica [81], P. cashmeriana [85], P. lurestanica [104], P. monocephala [67], P. platystegia [76], P. salicifolia [81] and P. thapsoides [120]. We notice that in six species of Phlomis, both germacrene-D and β-caryophyllene are missing from the main components of the essential oils. In these, the chemotypes are mainly based on monoterpenes and fatty acids.

Some of the main constituents identified in the essential oil from various species of Phlomis, germacrene D and β-caryophyllene, are substances with a well-known deterrent role, protecting plants against herbivores [126]. β-farnesene is the main constituent of the essential oil of Phlomis elliptica (28.9%) [89] and P. samia (20.7%) [22], being among the first 5 constituents of 15 other species of Phlomis. (E)-β-farnesene has an interesting biological role, being an alarm pheromone for insects from the Aphididae family [127]. It is emitted by aphids when they are attacked by enemies to warn individuals from the same group [128]. For this reason, (E)-β-farnesene acts as a repellent against these harmful insects, which avoid plants whose oil contains this compound. However, the repellent effect does not manifest equally against all insect species: Mumm and Hilker [129] showed that (E)-β-farnesene has an attractive effect on the wasp Chrysonotomyia ruforum Krausse (Hymenoptera, Eulophidae), an oophagous parasitoid of Diprion pini L. (Hymenoptera, Diprionidae).

Monoterpenes from Essential Oils
Monoterpenes (Figure 3) are found less often and in smaller quantities in essential oils from the Phlomis species; however, the oils from some species proved to be richer in monoterpenes than in sesquiterpenes. Monoterpenes and their derivatives give flavor and aroma to the essential oils in which they are found [130]. Linalool is part of the group of acyclic monoterpenoids and represents an important component of essential oils for its pharmacological effects. Research has highlighted its antidepressant [131], immunomodulatory and antimicrobial roles. It was indicated as the main component in the essential oil of Phlomis leucophracta (36.4%) [98], being also found in important quantities in the oils of P. fruticosa (8.0%) [22], P. nissolii (11.3%) [24], P. cretica (7.5%) [22] and P. platystegia (7.72%) [76]. Limonene is the main component of Phlomis leucophracta oil (14.56-27.86%); refs. [70,99] observed this in two other populations distinct from the one investigated by Sarikurkcu et al. [98]. The essential oil from P.
leucophracta possesses very strong antioxidant activity, similar to that of ascorbic acid, which denotes the increased and still unexploited potential of these species for use in the pharmaceutical and food industries [98,132].

1,8-Cineole (also known as eucalyptol) is frequently found in the oil from different species of Lamiaceae: in populations of Lavandula angustifolia from Brazil and L. x intermedia from Iran or Mexico, this compound makes up between 31.6% and 47.94% of the essential oil composition [133]. In the Phlomis species, it is found in larger quantities in P. bucharica (13.69%) [81] and in P. regelii (15.9%) [115], both species being part of the chemotype in which monoterpenes predominate. 1,8-Cineole has a strong anti-inflammatory and antioxidant effect [134], as well as an insecticidal effect [135].

Thymol and its isomer, carvacrol [136], have phytotoxic, cytotoxic and genotoxic properties, and can be used as selective bioherbicides [137]; they also have important antibacterial effects [138], being recommended even in the case of bacteria resistant to classic antibiotics.

Recent studies are increasingly highlighting the anticancer action of some monoterpenes; among those found in the composition of the oil of the Phlomis species, linalool shows cytotoxic, apoptotic and antiproliferative properties on breast cancer cells [139], α-pinene induces apoptosis in vitro in the human gastric adenocarcinoma cell line (AGS) [140] and limonene acts on receptors involved in the chemoresistance of cancer cells [141].

Methyl palmitate (the main component of Phlomis salicifolia oil) [81] has an effect similar to that of brood pheromone in honeybees [143]; these pheromones are produced by larvae and trigger feeding instincts in nurse insects, including by increasing the amount of pollen collected from various species. This species, endemic to Central Asia, grows in a semi-arid habitat [81], where the pollinating insects are few in number, and the plant species have to make considerable "efforts" to attract them. Hexadecane (8.97%) is the main component of the essential oil from Phlomis lurestanica, an endemic species from the mountainous areas of Iran [144].

Conclusions
This review of the specialized literature aimed to identify the results of the research conducted so far on the secretory structures and volatile oils from species of the Phlomis genus; it has highlighted the fact that the level of knowledge is still insufficient. While, in terms of the chemical composition of essential oils, 51.61% of the taxonomically accepted species have had their component elements described (even partially), the knowledge regarding glandular trichomes is limited to only 13 species (13.97%).

Although there is a substantial amount of information available regarding the essential oils from species of the genus Phlomis, future studies are needed to fully understand their composition. There are still 45 species whose essential oils remain completely unknown, and they may represent a potential source of biologically active compounds.
The genus Phlomis is unique among the genera of the Lamiaceae family because of the presence of a rare type of glandular trichomes, namely, dendroid glandular trichomes. They have from one to four secretory cells arranged on a stalk morphologically similar to that of the non-glandular trichomes, with which they coexist in the indumentum on the vegetative or reproductive organs. However, the current data on their morphology and structure are still very limited. Based on the data available so far, peltate trichomes are absent in species of the genus Phlomis. Investigations regarding the histochemistry of glandular trichomes (carried out in order to locate secretion products) are rare, and those regarding their ultrastructure are completely missing. Considering that trichomes, both glandular and non-glandular, serve as taxonomically significant traits for plants, it is imperative to conduct further investigations into species within the Phlomis genus in order to help clarify some classification and phylogenetic problems that exist in this taxon.

Figure 1. Types of glandular trichomes described in some species of the genus Phlomis: bc - base cell, stc - stalk cell, gc - glandular cell, br - branch, nc - neck cell.
Figure 2. The main sesquiterpenes in the composition of essential oils of species of the Phlomis genus.
Figure 3. The main monoterpenes in the composition of essential oils of species of the Phlomis genus.
Table 2. The chemical composition of essential oils from species of the Phlomis genus (a synthesis). The first five components of the essential oil extracted from the leaves, in descending order of concentration (where available), were considered. (A dash '-' in the table means that there is no information available about these species; for Phlomis fruticosa L., collected from Bar, Montenegro: locality A is exposed to the sun and locality B is in the forest.)
6,539.4
2024-05-01T00:00:00.000
[ "Biology", "Environmental Science", "Chemistry" ]
What is organizational inequality? Why is it increasing as macroeconomic inequality increases? Inequality has been increasing for decades in both rich and developing countries and the academic literature addressing it struggles to provide explanations, let alone solutions. This article is concerned with a relatively underexplored area, the relationship between macro-level inequality and organizational inequality. The core focus of the article is the recognition that the two phenomena are closely bound up one with the other. This is made possible by adopting Rousseau's notion of inequality as hierarchy and willingness to accept subordination to authority and disparity of treatment. In doing so, we highlight similarities and dissimilarities between Rousseau and Marx. Inequality remains an issue of hierarchy at both the macro and organizational levels. As it was for Rousseau, so it is today, but it is much more layered than in Rousseau's day: inequality in society is the accepted degree of hierarchy among its members; inequality in the economy and at work is the extent to which, accepted or not, there is an imbalance of power, financial resources, remuneration of work and access to opportunities and services. The increase in inequality is due to a radical change in the socio-economic model of advanced economies. This change involves a shift towards financialization, pressure on labour through flexibility, the decline of trade unions' power and the retrenchment of public social spending.

Introduction and objectives
Inequality has been increasing for decades in both rich and developing countries and the academic literature addressing it struggles to provide explanations, let alone solutions. This article is concerned with a relatively underexplored area, the relationship between macro-level inequality and organizational inequality. The core focus of the article is the demonstration that the two phenomena are closely bound up one with the other, so, concomitantly, confirming that the ties between them urgently need to be adequately studied. Macro-level inequality receives much attention from politicians and from economists; for the latter, in particular, inequality has long been a central focus of research. One motive for this sustained interest in inequality among economists and politicians is their awareness of the very high social costs that result from inequality. While we can readily find definitions, conceptualizations and measurements of inequality in society, this is far from the case with regard to inequality in organizations. With the exception of the issues of race and gender equality at work (Piasna & Drahokoupil 2017) and the pay gap between executives and workers (Kliman 2015), organizational inequality has been the subject of little research or debate. Management and organizational literature has included studies of inequality at work but few provide longitudinal or comparative examinations on a national let alone international level. Furthermore, specific examples of inequality (be that gender, race, executives' compensation) are studied separately without comprehensive analysis. Joint analyses combining the macro and the organizational level are particularly rare (Amis et al. 2018; Radoynovska 2018). While tools exist for the measurement of inequality and its social costs at the macro level, much less is known about how macro inequality affects organizations and how organizations contribute to macro inequality (Edwards 2015; Mair et al. 2016; Riaz 2015). An article (Dunne et al.
2017), discussing Piketty's influential work, argues that management and organizational scholars should 'contribute to the cross-disciplinary inequality research project which Capital in the 21st Century proposes . . .'. To overcome the limitations of Piketty's work should be the aim of management and organizational scholars. But not many of them engage with inequality and very few answered the invitation for collaborators issued by Piketty himself (Dunne et al. 2017). This article provides one primary contribution: the development of knowledge and understanding of the phenomenon of inequality. Through a critical review of the literature, including previous empirical work by one of the authors, the causes of inequality are identified, and it is shown that the same patterns of rising inequality occurred in the past three decades at macro and organizational level and for the same reasons. This is made possible by adopting Rousseau's notion of inequality as hierarchy and willingness to accept subordination to authority and disparity of treatment. In doing so, we highlight similarities and dissimilarities between Rousseau and Marx. Critical to this contribution to theory development is analysis at both the macro and micro level, made possible by the collaborative work of an economist and an organizational scholar. This type of collaboration is uncommon yet demonstrates the fruitful benefits resulting from productive dialogue between economics and management literatures in the consideration of core themes, terms and concepts. This inevitably requires a synthesis of previously unconnected sources, a form of selective literature review, to establish the conceptual and theoretical foundations for the process of theory-building and development. Accordingly, we will define inequality in section 'Inequality at work and in society', its rise in modern societies in section 'The rise of inequality in contemporary societies', and the relationship between this rise and financial capitalism (section 'Financial capitalism and inequality'), labour-market deregulation (section 'Labour market reform and inequality') and the retrenchment of the welfare state (section 'Welfare and inequality').

Inequality at work and in society
We chose to adopt Rousseau in this article for three reasons. He was the first to devote substantial contributions to the topic of inequality. We can draw numerous links between his work and that of Karl Marx. His notion of inequality is functional to our effort to discuss the relationship between macro and organizational inequality. Hierarchy at work and hierarchy in society are different representations of the same phenomenon: inequality. The 18th-century French philosopher, Rousseau, offers a provocative perspective on inequality that explains why we accept unequal conditions at work and in society, why we accept institutional inequality. In 1755, he argued that equality (lack of hierarchy) was the natural condition of mankind, which would subsequently, with the rise of societies, be sacrificed to the acceptance of a social contract. Rousseau's (1755, 1886) notion of moral and political inequality was not merely concerned with wealth or income but also with abuse of power, prevarications against the weakest and quality of life. About a century later, Karl Marx built his theoretical system on previous contributions, including the work by Rousseau (Engle 2008; Shi 2015). Marx extends the notion of alienation discussed by Rousseau.
There are important connections and similarities, but there are also key differences between the two: the idea of class struggle is almost entirely absent in Rousseau, as is the use of dialectics; in Marx, we do not find the notion of the state of nature and the social contract. Progress and modernity were at the centre of Rousseau's reflection on inequality. Both Rousseau and Marx proposed a radically innovative vision of society, believed in the cyclicity of progress and inspired revolutions. Both Marx and Rousseau were critical of the effects of modernity. Marx developed and extended the notion of alienation, originally introduced by Rousseau. They both discussed the origin of property and the family as the origin of the State. They both refer to a sort of general interest: 'The famous formula of Rousseau of the general will finds its echo in Marx: It is only in the name of the general rights of society that a class society can claim general supremacy' (Engle 2008: 5). They both adopt a historical method, and both are concerned about inequality and attribute its origin to the 'invention' of property, although the role of property is different between Rousseau and Marx. They both focus on inequality and fear the dominance of individual over community interest, but they offered dissimilar solutions because they identified different driving causes of inequality. Marx saw in economics (the base) the origin of inequality; hence, he argued for the eradication of the economic class system. Rousseau instead believed that inequality was introduced in the state of nature by political institutions (the superstructure); hence, he argued for individualism to be replaced by the notion of the 'general will' as the key political principle. For Rousseau, the establishment of society and government meant the acceptance of inequality (also in the form of hierarchy) as well as its increase. At the macro level, inequality is the consequence of the end of primitive communities and the growth of increasingly large and complex forms of social organization and administration. Similarly, we can argue that the rise of organizational inequality is associated with the appearance of modern and complex forms of organization of labour and production. Industrial endeavours could not be achieved through an informal communitarian organization of work nor under a family or domestic regime. In both cases, inequality arose in the form of hierarchy, subordination at work, imbalance of economic and financial power, concentration of knowledge and information, and monopolies. Capitalism was inaugurated with the dispossession from prospective workers of their independent means of sustenance (Marx 1991 [1867]) and its development is contemporary with that of the nation state (Pozo 2007). The arrival of politics (with the empowerment of a political class and delegation of power) is associated with the acceptance of macro inequality. The arrival of management can be associated with the acceptance of organizational inequality. The rules of government and management, to which mankind surrendered at the expense of equality, emerged at different historical points. First came the authority of government in the form of politics and economics. Much later, and most intensely, during the industrial revolution, the authority of management and organization was established as an unavoidable condition of modernization, development and progress.
Although politics and management can be seen as the cause and justification of inequality, they also potentially represent the solution to inequality. Both can implement policies that prevent inequality from increasing or can work towards equality, though equality today is not conceived in a form in any way close to Rousseau's natural condition of primitive communities. Contemporary justifications of inequality (be it executives' pay or the income gap in developed countries or asymmetry of power at work) are based on the complex nature of businesses and of societies, which supposedly would not function in an environment based on principles of communitarian life, direct democracy, equality and self-organization. Both macro inequality and organizational inequality are rooted in the acceptance of a social contract of the kind described by Rousseau (1772). His approach is particularly helpful because it explains why people and workers believe that the acceptance of a degree of inequality is required by their membership of advanced societies (Pozo 2007) and complex organizations. About a century after Rousseau, his intuition was further formalized in sociology with the transitional processes and characteristics of emerging industrial societies, for instance by Durkheim (with his reflection on shifting forms and contours of solidarity) and by Tönnies (with his classification of Gemeinschaft and Gesellschaft). Managerial styles are themselves a good representation of what is considered an acceptable level of inequality. Literature (Hofstede 1980) on power distance and inequality shows that, for instance, Nordic countries accept less inequality than other Organisation for Economic Co-operation and Development (OECD) countries both at the social level (less discrimination against women, less income inequality, less power distance between professions and classes) and at work (smaller salary differentials, less distance between managers and employees, fewer hierarchy levels). Several scholars have tried to explain the tolerance of inequality. At the macro level, Marx (1991 [1867]), Engels (1987) and Thompson (1963) argued that the lack of class-consciousness and organization among the workers impedes a collective reaction to exploitation and inequality. Still looking at the macro level but with examples of organizational practices, Braverman (1974) argued that modern capitalism has developed mechanisms of co-optation and control of the working class that end with them accepting, if not supporting, capitalist society and modes of production. Fox (1985) described in detail how the working class engages with collective processes of resistance and negotiation (at both macro and micro level) in the attempt to get the best possible deal from employers but fails to do so. Burawoy (1979) argued that workers are under subtle hegemonic regimes, by which they are not only controlled but fully embedded in capitalist logics against which they no longer react. Sennett (1998) illustrated in detail how deskilling (a form of inequality in itself) makes us all more fragile and unarmed against capitalism. Cooke and Kothary (2001) argued that even the most progressive employee relations and management styles are actually not improving but worsening the condition of inequality at work. While Burawoy's dynamics were predominantly macro, those described by Sennett, Cooke and Kothary can be observed at either macro or organizational level.
Finally, contemporary experiments and surveys (World Bank 2015) have measured the gap between perceived inequality and actual inequality. People tend to underestimate the level of inequality because they usually overestimate their position in the income distribution. In addition, people are over-optimistic about levels of upward social mobility. Such biases encourage patience, even if they do not justify inequality. In short, organizational culture, psychological bias or institutional work may contribute to the justification and perpetuation of growing levels of inequality in organizations. Or, in other words, national culture has justified the surrender to government, politics and management, and consequently to inequality. As Rousseau (1755, 1772) noted with reference to private property, man became inured to the idea as soon as the first person enclosed a plot of land and said, 'this is mine'. Similarly, any given degree of inequality can become generally accepted through processes of institutionalization and subtle domination at either national or organizational level.

The rise of inequality in contemporary societies
Over the last three decades at least, income inequality within developed countries has increased markedly (Jaumotte & Osorio Buitron 2015). The richest 10% of the population in the OECD countries earns about 10 times the income of the poorest 10%. This ratio was 'only' about seven in the late 1980s (OECD 2014). At the same time, the Gini coefficient averaged across OECD countries increased from about 0.27 to 0.33 (OECD 2014). In a way, this contradicts the famous Kuznets (1955) curve, according to which inequality increases in the initial phase of the development process, and then decreases as economies become richer. Piketty (2014) rejects the idea of the bell curve. What he proposes is a horizontal 'S' curve: inequality increases again when countries reach an advanced stage of development, unless specific counteracting policies are implemented. Other explanations for inequalities have been put forward by Van Reenen (2011), who found support for trade-induced technological change associated with inequality. Chusseau and Dumont (2012) show that globalization, skill-biased technological change and changes in labour-market institutions that have weakened the welfare state explain a substantial portion of the increase in inequality in a group of 12 developed countries. Atkinson et al. (2011) instead point out the decreases in the progressivity of taxation systems, particularly at the top of the distribution, as main drivers of inequality. Similarly, Facundo et al. (2013) argue that reductions in the top marginal income tax rate are the most important factor explaining inequality. From the late 1970s, political changes created the basis (Wrenn 2014) for a new paradigm of political economy, first in the United States and in the United Kingdom with the Reagan and Thatcher administrations, and later in other advanced and emerging economies. The rise of neoliberalism gave way to financial deregulation and contributed to both the expansion of capital globally, in search of higher profits, and the intensification of finance as a prominent aspect of the economy. Finance became a prominent organizational function as corporations gave priority to financial investments rather than to core business investments.
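The Gini coefficient cited above (rising from roughly 0.27 to 0.33 across OECD countries) is a summary measure of income dispersion. A minimal sketch of how it can be computed from an income vector is shown below; the income figures are invented for illustration and are not OECD data.

```python
import numpy as np

def gini(incomes) -> float:
    """Gini coefficient via the mean absolute difference: G = sum|x_i - x_j| / (2 * n^2 * mean)."""
    x = np.asarray(incomes, dtype=float)
    mean_abs_diff = np.abs(x[:, None] - x[None, :]).mean()
    return mean_abs_diff / (2 * x.mean())

# Invented income distributions (not OECD data): the second is visibly more unequal.
fairly_equal = [28, 30, 32, 35, 38, 40, 42, 45, 50, 60]
more_unequal = [15, 18, 20, 22, 25, 30, 40, 60, 90, 180]
print(round(gini(fairly_equal), 2), round(gini(more_unequal), 2))  # lower vs higher inequality
```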
Arrighi (2010) found that the United Kingdom first and the United States later shifted from manufacturing to finance at the end of a long cycle after which profits, and therefore the remuneration of capital in the secondary sector, became lower than financial profits. Since then, finance has played an increased role at the organizational level (Davis 2009; Fligstein 1987; Thompson 2013). In the following three sections, looking at finance, labour-market regulation and welfare, we provide explanations of the increased levels of inequality at both macro and organizational level.

Financial capitalism and inequality
In the past decades, western economies have experienced a phenomenon called financialization: 'a contemporary drive to subordinate and reconstitute all forms of economic activity - consumption as well as production - in relation to its financial relevance and significance' (Willmott 2010). Most definitions of financialization agree on the identification of the financialization process as a phenomenon where there is a growing dominance of capital financial systems over bank-based financial systems (Krippner 2005); or, more broadly, where there is an increasing role of financial motives, financial markets, financial actors, and financial institutions in the operation of domestic and international economies (Epstein 2005: 3-4).

Finance, at macro level
The combined effect of globalized markets and financialization has shaped what is defined as 'financial capitalism'. This new paradigm is characterized by a strong dependence on the financial sector, by the globalization and intensification of international trade and capital mobility, and by the 'flexibilization' of the labour market (Epstein 2005; ILO 2013). Many other economies followed the American example of a finance-led regime of accumulation, which used other institutional forms such as flexible labour and compressed wages in order to increase firms' competitiveness (Tridico 2012; Shelkova 2015). Lin and Tomaskovic-Devey (2011) argue that the increasing reliance by firms on earnings gained through financial channels has strengthened owners' and elite workers' negotiating power relative to other workers. This results in the exclusion of most workers from revenues, therefore generating an increase in inequality. Fordism was based on an implicit pact. Productivity gains resulting from the implementation of Taylorism would have meant mass consumption and growth. It worked well for a long time. Since the 1980s, post-Fordist financial capitalism appears to have been based on a similar pact. The gains in productivity resulting from labour flexibility should have meant more profits for corporations and financial gains for shareholders (dividends, capital gains, private pension funds). In turn, this would sustain demand and growth (Durand & Légé 2013). The working class and middle class were now among the shareholders and might have benefitted, but the pact does not seem to have worked (Erturk et al. 2007b; Strange 1997). The cyclical financial crises damaged the savings of shareholders (often all but the rich ones). Aggregate demand has been weak and so has growth in mature economies. In short, financialization has contributed to the increase of income inequality (Roberts & Kwon 2017). But finance did not just gain an ideological foothold in the economy: finance became crucial in our organizations and so directly affected the consciousness and practices of workers.
Finance, at organizational level While financialization then was taking place at the macro level, something similar was occurring at the organizational level. Financialization is also an organizational phenomenon (Thompson 2013) as in the last three decades the finance function has become more important for careers and opportunities (Fligstein 1987), capturing resources and strategic priority from core business operations and other staff functions previously more prominent, like personnel. The emphasis on finance rather than work can be observed even in the academic sphere. Work has almost disappeared as a research theme in management studies (Fleming & Mandarini 2009). What used to be sociology of work has been rebranded organizational behaviour, and what used to be industrial relations has become humanresource management (Bernardi, 2020). Finance has become a very prominent discipline in management education and finance professors have begun enjoying salaries higher, on average, than those of their colleagues from other disciplines (AACSB 2018). There are three main reasons why finance has become so important at the organizational level. First, in the business sector finance has become more important for every organization. A firm's financial performance has become a key indicator of success, closely and frequently scrutinized by markets, to be achieved at every cost, even at that of sacrificing long-term plans and investments on core business. Hence, finance has become a key organizational unit and an important skill for successful careers in every sector. Second, during decades of low growth and international competition in core business markets, financial investments have become a remunerative diversification strategy, if not a new de facto core business in many cases (Peet 2011). A bigger share of the economy has moved from manufacture to finance and real estate. It has been argued that entire nations, such as the United Kingdom, explicitly abandoned industries in the primary and secondary sectors to specialize in finance (Arrighi 2010). Among non-financial corporations, it has been observed that increased financialization has caused a reduction in workers' bargaining power and employment protection (Alvarez 2015;Darcillon 2015). Third, financial goals (market capitalization, return of investment (ROI), earnings per share (EPS)), rather than product or market goals (productivity, innovation, market share), became key performance indicators for managers and executives. If these goals were achieved, they were rewarded with financial tools (options, pension funds, shares, profitshares and financial derivatives resulting from mergers and acquisitions). This practice allowed the payment of colossal bonuses in virtually every industry. This latest consideration has been the subject of public indignation since the 2008 financial crisis as the bonus culture has persisted even in banks that were in the red or had been bailed out by the state. The combination of those three reasons is clearly linked with the growing inequality at the macro and micro level. The US super-CEO pay culture has been exported through the United Kingdom to continental Europe, where executives now enjoy similar pay rates (Erturk et al. 2005). In UK Financial Times Stock Exchange Group (FTSE) 100 companies, the ratio between chief executive and average employee pay was nine in 1978-1979 and rose to 54 in 2002/2003. 
In US Business Week 350 companies, the ratio was 50 in 1980, increasing up to 525 in 2000 and stabilizing at 281:1 in 2002 (Erturk et al. 2005). Despite being the object of considerable attention from protest movements and the subject of regulation in the banking sector (capital requirements regulation and directive -CRR/CRD IV) by the European Commission, the executive bonus culture has withstood attacks made on it (Kliman 2015). At this point, it may be argued that finance is more powerful than democratic processes and institutions (Walby 2013), and that governments are integral components of financial capitalism (Peet 2011). Daguerre (2014) studied the prominent role of corporate elites, notably financial elites, in informing and shaping the political debate that started at the beginning of the 2008 crisis. The increased role of corporate finance and the unequal destination of the 'value' 'produced' by successful managers and entrepreneurs have been justified by a new discourse. The pre-1940 critique of the rentier (value capture) gave way to the post-1980 discursive construction of the shareholder (value creation): Thus, in the 1990s in the U.K. and U.S.A. we have 'value for money' and 'best value' in political discourse about public policy, just as we have 'shareholder value' or 'value based management' in management discourse. Value was everywhere partly because it was used in a kind of social rhetoric about happy endings which many desired and few could question or dispute. (Erturk et al. 2007a: 72) According to this dominant discourse, if executives are creating value rather than capturing rents, this should not cause outrage among the working class, especially now that they are made actors in the system by being given some form of financial involvement. In short, finance has not only favoured scandalous compensation packages for executives but also a lack of connection between company results, stock value and bonuses. Finance has contributed to growing inequality as a cause but also as a source and justification of inequalities. In current times, the complexity and the dimension of finance have often been used as a reason for the surrender of authority to governments or markets. In other words, it has been used as a reason to adhere to a social contract drafted by 'Finance Personnel' (Fligstein 1987). Labour market reform and inequality In the age of financial capitalism, labour-capital relations are changing and, in most cases, labour represents the weaker side. Trade unions have lost their power as a result of labour-law reforms and other institutional changes (Crouch 2015), not least because of lower levels of social conflict in advanced nations compared to previous decades. Some technological transformations in manufacturing and in services have decreased trade unions' ability to effectively represent workers and increase their membership (Acemoglu et al. 2001;Pulignano & Stewart 2006). Not surprisingly, the number of strikes has declined too (Godard 2011). The newest modes of production have been shaped by new regulations in firing, hiring, unemployment benefits, minimum wage and so on. In most Western countries, non-standard work contracts and temporary jobs have become widespread, contributing to precariousness and instability among workers (Adams & Deakin 2014; Rodgers and Rodgers 1989). 
Labour-market reforms have been introduced across the world, in varying degrees, supposedly to improve competitiveness as part of the globalization process, which most policymakers and governments believed would boost their national economy. Labour flexibility has increased almost everywhere in Europe and in advanced economies over the last 20 years (Crouch 2015;OECD 2013). Individualized employment relations at organizational level Globalization has been used in managerial rhetoric to justify new corporate strategies (delocalization, outsourcing, flexibilization of work), while the field of industrial relations, as an academic discipline and as business practice, has lost ground (Darlington 2009;Meardi 2014). It has now widely been replaced by the Human Resource Management paradigm (Steyaert & Janssens 1999). Labour flexibilization at the macro level has been complemented by the emergence of individualized employment relations longer based on collective bargaining (Darlington 2009;Radoynovska 2018). Contracts are now, on average, of shorter term, firing is easier, a performance-based component of salary is common in many industries, and flexible career paths within or outside the organization are considered the norm. The variability of employment conditions under the same employer is larger today because of the coexistence of younger and older generations with different contracts and because of the personalization of arrangements (Brown 2001). Flexibility is now embedded in contracts and in work practices and is assumed as a structural component of life by those born after the 1970s (Thompson 2013). Diversity, or inequality, of compensation at work (Decker 2010;Lucas 2015) has become greater for several reasons (Bloom 1999): (1) because of the wider gap between executive and subordinate salaries, (2) because of more competitive internal and external labour markets and (3) because of weaker or lower skilled actors joining the labour market (women, migrants, workers with learning difficulties). Unsurprisingly, compensation has become a confidential aspect of employment relations. But inequality hides also behind other aspects of the employment relationship, for instance training (Sutherland 2016). Kristal and Cohen (2016) found that much of rising inequality in the United States is driven by worker disempowerment rather than by market forces. Kerrissey (2015) successfully attempted to measure how inequality at the national level is connected to industrial relations and labour-market regulation. She found that strong labour rights are tightly linked to lower inequality across a large range of countries. Labour flexibility at macro level According to Thompson (2013), financialization has been interacting with, accelerating, and exacerbating longer term trends such as labour-market insecurity, externalization and internationalization. Within financial capitalism, the bargaining position of capital relative to labour in higher income countries has increased significantly (Feenstra 1998). The decline in union power may well account for a portion of the increased wage inequality in the United States and in other countries (Borjas and Ramey 1995;Gordon 2012). Of particular interest seems to be the case of the United States where an inverse relation between trade-union membership and inequality is clear throughout most of the 20th century (see Figure 1). 
Gordon (2012) argues that between the New Deal, which granted workers collective bargaining rights, and the end of the 1960s, 'labour unions both sustained prosperity, and ensured that it was shared'. Since the 1970s, and in particular, during the Reagan administration, 'unions came under attack -in the workplace, in the courts, and in public policy'. As a result, union membership has fallen and income inequality has worsened, reaching levels not seen since the 1920s. Other studies argue that also between 1975 and 1995 trade-union membership declined and inequality increased. Pontusson (2013) suggests that this can be explained by a weaker interest in equality and redistribution policies among workers whose living and working conditions had improved. Trade-union density alone is probably not enough to encapsulate the power relations in a labour market (Jaumotte & Osorio Buitron 2015), but the results are the same using other labour-market indicators: for instance, active labour-market policy, passive labour-market policy, bargaining wage coordination, unemployment subsidies, minimum wage, employment-protection legislation. Tridico (2018) shows that the more protection in the labour market, the lower the Gini level, and vice versa in a large longitudinal sample of OECD nations. A similar result was obtained by Butcher et al. (2012) and by Tridico and Paternesi Meloni (2018) who found that minimum wages have an impact on wage inequality, particularly in the United Kingdom and the United States during the 1990s and 2000s. Jaumotte and Osorio Buitron (2015) also found that minimum wage reductions caused an increase in inequality but trade-union density played a much bigger role. In short, the lesser role of collective bargaining and the individualization (Heyman 2005) of employment relations entail growing inequality in working conditions among employees both of the same sector and of the same company. The political and managerial discourse on flexible working relations has been explained with (and enforced by) the complexity of markets and global competition. This discourse and the consequent surrender to the authority of markets and management have become significantly stronger only very recently. Like the individuals that once traded their natural liberty for the greater power of the community, workers today accept hierarchy, imbalance of power and inequality in the employment relation for the greater good of the organization whose survival and profitability is the source of their salary. Welfare and inequality Public spending is connected to both macro and organizational inequality. On one hand, public spending redistributes resources to taper income inequality (wage and pension integrations, housing and other benefits); on the other hand, publicly funded services allow workers to join the labour market (healthcare, education, childcare and benefits for working parents, work insurance). In other words, public spending constitutes a partial remedy to inequality and allows labour markets to work more efficiently despite income and organizational inequality (Moreno-Ternero & Veneziani 2017). However, while inequality has been growing, public spending on social services has declined, and so has the ability of the state to redistribute wealth during a period when it was needed. An ideological component of financial capitalism is the call to decrease the role of public spending. The welfare state represents to many another cost to compress, akin to labour. 
In order to improve firms' competitiveness and to boost economic growth, advocates of the so-called 'efficiency thesis' argue that social spending needs to be reduced (Allan & Scruggs 2004;Castells 2004). The development of corporate occupational-welfare policies (pension, healthcare, childcare, education services offered by the organizations in the form of employment benefits) can be observed in this perspective (Shalev 1996). Recently, they have been growing in a clear attempt to cover aspects of life and employment formerly fully supported by the welfare state. As Figure 2 shows, there is a clear negative relationship between inequality and welfare expenditures in the sense that countries that spend more on welfare generally have a lower level of inequality (Tridico 2018). A recent study (Bapuji 2015) explains that cutting social spending would not only affect macro inequality, but would also negatively affect organizations. Cooper (2008) studied the variability of opportunities among workers of different social-class background in the US labour market. He argues that in a 'risk society', less public spending threatens the opportunities of the disadvantaged but at the same time actually endangers society's 'winners'. Van der Wel and Halvorsen (2014) found that commitment at work increases as social spending gets more generous, while Wang et al. (2015) found that pay disparity contributes to higher turnover and endangers workers' participation and innovation. In short, corporate occupational welfare is an issue of inequality because such a welfare culture, unlike a state public welfare, might allocate opportunities and resources on a nonprogressive basis (everybody receiving the same benefits, regardless of income), if not a regressive one (the higher the position in the organization, the richer the occupational welfare-package received by the employee). Rousseau might argue that occupational welfare is an additional element of organizational social contracts. It was a phenomenon during Fordism and is re-emerging today as an explicit element of employment relations, while the provision of a welfare state declines. The surrender to the authority of management to enjoy occupational welfare is akin to the surrender to government to enjoy the provisions of public welfare. Both imply the acceptance of inequality. Discussion To sum up our review of macroeconomic inequality, inequality increases when financialization increases, when labour flexibility increases, when trade unions are weaker and when the level of public social spending decreases. Tridico (2018) has modelled those variables and proved their relationship with income inequality among OECD countries in the period 1990-2013. At organizational level, it has not been possible to conduct longitudinal tests with an international dataset to model the causes of inequality, unlike what several studies have done at macroeconomic level. However, reviewing management and organizational literature, while it cannot provide equivalent data, does allow us to come to similar conclusions. Organizational inequality increases as per the effect of finance and financial incentives in organizations (Davis 2009;Erturk et al. 2005;Fligstein 1987), flexibility and individualization of employment relations (Bloom 1999;Crouch 2015;Heyman 2005), trade-union power and its formal role in industrial relations (Darlington 2009;Fox 1985;Meardi 2014;Ramsay et al. 
2000), and the extent to which occupational welfare is an alternative to public welfare and contributes to intra and inter organizational inequality (Shalev 1996). Individual biases (World Bank 2015) and national culture (Le Garrec 2018) also explain different attitudes towards organizational inequality. These are the result of institutional work, co-optation and persuasion at either macro or micro level (Braverman 1974;Burawoy 1979;Cooke & Kothary 2001;Engels 1987;Fox 1985;Hofstede 1980;Tridico 2012;Holman 2013;Marx 1991Marx [1867 ;Sennet 1998;Thompson 1963). Inequality remains an issue of hierarchy at both the macro and organizational levels. As it was for Rousseau, so it is today but it is much more layered than in Rousseau's day: inequality in society is the accepted degree of hierarchy among its members, inequality in the economy and at work is the extent to which, accepted or not, there is an imbalance of power, financial resources, remuneration of work and access to opportunities (Peragine 2004) and services. The increase in inequality is due to a radical change in the socioeconomic model of advanced economies. This change involves a shift towards financialization, a pressure on labour through flexibility, the decline of trade unions' power and the retrenchment of public social spending. We have contributed to theory development with an original conceptualization at both the macro and micro level, combining insights from an economist and an organizational scholar. This unconventional approach demonstrates the usefulness of uniting economics and management literatures. Economists often overlook management and organization, thus either neglecting to support their arguments at the micro level or failing to defend accusations that their models are oversimplifying. Similarly, organizational scholars often overlook economics. The consequence of this is that organization studies rarely address the macro level implications (efficiency, well-being, unemployment, growth, innovation, inequality) of what happens at the firm level. In doing so, they also fail to realize that macro-level problems (such as inequality, poverty, unemployment) are rooted in individual and organizational actions. Conclusion Organizational inequality concerns all possible forms of divergence in the treatment of and opportunities for workers in the same organization. Finance plays a role in increasing the divergence of treatment of members of organizations and in justifying the acceptance of inequality. The evolution of human-resource management practices towards individualization of employment relations can also contribute to the institutionalization of inequality among workers. Some national institutions, whether systems of industrial relations or models of welfare state, may be able to slow down or accelerate organizational inequality. The managerial rhetoric on flexibility, but also organizational and national cultures provide a justification for the acceptance of growing levels of inequality in organizations. Moreover, both macroeconomic and organizational inequality are based on the same discourse: policymakers (national level) and managers (organizational level) face inescapable decisions, actions that modernity makes unavoidable. As Rousseau might frame it, inequality is the consequence of accepting a social/organizational contract where, supposedly, progress is traded with equality and freedom as a mandatory path towards modernity. 
While Rousseau focused on the specialization of roles in the society (political, military, religious, administrative, etc.), Marx focused on the division of labour at work. In both cases, the acceptance of inequality resides in the acceptance as necessary of those hierarchical divisions, be these different roles in the society or different tasks in a factory, and the consequent income and wealth inequalities. Class is being accepted as a necessary characteristic of modern nation states (Pozo 2007) and functional to capitalism. Inequality is being accepted as a characteristic of both capitalism and class. Organizational inequality is not in itself an explanation of macro inequality, which has to do with macro trends such as the increasing divergence between the remuneration of capital and the remuneration of labour. However, macro and organizational inequality have the same causes: reforms of labour-market regulation and weakening of trade unions. As such, both forms of inequality warrant greater joint attention from economists and organizational scholars. Inequality is arguably among the core concepts of the social sciences, pivotal to the contemporary relevance of economic, sociological, political and cultural analysis. Yet, organizational inequality has been rarely acknowledged, let alone adequately conceptualized within the academy. This article is conceived as a modest contribution to the opening up of an important debate concerning the contribution of organizations, their dynamics and their modus operandi, to the reproductive capacity of contemporary forms of capitalism. Further study of the nature of organizational inequality, its causes, and its alignment with macroeconomic inequality, will enhance our conceptual and theoretical understanding of the economic dynamics and socio-political consequences of a capitalist model in which macroeconomic inequality and organizational inequality appear to have taken on an increasingly symbiotic role and significance. We have focused on the effects of inequality, be it macro or micro disparity in pay or wealth. But the true inequality is the cause of those effects, our tolerance, acceptance of diversity of treatment and social and political hierarchy. Rousseau told us that we tolerate it because we accepted the social pact, as today we accept the corporate, organizational pacts. Marx would argue that the problem is the base, not the effects of the superstructure. Welfare state can mitigate the effects of inequality but not the causes and not our customary acceptance of social and political hierarchy. The redistribution operated by the tax system and by public services still gives for granted the primary inequality. Hence, the studies on macro and organizational inequality should focus not on the effects (and their partial mitigation) but on the origin (the base), and the economic, social and political institutions (the superstructure) that defend inequality as necessary and overall convenient in modern and complex societies.
9,153.2
2020-09-21T00:00:00.000
[ "Economics" ]
Break-even year: a concept for understanding intergenerational trade-offs in climate change mitigation policy Global climate change mitigation is often framed in public discussions as a tradeoff between environmental protection and harm to the economy. However, climate-economy models have consistently calculated that the immediate implementation of greenhouse gas emissions restriction (via e.g. a global carbon price) would be in humanity’s best interest on purely economic grounds. Despite this, the implementation of global climate policy has been notoriously difficult to achieve. This evokes an apparent paradox: if the implementation of a global carbon price is not only beneficial to the environment, but is also ‘economically optimal’, why has it been so difficult to enact? One potential reason for this difficulty is that economically optimal greenhouse gas emissions restrictions are not economically beneficial for the generation of people that launch them. The purpose of this article is to explore this issue by introducing the concept of the break-even year, which we define as the year when the economically optimal policy begins to produce global mean net economic benefits. We show that in a commonly used climate-economy model (DICE), the break-even year is relatively far into the future—around 2080 for mitigation policy beginning in the early 2020s. Notably, the break-even year is not sensitive to the uncertain magnitudes of the costs of climate change mitigation policy or the costs of economic damages from climate change. This result makes it explicit and understandable why an economically optimal policy can be difficult to implement in practice. Introduction Potential solutions to climate change are often framed as a tradeoff between reducing humanity's impact on the environment on one hand and harming the economy on the other. More specifically, it is thought that we can reduce our climate change related impact by imposing a global price on carbon that results in an increase in the price of energy which entails a reduction in production and consumption (van Vuuren et al 2020). Under this framing, it is natural for people to strongly disagree about climate policy prescriptions since individuals will inevitably diverge in the relative value they place on environmental versus economic concerns. However, the issue of whether or not it is in humanity's collective best-interest to reduce greenhouse gas emissions may not be as complicated and subjective as the above framing makes it seem. In fact, as long as climate change costs the economy anything (and the cost increases steadily with emissions), then it is in humanity's collective best interest, on purely economic grounds, to restrict greenhouse gas emissions (Nordhaus 1977). In particular, it has been consistently calculated for decades that the 'economically optimal' greenhouse gas emissions pathway is one of immediate restraint, followed by consistent reduction to net zero emissions within about a century or sooner (Nordhaus 1992, Nordhaus 2017, Glanemann et al 2020. Despite this, climate change mitigation policy has been notoriously difficult to implement at the global level even with a United Nations Convention (UNFCCC 1992) long dedicated to doing just that. This gives rise to an apparent paradox: Why is it so difficult to motivate global society to implement greenhouse gas emissions reduction policies if these policies confer not only environmental but also economic benefit? 
Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI. One well-studied reason is related to the so-called tragedy of the commons (Hardin 1968) where each agent appreciates that it is in their own best-interest not to reduce their emissions despite it being in the global bestinterest to do so. An intergenerational version of the tragedy of the commons, that we investigate here, has to do with when emissions reduction policies would become economically beneficial. Economically optimal emissions reductions pathways (and their associated carbon price trajectories) are often calculated with Integrated Assessment Models (IAMs) like the Dynamic Integrated Climate Economy model (DICE) (Nordhaus 1992, Nordhaus 2017. These models weigh the benefits of avoided economic damages from climate change against the costs of mitigating greenhouse gas emissions and calculate the single emissions pathway that maximizes the present discounted value of global social welfare (W), In equation (1), L(t), is the global population, ρ is the pure rate of social time preference or the generational discount rate on welfare, c(t) is per-capita consumption and α is the elasticity of marginal utility of consumption (which can also be interpreted as generational inequality aversion, since economic growth implies that consumption increases for future generations). In DICE, 'economically optimal' refers to the emissions pathway that optimizes W-the present discounted value of total human utility integrated over time. In other words, DICE calculates the policy that would be put into place by an omniscient utilitarian social planner whose goal is to maximize global human well-being. The planner's goal is to maximize economic well-being for all present and future humans from a standpoint in time, where it assigns value to the present and future exclusively based on the discount rate. This framing is useful because it allows for the calculation of a single optimal policy for all people through time. However, when net costs and benefits are collapsed back into a time-integrated welfare function (W), it can obscure the question of when climate change mitigation begins to produce net economic benefits and thus where the dividing line is between generational sacrifice and generational benefit. There has been a great deal of discussion in the literature regarding how to weigh the wellbeing of different generations in the context of climate policy which has played out largely in debates over the appropriate choice for the rate of social time preference, ρ (Nordhaus 2007, Arrow et al 2013. Despite this vigorous discussion on trading off the wellbeing of current versus future generations, the specific point in time for which policy begins to confer net benefits to society is rarely highlighted. The purpose of the present paper is to highlight that this point in time exists and to discuss when it might be. Towards this end, we use the DICE model (Nordhaus 1992, Nordhaus 2017) to identify the break-even year: the year when net global per-capita consumption, under economically optimal policy, begins to exceed net percapita consumption under a no-policy case. We hope that this metric will be useful in not only explaining why climate policies can be difficult to implement but also that it will help facilitate the design of solutions like e.g. 
subsidizing the generation prior to the break-even year with additional taxes originating from the generation subsequent the break-even year (Kotlikoff et al 2019). Methods We investigate the break-even year with the DICE model (Nordhaus 1992, Nordhaus 2017 which is an idealized cost-benefit model for climate change mitigation policy. Below we briefly discuss the mathematical representations of how both climate change and climate change mitigation inhibits economic output in DICE. These impacts are complex functions of space and time but DICE attempts to aggregate their net effect in a simple 'damage function' which relates global economic output loss (Ω) to increases in global temperature above preindustrial levels (T) via a simple quadratic relationship, Economic damages from climate change The parameters in this equation are tuned to literature surveys on estimated economic damages at various levels of warming (Tol 2009, Tol 2018 and from there, are adjusted upward by 25% in an effort to account for nonmonetizable impacts to e.g. biodiversity (Nordhaus 2013). The climate sensitivity used in DICE is 3.1°C per CO 2 -doubling, consistent with recent estimates (e.g. Sherwood et al 2020). As we show below, the break-even year is not sensitive to the magnitude of this function (e.g. the magnitude of j 1 ). However, the break-even year would be sensitive to the shape and fundamental character of this function. For example, there is an active discussion in the literature on the degree to which economic damages to climate change are felt primarily instantaneously at each timestep (level effects) or if they are felt primarily on economic growth (growth effects) ( Also, even if global economic damages are felt on levels of production and they can be approximated roughly with a quadratic function of global temperature (equation (2)), specific regions will inevitably experience fundamentally different damage trajectories. For example, in some regions, there is reason to believe that economic damages may have a concave relationship with temperature rather than a convex relationship (i.e. damages experience saturation or diminishing returns with warming) (Ricke et al 2016). In regions dominated by such effects, the benefits of avoided damages from climate change mitigation may be weighted more towards the near-term which could push the break-even year nearer in time. Economic costs of mitigating climate change DICE models global aggregate mitigation costs as an instantaneous (i.e. in that timestep) loss of global output via a simple power function of the fraction of greenhouse gas emissions abated μ(t), represents the larger cost of carbon emission-free energy, like renewable wind and solar energy, relative to the combustion of fossil fuels (or equivalently, the cost of carbon capture and storage and/or atmospheric CO 2 removal). ξ(t) accounts for the non-policy induced reduction in the greenhouse gas emissions intensity of the economy through natural increases in energy efficiency (e.g. via improved technology or a transition to a more service-oriented economy) and increases in the fraction of primary energy produced from non-carbon emitting sources. The fraction of greenhouse gas emissions controlled, μ(t), is the choice variable in the DICE optimization framework. The convexity parameter θ>1 represents the notion that the expense of marginal emissions reductions increases with the fraction of emissions abated (Nordhaus 1991). 
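The social-welfare objective referred to as equation (1), the damage function referred to as equation (2) and the abatement-cost expression are all described in the surrounding text, but the typeset equations themselves did not survive extraction. The block below is a hedged reconstruction that assumes the standard DICE formulation; the exact notation of the original article may differ, and the symbol p_B(t) for the backstop price is my own labelling, since the article's symbol for it is not recoverable.

```latex
% Reconstruction assuming the standard DICE formulation (an assumption, not a quote
% from the article). Symbols follow the definitions given in the text: L(t) population,
% c(t) per-capita consumption, rho the pure rate of social time preference, alpha the
% elasticity of marginal utility, T warming above pre-industrial, mu(t) the abated
% emissions fraction, xi(t) the baseline emissions intensity of output, p_B(t) the
% backstop price of emission-free energy (naming assumed).
W = \sum_{t} \frac{L(t)\, c(t)^{1-\alpha}/(1-\alpha)}{(1+\rho)^{t}} \qquad (1)

\Omega(T) = j_{1}\, T^{2} \qquad (2)

\Lambda(t) = \frac{p_{B}(t)\,\xi(t)}{\theta}\; \mu(t)^{\theta}, \qquad \theta > 1
```

A minimal numerical sketch of the two cost channels follows. The functional forms mirror the reconstruction above; the coefficients (damage coefficient, backstop-cost path, convexity exponent) are illustrative assumptions in the spirit of DICE-2016, not values quoted in this article.

```python
def climate_damage_fraction(T, j1=0.00236):
    """Fraction of gross world output lost at warming T (deg C above pre-industrial),
    using the quadratic damage function Omega(T) = j1 * T**2 (j1 is an assumed value)."""
    return j1 * T ** 2


def abatement_cost_fraction(mu, t, theta=2.6, backstop0=0.07, decline=0.005):
    """Fraction of gross output spent abating a share mu (0..1) of emissions in
    year-index t. Costs are convex in mu (theta > 1) and the backstop coefficient
    declines over time as emission-free technology cheapens (coefficients assumed)."""
    backstop_t = backstop0 * (1.0 - decline) ** t
    return backstop_t * mu ** theta


# Example: damages at 3 C of warming vs. the cost of abating half of emissions in year 20.
print(f"Damages at 3 C of warming: {climate_damage_fraction(3.0):.2%} of gross output")
print(f"Cost of 50% abatement in year 20: {abatement_cost_fraction(0.5, 20):.2%} of gross output")
```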
Although this representation of mitigation cost is highly idealized, it is calibrated against, and thus produces similar results to, those originating from disaggregated process-based IAMs that simulate a full energy technology portfolio, cost reduction through learning, technology diffusion rates, regional disaggregation, capital costs, etc (Blanford et al 2009, Clarke et al 2009, Rogelj et al 2013, Kriegler et al 2014, Gillingham et al 2018. Similar to the damage function, the break-even year is not sensitive to the magnitude of the mitigation cost function but it would be sensitive to the shape and fundamental character of this function. Figure 1 illustrates the calculation of the break-even year using default parameter values in DICE (Nordhaus 2017). Figure 1(a) shows the mitigation costs and climate change induced economic losses (damages) as a fraction of total global economic output for both the optimal implementation of climate change mitigation policy (dashed lines) and a no policy case (solid lines). In the no policy case, there is no mitigation cost by definition, and the climate-change induced economic damages grow continuously, reaching ∼7% of global output lost per year by the middle of the 22nd century ( figure 1(a)). Under the optimal policy case, the cost of mitigation starts small and ramps up slowly over the 20th century, peaking at ∼1% of gross output spent per year on mitigation in the 2120s. This mitigation cost is sufficient to limit damages such that they stabilize at ∼3% of gross output lost per year by the 2120s (figure 1(a)). Results In DICE, damages from climate change are perpetually higher than mitigation costs ( figure 1(a)). However, the break-even year depends on when the current year benefits of cumulative mitigation effort exceed the current year costs associated with that ongoing effort. Thus, the break-even year can be illustrated schematically by subtracting the damage costs in the optimal policy case from the damage costs in the no-policy case and displaying the absolute value of this benefit of avoided damages against the cost of mitigation ( figure 1(b)). The equivalent break-even year expressed in terms of the mitigation policy's impact on global per-capita consumption is shown in figure 1(c). The calculated economically optimal policy entails an initially modest and slowly ramped reduction of greenhouse gas emissions, which effectively spreads mitigation costs over time (Tol 1997). The benefits of avoided damages are subject to geophysical time lags (Tebaldi and Friedlingstein 2013, Samset et al 2020) and thus they do not emerge strongly until the 22nd century. Thus, despite the higher weighing of the present compared to the future (a positive value of ρ in equation (1)), the near term is characterized by mitigation costs that are higher than avoided damages and the long term is characterized by avoided damages that are higher than mitigation costs. The point of cross-over, or the break-even year, is late in the century, approximately 2080 for economically optimal policy that begins in the early 2020s. Because mitigation costs eventually peak and then decrease with time (as technology progresses) but damages from climate change increase super-linearly, the benefits of mitigation in the 22nd century are much larger than the costs of mitigation over the 21st century (cf red area to blue area in figure 1(c)). We can now see explicitly why an economically optimal emissions reduction pathway may be difficult to implement in practice. 
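The crossover logic of figure 1(b) can be made concrete in a few lines of code. The sketch below implements the comparison described just above: find the first year in which that year's avoided damages exceed that year's mitigation cost, all expressed as fractions of output. The three input series are hypothetical placeholders chosen only to reproduce the qualitative shape the article describes (slowly ramping costs, later-arriving benefits); they are not DICE output.

```python
def break_even_year(years, damages_no_policy, damages_policy, mitigation_cost):
    """First year in which the avoided damages (no-policy minus policy damages)
    exceed that year's mitigation cost; returns None if that never happens."""
    for yr, d0, d1, m in zip(years, damages_no_policy, damages_policy, mitigation_cost):
        avoided = d0 - d1        # benefit of the cumulative mitigation effort felt this year
        if avoided > m:          # benefits first exceed the ongoing policy cost
            return yr
    return None

# Hypothetical series (fractions of gross output), shaped like figure 1:
years = list(range(2020, 2150))
dmg_no_policy = [0.0004 * (y - 2020) for y in years]               # damages grow without policy
dmg_policy    = [0.00015 * (y - 2020) for y in years]              # damages grow more slowly with policy
mit_cost      = [min(0.015, 0.00025 * (y - 2020)) for y in years]  # costs ramp up, then plateau

print(break_even_year(years, dmg_no_policy, dmg_policy, mit_cost))
```

With series shaped like these, the crossover lands decades after the policy begins, which is the pattern reported in the article; the specific year printed here depends entirely on the placeholder numbers.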
For DICE's social planner, global society can be thought of as a perpetual single entity. For that entity, the specific break-even year of the mitigation policy is of little consequence since welfare is maximized regardless of when the break-even year occurs and mitigation costs can be thought of as society investing in the near term for its own benefit in the long-term. However, if the level of focus is shifted from perpetual global society, towards the level of discrete generations of people, then the break-even year becomes consequential since generations' lifespans will disproportionately sample time periods of either economic loss or gain. Specifically, figure 1(c) shows that the economically optimal policy is not economically beneficial for global society over the next 60 years which is the majority of an average human lifetime. To examine this idea further, we calculate the effect of climate change mitigation policy on global lifetime per-capita consumption for various generations, as a function of birth year and lifespan (figure 2). This further illustrates that the net economic benefits from climate change mitigation policy may not be realized for some time. Specifically, every generation born prior to ∼2025 would need to live past their 120th birthdays for mitigation policy to induce an increase in time-discounted consumption. On the other hand, every generation born after ∼2075 would experience net gains in time-discounted consumption due to climate change mitigation no matter how long their lives are. Global life expectancy at birth in 2020 was around 72 years (WHO 2018), which if it were to persist, would indicate that people born near 2050 would be the first global generation to experience an increase in time-discounted consumption due to climate change mitigation policy. As was noted in relation to figure 1, the policy-induced gains in consumption for future generations are very large compared to the near-term losses for the current generations (note the change in scale of the color bar in figure 2). There is great uncertainty in both the costs of mitigating climate change as well as the economic damages from climate change (Diaz and Moore 2017, van Vuuren et al 2020). However, the break-even year for the optimal climate change mitigation policy is largely insensitive to the magnitude of these factors (figures 3(a) and (b)). When damages per degree of global warming increase, it is economically optimal to mitigate more in the near term, sacrificing more consumption in the near term, in order to limit more global warming in the long term and reap more benefits of avoided damages ( figure 3(a)). When mitigation costs increase per ton of CO 2 abated, it is economically optimal to mitigate less in the near term, sacrificing less consumption in the near term, which results in more global warming and less avoided damages in the long-term ( figure 3(b)). In both cases (changing the magnitude of mitigation costs and changing the magnitude of damages), however, the temporal asymmetry between the mitigation costs and the benefits of avoided damages remain proportional and thus the break-even year holds late in the 21st century (around 2080). The break-even year is sensitive to the social time preference (ρ) on the benefits and costs of implementing the optimal mitigation policy ( figure 3(c)). 
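The cohort calculation behind this figure can be sketched as follows: for a given birth year and lifespan, sum the policy-induced change in per-capita consumption over the cohort's lifetime, discounted back to the birth year at roughly the 5%/year rate the article uses. The `delta_consumption` function below is a hypothetical placeholder standing in for model output, with small losses before an assumed break-even around 2080 and larger gains afterwards.

```python
def lifetime_gain(birth_year, lifespan, delta_consumption, discount_rate=0.05):
    """Change in lifetime per-capita consumption under the mitigation policy,
    discounted back to the cohort's birth year at `discount_rate` per year."""
    total = 0.0
    for age in range(lifespan):
        year = birth_year + age
        total += delta_consumption(year) / (1.0 + discount_rate) ** age
    return total

# Hypothetical placeholder: annual per-capita losses before ~2080, gains afterwards.
delta_consumption = lambda year: -50.0 if year < 2080 else 400.0

for birth_year in (2000, 2050, 2075):
    print(birth_year, round(lifetime_gain(birth_year, 72, delta_consumption), 1))
```

Even with these made-up magnitudes, the qualitative result matches the article: cohorts born early enough see a net discounted loss over a 72-year life, while later cohorts see a net gain.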
Lower time preference rates indicate a higher relative weight put on the well-being of future generations and thus encourages expanded and intensified near-term mitigation costs for the sake of reaping more benefits of avoided damages further into the future. Thus, weighing the future relatively more, moves the break-even year further into the future ( figure 3(c)). Consumption is represented as a present discounted value with temporal discounting of 5%/year. The 5%/year discount rate was chosen to be consistent with the default DICE configuration where a 1.5%/year pure rate of time preference is combined with an elasticity of marginal utility and growth rate parameter to yield an effective discount rate of approximately 5%/year according to the Ramsey formula (Nordhaus 2013). If average life expectancy (∼72 years in 2020) does not change substantially, the global generation born near the middle of the 21st century (∼2050) would be the first to experience cumulative economic net benefit from climate-change mitigation policy. Discussion and conclusion Economically optimal climate change mitigation policy maximizes global social welfare integrated through time. Exploring costs and benefits as a function of time, however, reveals that there is a break-even year before which climate policy imposes a net loss on welfare. We have illustrated that the break-even year is insensitive to the magnitude of the costs of mitigation and to the magnitude of the benefits of avoided damages. We have studied global aggregate outcomes, but the break-even year would of course be sensitive to the sign of the costs and benefits of climate policy which might vary from country to country. For example, (Burke et al 2015) suggested that there was an optimal temperature for economic growth and that many high-income countries are currently on the cold side of this optimum. Thus, for these high-income countries, warming is initially beneficial and thus climate change mitigation represents a lose-lose for some period of time. This effect would likely push their country-level break-even year further into the future than the global break-even year. On the other hand, some countries may stand to gain immediately from climate change mitigation policy, if for example the policy made a naturally endowed resource relatively more attractive in the global market (e.g. uranium, lithium or wind and solar resources). If such a country was on the warm side of the optimum for economic growth (Burke et al 2015) then there may be no break-even year for that country because climate change mitigation policy would be a win-win from the outset. There are of course other barriers to implementing climate policy beyond the idea that it may not be in the economic best interest of the implementing generations or that it may be a lose-lose prospect for some countries. These include many cognitive biases that may cause humans to act less-than fully rationally. For example, the property that CO 2 is not detectible by human senses makes its danger inherently less salient. Furthermore, the deleterious impacts of CO 2 tend not to produce novel events but rather they alter the probability of familiar events like heatwaves, floods and droughts. Assessing these impacts thus requires a type of probabilistic thinking that is not intuitive (Newell and Pitman 2010). 
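The sentence above compresses the Ramsey discounting rule into prose. Written out, with a per-capita growth rate of 2.4%/year assumed purely for illustration and an elasticity of about 1.45 (the commonly cited DICE default the text alludes to), the arithmetic behind the quoted ~5%/year is:

```latex
% Ramsey rule relating the consumption discount rate r to the welfare parameters in
% equation (1): rho is the pure rate of time preference, alpha the elasticity of
% marginal utility, g per-capita consumption growth. The 2.4%/yr growth figure is an
% assumption chosen only to reproduce the ~5%/yr value quoted in the text.
r = \rho + \alpha\, g \;\approx\; 1.5\% + 1.45 \times 2.4\% \;\approx\; 5\%\ \text{per year}
```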
Another hurdle is that concern about global warming has become intrinsically tied to political identity in several countries (Unsworth and Fielding 2014) which makes evidencebased reasoning less persuasive than it would be otherwise. One way to potentially overcome some of the above barriers would be to emphasize the air-quality cobenefits of climate change mitigation. The DICE model estimates economic damages associated only with the CO 2 release from fossil fuel burning and it does not incorporate estimates of economic damages from degraded air quality (from the negative health effect from sulphate aerosols, photochemical smog, etc). This is relevant because air pollution impacts are much more salient than climate impacts (e.g. you can see and smell air pollution) and they are experienced locally in both space and time (Peng et al 2017, Markandya et al 2018. These characteristics mean that their alleviation represents a benefit that is not subject to the same long delay that the climate-related benefit would be and they are benefits that could be appreciated within a term of a politician. Overall, the schematic, first order calculations presented here highlight that the implementation of an economically optimal global climate change mitigation policy will impose a net cost on the current generation. This clarifies one reason why it is practically difficult to implement a policy that is supposedly both environmentally beneficial and economically optimal. Our purpose is to bring this important issue to the foreground in order to stimulate discussions of potential solutions. Regardless of the specific solution to this problem we believe that in order for it to be overcome, it should be grappled with explicitly rather than obscured. Figure 3. Effect of economically optimal climate change mitigation policy on annual per-capita consumption (relative to the no policy case) for different magnitudes of damages (a), different magnitudes of the costs of mitigation (b) and different rates of social time preference (c). The break-even year is insensitive to the magnitude of the costs of mitigation and the magnitude of the costs of damages. The break-even year moves further into the future with lower social discount rates because giving a higher relative weight to the long-term encourages more sacrifice in the short-term. The vertical dashed lines represent payback periods-the point in time after which the cumulative sum of (non-discounted) consumption becomes positive.
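The figure 3 caption distinguishes the break-even year (the first year in which annual benefits exceed annual costs) from the payback period (the first year in which the cumulative, non-discounted consumption difference turns positive). A minimal sketch of the latter, reusing the same kind of hypothetical consumption-difference series as in the earlier snippets:

```python
from itertools import accumulate

def payback_year(years, delta_consumption):
    """First year in which the running (non-discounted) sum of the policy-minus-
    no-policy consumption difference becomes positive; None if it never does."""
    for year, running_total in zip(years, accumulate(delta_consumption)):
        if running_total > 0:
            return year
    return None

# Hypothetical series: small annual losses before an assumed ~2080 break-even, gains after.
years = list(range(2020, 2200))
delta = [-50.0 if y < 2080 else 400.0 for y in years]
print(payback_year(years, delta))  # later than the break-even year, because the early
                                   # losses must first be repaid by the later gains
```

This makes the distinction explicit: the payback year necessarily lags the break-even year, since accumulated early losses have to be recovered before the running total turns positive.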
5,055.4
2020-09-01T00:00:00.000
[ "Environmental Science", "Economics" ]
The effectiveness of health care finance in promoting health: does the condition of health get better by spending more? Medical research in the United States remains a global reference, endowed with unrivalled financing, a source of endless advancements, and recognized with many accolades; with 45 per cent of the winners, the United States outrageously dominates the Nobel Prize for Medicine. The volume of health spending in the United States is far more than any other country; however, the health outcomes are far below expectation. An American child Born in 2016 will live on average 78.6 years, which places the country around the thirty-fifth place in the world, somewhere between Cuba and Qatar; the United States has other modest results, as evidenced by the ranking of countries in terms of infant mortality in 2015, which placed the country 33rd out of 35 member countries, ahead of only Turkey and Mexico. Although the United States ranks 35th out of 190 countries based on infant mortality in 2015, it is still far behind Cuba, which was 30th and the first “non-high” income country. In 2016, US health expenditures/gross domestic product (GDP) exceeded 16%, with an average of 10,000 USD/inhabitants, while Cuban health expenditures/GDP did not exceed 11% during the same period. We aim through the present work to show that the state of health doesn't improve by spending more. However, it improves by spending more on programs that we know from the evidence can improve health outcomes. Abstract Medical research in the United States remains a global reference, endowed with unrivalled financing, a source of endless advancements, and recognized with many accolades; with 45 per cent of the winners, the United States outrageously dominates the Nobel Prize for Medicine. The volume of health spending in the United States is far more than any other country; however, the health outcomes are far below expectation. An American child Born in 2016 will live on average 78. 6 Opinion We will first provide proof of our hypothesis by comparing the American healthcare system to that of Cuba, then to that of Spain. Several studies have focused on the characteristics and specificities of the Cuban health system, as well as its effectiveness and efficiency [1]. The Cuban government has achieved universal access to health care for all categories of the population. Despite relatively limited resources and the devastating effect of the United States economic sanctions enforced for more than a half-century. Cuba has managed to generalize access to health care for all categories of the population, and to achieve results similar to those of the most developed countries: Cuba is the only country with a health system closely linked to research and development [2]; this small nation has built its health system around preventive medicine, the results have been extraordinary so far [3]; it has the lowest infant mortality rate (4.2% live births), it is lower than that of the United States, and it is among the lowest in the world [4]. Although Cuba's political and economic system does not converge with other countries on the continent, it remains that the effectiveness of health financing and the results achieved in Cuban public health... may be a model for Low-and Middle-income Countries. Cuba is one of the best countries in the Americas and the Third World, with results similar to those of the most developed countries, this country recorded a life expectancy of around 78 years. 
This life expectancy is the highest for people over 60 years in Latin America [5]. We must not forget that Cuba has been under an embargo for 60 years, and that has not stopped that country from training doctors and then sending thousands of them to middle-income or low-income countries like Algeria, or helping high-income countries like Italy during the COVID-19 pandemic. Spain is positioned as the third-best health system, behind the untouchables Hong Kong and Singapore, in the Bloomberg ranking of 2019 [6]. Spain reaches the top of the ranking of the healthiest countries in the world, moving from sixth place in 2017 to the first place in 2019. The Bloomberg index is based on many criteria, such as: life expectancy, risks related to alcohol and tobacco, access to drinking water, the practice of physical activity, etc. Unfortunately, the United States is not among the 20 richest countries in the world in this ranking [7]. Spain is among the countries of the European Union which will have very high life expectancy by 2040; the population of this country will even have the longest lifespan in the world, nearly 86 years [8]. Public health care providers offer primary health care to children, women and the elderly, as they offer, in the majority of cases, acute care and longterm care. The number of people dying from cardiovascular diseases and cancers in Spain has dropped significantly over the past decade. We are now going to analyse the thesis and the antithesis, as we are going to look for the causes that place the American health care system in an uncomfortable rank, which does not always match the volume of financing of the health care system of this country. We believe that there are some who do not agree with the idea of comparison, and there are even some who may say that it will not be fair to compare the US health system with that of Cuba because the United States has one of the most competitive and complex healthcare systems. If all things remain equal, more spending will improve health services and therefore better health. Of course, there is little evidence on the causal relationship between increased public spending and the positive impact on the health status of a population. However, when comparing the contexts of poor and rich countries, the effect of public health expenditure will be very interesting, by increasing its volume, we can clearly see the positive results on the populations of low-and middle-income countries. What will be certain is the inverse relationship between public expenditure and that of the private sector in this sector. However, what is guaranteed as results is not the case for high-income countries, in this case the USA [9,10]. Another explanation is the ability of high-income countries to better harness additional public health spending, in order to improve their health outcomes, in ways that other countries do not. If we return to the case of the American health care system, there is another point to consider: that of government regulation. If we take the case of prescription drugs, one of the biggest drivers of overspending in this country, the United States allows market competition to dictate prices; Pharmaceutical companies and insurers are both responsible for prices, and the volume of drug expenditure. However, in countries where health expenditure is lower, there is government intervention that controls prices either directly or indirectly. 
If we take the United States as an example, you can see why there is no absolute correlation 1 to 1 between health expenses and health results. The United States has an amazing excess of the quality of the health care services provided for those who are insured, which counts in this analysis. However, we believe that if we adopt a more global point of view, we will find a certain correlation between spending and results, which has different degrees. It is also possible that any correlation that you observe is not causal, because public expenditure to improve daily life: sanitation, access to food and clean water, etc. is also important to improve the state of health of the population. However, the relationship is bidirectional, and there is evidence that health care expenses have net economic benefits. More health investments cause better quality of health care. However, the difficulty is that the organization of the health system is not the same for all countries. However, we know how prices are established in the USA; there are monopolistic pricing conditions in force [11]. What is the cost of hepatitis C antiviral therapy? Well, according to the US monopoly system, the average price is 80,000 USD; In Spain, it is around 60,000 USD for the health care provider; And in India, which does not follow this approach to determine the price, it is 130 USD [12]. It is clear that the United States has always spent more than 17% of its GDP on health, to finance a system that could simply be considered onerous and unfair. Everyone knows the reputation of notable institutions: Mayo Clinic, Mount Sinai Hospital, Cleveland Clinic, Johns Hopkins etc. But these prestigious establishments, at the highest level of quality and progress in terms of care, struggle to accommodate patients without health coverage, or with limited coverage, or who find it difficult to take charge of complex care or costly procedures, by sticking to a restrictive list of interventions and medications. Similarly, outpatients insured by the Health Maintenance Organization (HMO) are often refused reimbursement for an act or treatment deemed inappropriate or even non-essential by the insurer [11,13]. If the answer to the question is now quite clear, what are the solutions that can make a health system more efficient? The answer to this question requires the collaboration of researchers to find solutions to the health problem, which do not involve increased expenditure. Compared to other industrialized countries, health inequalities in the United States are among the largest in the world. Therefore, we believe that investing more money would not solve the problem of health equity with a health system like the one that currently exists. We believe that as researchers; we are challenged to take this issue seriously, until we can change the way operations are conducted on it, especially around health. We believe that health status improves by spending more on programs that we know, based on the evidence, can make a difference in health outcomes. Therefore, it is not a question of the amount of budget but a question of how these funds will be used; what are the priorities? What are the percentages allocated to each area? And how is this allocation linked to evidence, studies and outcomes for better health? We are not saying that the American healthcare system is a bad health care system; even the best are not perfect. 
In Spain, for instance, doctors are mostly public officials, hired by the government and modestly compensated, but protected from dismissal unless they commit a significant wrong. Thus the objective, according to the Spanish national health system ("Instituto Nacional de la Salud"), is to avoid making a serious error that breaches the rules, the latter being recommendations of good practice from the WHO [7,14]. The goal, then, is not to cure everyone; rather, it is to obey instructions, and you will be a faithful servant, perhaps even a competent physician. Although the system seems less than optimal, it has proven so effective that Spaniards enjoy one of the longest life expectancies in the world, despite health insurance that costs around USD 100 per person and is available to everyone. The problem is that the system is not both sustainable and fair. When a system is in crisis, it is considered obsolete! This happened during the COVID-19 pandemic. Spain, with a population of 47 million inhabitants and one of the European countries most affected by the pandemic, had recorded, as of Wednesday, February 24, 2021, 3,161,432 cases and 68,079 deaths due to the coronavirus [15]. As in other countries, this toll is clearly underestimated, since many victims could not be tested during the first wave of the epidemic, in the spring of 2020, because of the saturation of the health system. The performance of a health system can be measured by its expected results. Put differently: if citizens do not fully benefit from care services in terms of demographic, financial and organizational accessibility, and if the state of health does not reflect the glowing image of the world's leading power, then the reputation of a country cannot hide the shortcomings of its health system!
Engineering of an Anti-Inflammatory Peptide Based on the Disulfide-Rich Linaclotide Scaffold Inflammatory bowel diseases are a set of complex and debilitating diseases, for which there is no satisfactory treatment. Peptides as small as three amino acids have been shown to have anti-inflammatory activity in mouse models of colitis, but they are likely to be unstable, limiting their development as drug leads. Here, we have grafted a tripeptide from the annexin A1 protein into linaclotide, a 14-amino-acid peptide with three disulfide bonds, which is currently in clinical use for patients with chronic constipation or irritable bowel syndrome. This engineered disulfide-rich peptide maintained the overall fold of the original synthetic guanylate cyclase C agonist peptide, and reduced inflammation in a mouse model of acute colitis. This is the first study to show that this disulfide-rich peptide can be used as a scaffold to confer a new bioactivity. Introduction Peptides display a range of potentially useful biological functions such as anti-inflammatory [1], anti-cancer [2], anti-HIV [3], antimicrobial [4], and insecticidal [5] activities, among others. Peptides as drug leads have a range of advantages over small molecules and proteins, including target specificity, low toxicity, and low immunogenicity, but one major limitation for small, unstructured peptides is a lack of stability in vivo [6]. Small, unconstrained peptides can be degraded within a few minutes in the blood, which decreases their potential as therapeutic agents. Despite the stability issues that can be associated with peptides, there are several examples of peptides used in the clinic, including cyclosporine, Prialt® (ziconotide), and linaclotide. Cyclosporine is an 11-residue cyclic peptide that has famously revolutionized organ transplant therapy due to its potent immunosuppressant activities [7]. Prialt is a synthetic version of the cone-snail venom peptide MVIIA and is currently used for the treatment of chronic pain [8]. This cone-snail venom peptide is a calcium channel antagonist, containing 25 residues and three disulfide bonds in a cystine knot motif [9]. Linaclotide, a 14-amino-acid peptide with three disulfide bonds, interacts with guanylate cyclase-C, generating cyclic guanosine monophosphate (cGMP), and is currently in clinical use for patients with chronic constipation or irritable bowel syndrome [10]. Linaclotide is administered orally and improves bowel function and abdominal discomfort [11]. These three peptides are constrained either by a cyclic backbone or by disulfide bonds, highlighting the advantages of using covalent constraints to improve the therapeutic potential of peptides. We have recently used the cyclic peptide SFTI-1 as a scaffold to stabilize a tripeptide with anti-inflammatory activity [12]. SFTI-1 is a 14-residue cyclic peptide isolated from the seeds of sunflowers (Helianthus annuus), and is one of the most potent trypsin inhibitors known [13]. It contains two short antiparallel β-strands linked by a single disulfide bond, and the structure contains a network of hydrogen bonds which makes it extremely stable and amenable to engineering modifications [14,15]. MC-12 is a tripeptide originally derived from annexin A1, which we grafted into the binding loop of SFTI-1. Our grafted peptide, termed cyc-MC12, exhibited significantly improved therapeutic efficacy in a murine model of chemically induced acute colitis, and improved in vitro stability compared to MC-12 [12].
Colitis is one of the major forms of inflammatory bowel disease (IBD), a set of debilitating chronic inflammatory disorders of the gastrointestinal tract [16]. Current treatments are not satisfactory, and consequently new drug leads are being sought from a range of sources, including small molecules from plants and bioactive regions of larger proteins [17][18][19][20]. To further explore the potential of using disulfide-rich/cyclic peptide scaffolds for IBD applications, we have used the linaclotide scaffold for grafting the MC-12 sequence. Given that the linaclotide scaffold is highly constrained and orally active, we decided to explore its potential as a scaffold for engineering novel bioactivities. Linaclotide regulates guanylate cyclase C (GCC) and is used in the treatment of irritable bowel syndrome (IBS), but it has not been shown to modulate autoimmune diseases such as IBD, making this study the first to examine its potential as a scaffold in IBD. Peptide Synthesis and Purification The peptides were synthesized using fluorenylmethyloxycarbonyl (Fmoc) chemistry-based solid-phase peptide synthesis with 2-chlorotrityl chloride resin on a 0.1 mmol scale. Amino acids (2 equiv.) were activated with 5 equiv. of N,N,N′,N′-tetramethyl-O-(1H-benzotriazol-1-yl)uronium hexafluorophosphate and 10 equiv. of N,N-diisopropylethylamine in dimethylformamide (DMF) (1.5 mL). Deprotection was carried out in two repetitions: 2 min of 20% piperidine in DMF (5 mL), followed by 3 min of the same solution. The C-terminal amino acid was coupled manually to the resin, and the remainder of the peptide was assembled using a Protein Technologies PS3 synthesizer following the Fmoc approach. Peptides were cleaved from the resin using a mixture of trifluoroacetic acid (TFA)/water/triisopropylsilane (95:2.5:2.5) for 2-3 h. Each peptide was precipitated with diethyl ether after cleavage, dissolved in 50% acetonitrile/0.05% TFA, and then lyophilised. RP-HPLC was used for purification on a C18 preparative column (Phenomenex Jupiter 250 × 21.2 mm, 10 µm, 300 Å) with a 1% gradient of solvent B (solvent A: 0.05% TFA; solvent B: 90% acetonitrile, 0.05% TFA). Masses were analysed using matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) mass spectrometry. The reduced peptides were oxidised in 0.1 M ammonium bicarbonate (pH 8.5) buffer containing 2 mM reduced glutathione for 24 h at room temperature. The oxidized peptides were purified using RP-HPLC and masses analysed using MALDI-TOF mass spectrometry. NMR Spectroscopy and Structural Analysis After purification, the peptides were resuspended at a final concentration of ~0.2 mM, and spectra, including ¹³C HSQC spectra, were acquired at 290 K using a 600 MHz AVANCE III NMR spectrometer (Bruker, Karlsruhe, Germany). NOESY spectra were acquired with mixing times of 200-300 ms, and TOCSY spectra were acquired with isotropic mixing periods of 80 ms. Standard Bruker pulse sequences were used with an excitation sculpting scheme for solvent suppression. Spectra were referenced to internal 4,4-dimethyl-4-silapentane-1-sulfonic acid (DSS). NMR assignments were made using established protocols [21], and the secondary shifts were derived by subtracting the random coil αH shift from the experimental αH shifts [22]. The 2D NOESY spectra of MC12-linaclotide were automatically assigned and an ensemble of structures calculated using the program CYANA [23].
Dihedral-angle restraints were derived from the ³J(HN–Hα) coupling constants measured from the one-dimensional spectra. The final structures were visualized using MOLMOL [24]. TNBS Colitis Assay The animal experiments were conducted in accordance with guidelines approved by the James Cook University Animal Ethics Committee. Five male BALB/c mice (five weeks old) were used for each group. Mice were purchased from the Animal Resources Centre (Perth, Australia) and housed in the animal care facility unit at James Cook University in Cairns under specific pathogen-free conditions, with unlimited access to food and water in their cages. Mice were divided randomly into four groups: naïve, 2,4,6-trinitrobenzenesulfonic acid (TNBS, Saint Louis, MO, USA), MC12-linaclotide plus TNBS, and linaclotide plus TNBS. Mice received intraperitoneal (i.p.) injections of peptides at a dosage of 3 mg/kg body weight. Prior to intra-rectal administration of TNBS, mice were anaesthetized using a mild ketamine/xylazine solution. After anaesthesia, each mouse received 100 µL of 5% (w/v) TNBS solution in 60% ethanol by intra-colonic instillation using a 20-gauge soft catheter (Terumo, Tokyo, Japan), which was inserted into the anus and up to the colon. Mice were monitored daily for body weight, piloerection, survival, decreased motor activity, rectal bleeding, and stool consistency. Mice were humanely euthanised by gas asphyxiation: CO2 was applied directly to the individual cage for approximately 1.5 min, animals were removed from the cage, and death was confirmed. After the cull, the macroscopic pathology score was calculated for each colon. Briefly, colons were harvested, opened longitudinally, and washed with sterile phosphate-buffered saline. The tissues were assessed for changes in macroscopic appearance, and scored for pathological changes as follows: adhesion (0 to 3), bowel wall thickening (0 to 3), mucosal oedema (0 to 3), ulceration (0 to 3), and colon length as described previously [25]. All animal experiments were conducted in duplicate to ensure reproducibility of the findings. Peptide Design and Synthesis The linaclotide sequence contains three inter-cysteine loops, comprising two or three residues. To avoid changing the inter-cysteine loop sizes, we grafted MC-12 into the second loop, which contains three residues, as shown in Figure 1. Linaclotide and the grafted peptide (MC12-linaclotide) were synthesised by Fmoc solid-phase peptide synthesis, purified using RP-HPLC, and the mass was analysed with MALDI-TOF mass spectrometry. The purified, reduced peptides were oxidised in ammonium bicarbonate with glutathione as a shuffling reagent, and a single major product was evident based on RP-HPLC analysis. This major isomer was purified and the mass confirmed. Structural Analysis The structures of linaclotide and MC12-linaclotide were analysed using NMR spectroscopy. NMR spectra were recorded in aqueous solution. Two-dimensional TOCSY and NOESY spectra allowed assignment of the majority of the resonances, and the secondary chemical shifts were determined by subtracting random coil chemical shifts from the αH chemical shifts [22]. Cys1, Cys2, Cys5, and Cys6 could not be assigned for either linaclotide or MC12-linaclotide. It is likely that these residues have very broad peaks, which prevented detection. A comparison of the secondary shifts for the assigned residues is shown in Figure 2A.
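As a minimal illustration of the secondary-shift calculation described above (observed αH shift minus the random-coil αH shift), the short sketch below uses illustrative placeholder random-coil values rather than the tabulated reference set cited in the text:

```python
# Minimal sketch: secondary alpha-H chemical shifts as (observed - random coil).
# The random-coil values below are illustrative placeholders, not the reference
# set used in the study; substitute the tabulated values from the cited work.
random_coil_ha = {"G": 3.96, "A": 4.32, "C": 4.55, "E": 4.29, "Y": 4.55}  # ppm, illustrative

def secondary_shifts(sequence, observed_ha):
    """Return per-residue secondary shifts (ppm) for residues with assignments."""
    shifts = {}
    for i, (aa, obs) in enumerate(zip(sequence, observed_ha), start=1):
        if obs is None or aa not in random_coil_ha:
            continue  # unassigned residue (e.g. broadened beyond detection)
        shifts[i] = obs - random_coil_ha[aa]
    return shifts

# Example: a short stretch with one unassigned residue
print(secondary_shifts("CAEYG", [None, 4.10, 4.40, 4.60, 3.90]))
```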
The secondary shifts are similar between MC12-linaclotide and linaclotide, indicating that the peptides have the same overall fold. One-dimensional NMR spectra were recorded for MC12-linaclotide over the pH range 3.5 to 6 to determine whether the missing resonances became evident. Peaks corresponding to the amide protons of residues 1, 2, 5, and 6 were not present in any of the spectra. In the one-dimensional spectrum recorded at pH 6, many of the amide protons were not evident, indicating that they had broadened beyond detection. The three-dimensional structure of MC12-linaclotide was calculated using CYANA [23,27], based on NOE and dihedral angle restraint data [28]. Although the N-terminal region is disordered, a 3₁₀ helix spanning residues 7-10 was present based on analysis with MOLMOL. A superposition of the 20 lowest energy structures is given in Figure 3A, which highlights the disorder at the N- and C-termini, despite the high proportion of disulfide bonds in the peptide. To determine whether a non-aqueous environment stabilizes the structure, spectra were recorded in the presence of 100 mM deuterated SDS. A comparison of the secondary shifts in aqueous solution and in SDS is given in Figure 2B. The shifts are similar between the two conditions, indicating that SDS does not significantly influence the overall fold. However, a larger number of NOEs were evident in the NOESY spectra recorded in the presence of SDS, enabling a more well-defined structure to be calculated, as shown in Figure 3B. TNBS Mouse Colitis Model The effect of MC12-linaclotide in the TNBS-induced colitis mouse model was assessed. Mice were either left untreated (naïve), treated with TNBS alone, or treated with peptides at a dose of 3 mg/kg five hours prior to the administration of TNBS. On day 3, mice were humanely euthanised using gas asphyxiation and examined for assessment of protection against colitis. The TNBS-treated mice lost weight and did not recover during the experiment, consistent with inflammation and colonic mucosa damage. By contrast, MC12-linaclotide-treated mice displayed statistically significant protective effects against weight loss (Figure 4A). MC12-linaclotide treatment also resulted in statistically significantly improved macroscopic pathology scores compared to untreated mice that received TNBS (Figure 4B). No difference in colon lengths between groups was observed (results not shown).
By contrast, linaclotide did not display protective effects, indicating that the grafting of the MC-12 sequence was responsible for the bioactivity. Data were analysed using GraphPad Prism. Statistical analyses of weights were performed using two-way ANOVA, with multiple comparisons of the groups over different days. Macroscopic scores were analysed using unpaired Mann-Whitney non-parametric tests. All values are expressed as mean ± SEM. Results were considered significant when p < 0.05. Tissue p-IκB-α (Ser32) and p-NF-κB p65 (Ser536) Measurements Analysis of colon tissue homogenates for phosphorylated transcription factor levels in mice [29] shows that MC12-linaclotide-treated mice (MC12-linaclotide + TNBS) have levels of phospho-NF-κB p65 (Ser536) and phospho-IκBα (Ser32) that are not statistically different from those of the TNBS-only treated mice. There appears to be a trend towards lower levels of phosphorylated NF-κB in mice treated with MC12-linaclotide plus TNBS, which might suggest that the peptide is able to reduce the production of pro-inflammatory cytokines, but larger sample sizes may be required to reach statistical significance. Discussion Grafting bioactive sequences into peptide scaffolds is proving to be a useful approach for the design of novel drug leads [12,30,31]. Here, we show for the first time that the highly disulfide-rich peptide, linaclotide, can be used as a scaffold to confer anti-inflammatory activity in a TNBS mouse model of colitis. The MC-12 tri-peptide has previously been shown to have protective effects in mouse models of colitis [32], and we have shown that grafting it into the SFTI-1 scaffold improves the potency and stability of the peptide [12]. Our current study confirms the importance of the MC-12 tri-peptide sequence (QAW), as linaclotide alone did not alleviate the symptoms of colitis, in contrast to the grafted MC12-linaclotide peptide, which had a moderate but statistically significant influence on weight loss and macroscopic score. MC-12 is thought to interact with NF-κB and it is likely that the grafted peptides have the same mechanism of action, but this has yet to be explored. Analysis of the structures of the grafted MC12-linaclotide and linaclotide with NMR spectroscopy was complicated by the lack of spectral data in the N-terminal regions of both peptides. The missing peaks are most likely the result of structural flexibility in this region. Despite the incomplete assignments, the NMR analysis indicates that the synthetic forms of MC12-linaclotide and linaclotide used in the current study have similar overall folds. The three-dimensional structure of MC12-linaclotide was determined and, as expected, was disordered at the N-terminus, but displayed a relatively well-defined region corresponding to the grafted MC-12 sequence, as shown in Figure 3A.
The disorder at the N-terminus clearly results from the lack of assignments in this region, but this apparent flexibility is intriguing given the high percentage of cysteine residues in the peptide. A comparison of the secondary shifts of MC12-linaclotide in aqueous solution and in 100 mM SDS (Figure 2B) highlights the similarity between the two conditions. However, in the presence of SDS, the α-protons of residues 2, 5, and 6 could be assigned, in contrast to the spectra recorded in aqueous solution. Peaks corresponding to these residues were not present in the aqueous solution data, and were broad in the SDS spectra. The structures determined in the presence of SDS are more well-defined than those in aqueous solution, indicating that SDS has an impact on the dynamics of the peptide. In summary, we have shown that linaclotide can serve as a scaffold to accommodate a small bioactive sequence and allow appropriate binding to a biological target. In particular, linaclotide is an interesting scaffold for the design of novel lead molecules for inflammatory bowel disease. Overall, this study provides further insight into grafting bioactive sequences into stable peptide scaffolds.
Designing deep CNN models based on sparse coding for aerial imagery: a deep-features reduction approach ABSTRACT Traditional methods focus on low-level handcrafted feature representations, and it is difficult to design a comprehensive classification algorithm for remote sensing scene classification problems. Recently, convolutional neural networks (CNNs) have obtained remarkable performance outcomes, setting benchmarks on several remote sensing datasets. Furthermore, direct application of deep convolutional networks to UAV remote sensing images is extremely challenging given the high input data dimensionality and the relatively small amounts of available labelled data. We therefore propose a CNN approach to scene classification that architecturally incorporates a sparse coding (SC) technique for dimension reduction to minimize overfitting. Outcomes were compared with principal component analysis (PCA) and global average pooling (GAP) alternatives, and with pre-trained CNN architecture(s) that use fully connected layer(s), to minimize overfitting. SC was used to encode deep features extracted from the last convolutional layer of pre-trained CNN models by using different features maps, in which deep features were converted into low-dimensional SC features. These sparse-coded features were concatenated by means of different pooling techniques to obtain global image features for scene classification. The proposed algorithm outperformed current state-of-the-art algorithms based on handcrafted features. When using our own UAV-based dataset and existing datasets, it was also exceptionally efficient computationally when learning data representations, producing a 93.64% accuracy rate. Introduction UAV imagery is used for environmental monitoring and the monitoring of various resources. These remotely acquired data hold spatial and spectral properties that can be analyzed and applied for modeling (Jaakkola et al., 2010). UAV-based systems are cost-efficient, reliable solutions for the monitoring of land-based objects. Limitations include non-uniform terrain such as hills, mountains and forests. Deployment is sometimes difficult when attempting to access resources, and legal constraints can also hinder UAV monitoring. Very high-resolution (VHR) acquired data are used for classification but hold wide-ranging variation regarding objects of interest. Increasing levels of detail increase computational complexity when using pixel-based classification approaches and can introduce uncertainties in the spectral signatures of urban objects. Acquired images also contain numerous statistical properties due to pixel dimensionality; hence, they are sometimes difficult to classify. Several papers on remote sensing images have considered improved classifiers and feature representations. For scene classification, constructing a comprehensive (holistic) representation is a feasible solution. The Bag-of-Visual-Words (BOW) model (Sun, Sun, Wang, Yu, & Xiangjuan, 2012) for scene classification is effectively the most attractive solution currently in use by the remote sensing community. In the last few years, extensive research has considered more suitable image descriptors for particular classification tasks. Local image descriptors such as local binary patterns (LBP) (Otávio A.B. Penatti, Valle, & Torres, 2012), histograms of oriented gradients (HOG) (T.
Ojala, Pietikainen, & Maenpaa, 2002) and scale-invariant feature transform (SIFT) (Ojala, Pietikäinen, & Topi, 2000) have been used as handcrafted feature descriptors, especially for scene classification and object recognition tasks. Another feature descriptor is based on SIFT-BoVW, and has been successfully applied in remote sensing for image scene classification tasks. Although these studies proved effective for scene classification of VHR images, there have been no major improvements in BoVW-type techniques due to the discrete constraints presented by the BoVW model. Currently, deep learning methods (Maggiori, Tarabalka, Charpiat, & Alliez, 2016; Zhang, Zhang, & Bo, 2016; Zeiler & Fergus, 2014) have been successfully applied to classical image problems, including object recognition and detection, natural language processing, speech recognition and scene classification. Deep learning techniques have achieved much success, with numerous improvements over state-of-the-art classical feature extraction methods. As such, they are of significant interest in academic and industrial communities (Bengio, 2009). Deep convolutional neural networks (CNNs) (Robinson & Yun, 2016) are recognized as the most worthwhile and are used in leading deep learning approaches for object recognition and detection tasks (Scherer, Müller, & Behnke, 2010). Their success is due to the efficient use of graphics-processing units (GPUs), rectified linear units, new dropout regularization and effective data augmentation (Chen & Lin, 2014). CNNs are popular in the computer vision community and in various applications based on natural language processing, hyperspectral image processing and medical image analytics. Their major advantage lies in their deep architecture(s), which allows for the extraction of a set of discriminating features at multiple levels of abstraction. However, training a deep CNN from scratch (full training) presents limitations. First, it requires large amounts of labelled data. Second, deep training necessitates extensive memory and computational resources, without which the training process is extremely time-consuming. Third, deep CNN training is complicated by overfitting and convergence issues, the resolution of which often calls for repeated adjustments of the network's learning parameters. Consequently, deep learning from scratch is not only tiresome and time-consuming but also requires diligence, expertise and colossal amounts of patience. An alternative to training from scratch is to fine-tune a CNN trained on large datasets from different extant applications. Pre-trained CNN models have been successfully applied in various computer vision tasks as baselines for transfer learning or as feature generators (Robinson & Yun, 2016). Features extracted from pre-trained CNN models, whether used as fine-tuned initial or later network components, provide discriminating deep features useful for classification tasks. However, features extracted from the final layer of a pre-trained CNN occupy a particularly high-dimensional feature space relative to small UAV datasets. A high-dimensional feature space can produce overfitting. Overfitting problems are addressed by using (i) feature reduction approaches based on a standard principal component analysis (PCA); (ii) sparse coding (SC) with pre-specified dictionaries; or (iii) a global average pooling (GAP) layer.
In addition to introducing a GAP layer, these features-dimension reduction techniques can be applied to pre-trained CNN architecture(s) to classify aerial images using standard classification algorithms. The main contributions of this paper are as follows:
• Extraction of deep features from the final convolutional layer (C5) of a pre-trained CNN model.
○ Pre-trained CNN models limit the input image size because of the fully connected (FC) layer. To allow any image size, the fully connected layer can be removed and the final convolutional layer used for deep features extraction.
• Our dataset uses small training images; because of this size, pre-trained CNN models can produce overfitting.
○ Different features reduction techniques are used to reduce overfitting, including a dropout layer, regularization, PCA and GAP. We therefore propose an SC technique using a hybrid dictionary to compress CNN deep features into a lower-dimensional subspace to minimize overfitting. Performance of the proposed technique is compared with PCA, GAP, dropout and pre-trained CNN architecture(s) with fully connected layers (FC-CNNs) (see Section 5).
• The hybrid dictionary is based on the Ricker wavelet function and is used to generate ridgelet-based elements. It is introduced here specifically to extract sparse-coded features.
• Different spatial pooling techniques were applied to the sparse-coded features to obtain global image features for scene classification.
• Different classification algorithms (SVM, RF and ELM) were tested to measure classification accuracy.
A CNN extracts features from high-level image data. Deep features provide detailed spatial information created by hierarchical structures. We investigated the characteristics of deep features using a classification framework with sparse representations. The objective was to introduce a features reduction technique utilizing the last layer of a pre-trained CNN architecture to minimize overfitting on small UAV datasets. Furthermore, instead of using a fully connected layer, the GAP layer is introduced to minimize overfitting in pre-trained CNN architecture(s). Comparisons were made with outcomes from existing deep-feature techniques using standard pre-trained CNN-based architecture(s) on a small UAV dataset and on state-of-the-art datasets. We also compared our proposed algorithm with existing handcrafted-feature-based classification techniques and report results that show greater accuracy and less overfitting. The GAP layer provided lower computational cost and also minimized overfitting with a small UAV dataset. Related works Features extraction techniques are used to reduce data dimensionality in high-dimensional remote sensing applications. The traditional data reduction method, per the literature, is PCA (Ham, Lee, Mika, & Bernhard, 2004). Different nonlinear dimensionality reduction methods, based on dictionary learning and manifold learning, have also been used for features reduction. The manifold learning method (Ham et al., 2004) is a state-of-the-art tool used for local descriptors in remote sensing image manifolds (Bachmann, Ainsworth, & Fusina, 2006). Auto-encoders have also been used (Del, Licciardi, & Duca, 2009) to extract shallow structures in remote sensing, but they are computationally expensive and require tuning of many parameters (regularization, learning rate), which heuristically limits the network structure.
The literature holds few reports on the use of deep architectures in remote sensing image classification by means of low-dimensional high-resolution images. The strength of deep networks in enhancing noisy aerial image classification has been explored (Goodfellow, Lee, Le, Saxe, & Ng, 2009). Another author proposed a hybrid deep neural network that explores scale-variant features to detect vehicles in satellite images (Xueyun Chen et al., 2014). A framework based on stacked auto-encoders has been used to classify high-resolution data (Hoo-Chang Shin, Orton, Collins, & Leach, 2013). Deep learning feature extraction methods can handle nonlinear spatial-spectral image analysis and efficiently train algorithms that go unnoticed by state-of-the-art frameworks. A dictionary based on sparse representation modeling can efficiently represent features in low-dimensional space. Different dictionaries can be used for image classification and spatial-spectral sparse representation. Sparse representation by means of learned dictionaries has been used to pan-sharpen images (Liping & Juncheng, 2014). A sparse Bags-of-Words tool can code automatic target detection (B. Zhao, Zhong, & Zhang, 2016). A saliency-based algorithm can code for segmentation (Zhang, Bo, & Zhang, 2015), and unsupervised learning using sparse features has been applied to aerial images (Ranzato, Huang, Boureau, & LeCun, 2007). Thus, SC facilitates unsupervised classification rather than unsupervised feature extraction. The present work employs SC for features reduction by incorporating it with well-known CNN architecture(s) to minimize overfitting. Dataset The system is based on a UAV equipped with state-of-the-art Global Navigation Satellite System (GNSS) receivers on the system board as well as at the ground base station. The battery endurance is almost 2 h (Michini et al., 2011). The images acquired have a resolution of roughly half a centimetre. The software controlling the UAV is shown in Figure 1(a,b). The trajectory and all flight and landing paths are given to the UAV automatically using the joystick and the controlling software associated with this UAV machine. Our UAV can fly at 72 km per hour, and the battery endurance is usually up to 3 h of flight. Our area of interest, for monitoring vegetation and trees near power transmission poles, is a village in Tambunan, Sabah, East Malaysia (Abdul Qayyum et al., 2017). The area is very challenging due to bad weather conditions, weather that changes within minutes, a cloudy environment and, for much of the year, a rainy season. The vendor's experience is that the only good window is between 6.30 and 7.30 in the morning. The ground sample distance (GSD) varies and depends on the height of the UAV: 5 cm GSD at a height of 150 m and 10 cm GSD at 300 m. The maximum height is 2000 m, and a 15 cm GSD corresponds to a height of 700 m, which is the normal range for data collection. In our experimental design, we used a height of 700 m over a span area of 15 square kilometres. The image samples used for classification belong to four different classes (buildings, power poles, roads, trees). Methodology The first step is aerial data acquisition using a UAV sensor. Deep features are extracted by pre-trained CNN models that employ small training images. The last convolutional layer, before the fully connected layer, is used for deep features. Features reduction techniques are applied to convert deep features into low-dimensional space.
For our purposes, different features reduction techniques were used together with the proposed SC tool, which is based on a hybrid dictionary, to express features in low-dimensional space. Finally, we employed different approaches to classification. Each step of the proposed method is now described. Introduction of CNN and pre-trained models The CNN has a variety of applications in the areas of image and pattern recognition, natural language processing, video analysis, object recognition, speech recognition (Abdel-Hamid et al., 2014), object detection and classification (Scherer et al., 2010), scene parsing and scene classification (Abdul Qayyum et al., 2017). Deep CNN models capture the data in a hierarchical way and are based on sequential modules, where the output of the previous module is the input of the next module. These modules are called layers. Each layer is parameterized by a set of random weights and bias units. The weights in CNN models are shared locally; the same weights are applied at every location of the input at every layer. A filter is formed by connecting the weights with an output unit. The CNN consists of a convolutional layer, which contains the input and a set of learnable filters, a nonlinear function, and a pooling layer, which sums the statistics of the features to reduce the computational cost. CNN architectures have numerous hyper-parameters (learning rate, momentum and regularization) for training the layers using input data samples. The operations accomplished in a convolutional layer can be summarized as A^l = pool_t(φ(A^{l−1} * f^l)), where A^{l−1} is the input feature map to the l-th layer, f^l = (w^l, b^l) is the set of learnable parameters based on the weights and biases of the layer, φ(·) is the pointwise nonlinearity, pool is a subsampling operation, t is the size of the pooling region and * denotes linear convolution. The input data for the first layer is the UAV colour image, i.e., A^0 = I, where I ∈ R^{V_0×H_0×C_0} is the input image; V_0 and H_0 are its width and height, respectively; and C_0 is the number of input spectral channels. The input to a subsequent layer l is a feature map A^{l−1} ∈ R^{V_{l−1}×H_{l−1}×C_{l−1}}, where V_{l−1} and H_{l−1} are the width and height of the l-th layer's input feature map and C_{l−1} is the number of outputs of the (l−1)-th layer. The most important steps in designing a CNN model are choosing the number of layers, the size of the receptive field and the type of spatial pooling; another important consideration is how to train such architectures. Deep CNNs can be trained using standard back-propagation (Kwon, Kim, Jinoh Kim, Suh, & Kim, 2017) algorithms. For classification problems, and especially scene classification, the main problem is how to generalize the algorithm when utilizing deep learning algorithms, for example a CNN (Kwon et al., 2017). A brief review of several successful modern CNN architectures used in this work follows. AlexNet AlexNet is an innovative deep CNN architecture; compared to older networks, it is a combination of five convolutional layers and three fully connected layers. The main advantages behind its success are its use of data augmentation, dropout and the rectified linear unit (ReLU) non-linearity. The ReLU function passes only positive values during the training phase; data augmentation is used to reduce overfitting with larger CNNs and uses small patches of images together with horizontally and vertically flipped patches of the original images.
The dropout method also reduces co-adaptation of neurons and substantially reduces overfitting in the fully connected layers. The first convolutional layer has 96 channels and a width and height of 55. The second convolutional layer consists of 256 planes with a width and height of 27 by 27. The third layer has 384 channels with 13 × 13 width and height, and the fourth convolutional layer has the same size as the third. The fifth convolutional layer has the same width and height as the fourth, but a different number of channels, equal to 256. Layers 6, 7 and 8 are fully connected layers, with 4096 dimensions in FC6 and FC7. FC8 has a different size, equal to 1000 dimensions. CaffeNet It is used for general-purpose CNNs and also in other deep models, with flexible and perhaps the fastest existing implementations for effective training. It has a similar architecture to AlexNet with two modifications: first, it is used without data augmentation, and second, it uses a different order of pooling and normalization layers. The performance of this network model is similar to AlexNet. VGGNet The VGG-F is very fast, simple to use and similar to AlexNet. The fundamental difference between VGG-F and AlexNet is that it uses a smaller number of filters and different strides in the convolutional layers. Furthermore, there are two other types of VGGNet, VGG-M and VGG-S. VGG-M is the medium architecture, with a change in the pooling and a smaller stride in the first convolutional layer. VGG-S is a slower, simplified architecture that uses a smaller number of filters in the fifth layer. In particular, VGG-F was used in this experiment because of its simplicity and fast response. GoogLeNet GoogLeNet (Hu, Xia, Jingwen, & Zhang, 2015) was proposed in the ILSVRC-2014 competition for classification and detection. Its main distinctiveness is the use of inception modules, which decrease the complexity of the lavish filters of traditional architectures. The inception modules have multiple filters used in parallel at different resolutions. GoogLeNet utilizes filters of various sizes at the same layer, which maintains more spatial information. It uses a small number of parameters in the network, which may lessen the overfitting problem. GoogLeNet has 22 layers and more than 50 convolutional layers dispersed inside the inception modules. CNN deep features based on sparse coding Features generated by the CNN model using spatial data are high dimensional and not effective for classification purposes. We therefore propose a SC technique using subspace-based representations of deep features that improves classification performance by reducing the features space. Images hold high-dimensional objects that present a difficult classification task for machines. Nonetheless, UAV images contain high-dimensional spatial and spectral features that can be represented in a low-dimensional subspace. Low-dimensional features are obtained by adapting fixed as well as adaptive dictionary atoms that employ training pixels of the same class. Test pixels of an unknown class are represented as a linear combination of dictionary atoms. SC is used at the last convolutional layer to reduce the features space acquired from the pre-trained CNN architecture. Such reduced dimensions mitigate overfitting for small datasets. Figure 2 outlines the proposed method used for features reduction.
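As a rough, hedged illustration of this step (not the authors' implementation, which used the pre-trained models listed above), the following sketch extracts feature maps from the last convolutional block of a pre-trained VGG16 using a recent torchvision release; the input file name is a hypothetical placeholder:

```python
# Illustrative sketch: deep features from the last convolutional layer of a
# pre-trained CNN, with the fully connected head removed. Because only the
# convolutional body is kept, other input sizes would also work.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
conv_body = vgg.features.eval()            # convolutional layers only, no FC head

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

img = preprocess(Image.open("uav_tile.png").convert("RGB")).unsqueeze(0)  # hypothetical file
with torch.no_grad():
    fmap = conv_body(img)                  # shape (1, 512, 7, 7) for a 224 x 224 input
feature_maps = fmap.squeeze(0).flatten(1).numpy()   # 512 feature maps x 49 spatial positions
print(feature_maps.shape)
```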
Sparse representation and the proposed hybrid dictionary Sparse representation is widely used in numerous signal and image processing applications, in machine learning, in neuroscience to study brain signals, and in biomedical imaging (Fang & Shutao, 2010). Its main purpose is to represent signals as linear combinations of typical patterns, called atoms, drawn from an over-complete dictionary. For instance, if y is a signal or 2D image that belongs to an n-dimensional space, then y ∈ R^n, and D ∈ R^{n×M} can be used to form a dictionary matrix whose M atoms are defined as column vectors d_j ∈ R^n, j = 1, ..., M. We are interested in finding a sparse vector x ∈ R^M such that y ≈ Dx. The problem then becomes one of optimization (Mailhe, Gribonval, Bimbot, & Vandergheynst, 2009): min ||x||_0 subject to ||y − Dx||_2 ≤ ε, where ε is the reconstruction error of signal y using dictionary D and x is the sparse code. Alternatively: min ||y − Dx||_2 subject to ||x||_0 ≤ ρ, where ρ is a specified sparsity level. The vector x represents the coefficients of a given signal y with respect to dictionary D. Compared to PCA, SC computes a sparse vector with a minimum of non-zero coefficients. Sparse coefficient construction relies on the l_0-norm, which counts the non-zero entries of a given vector. This construction is an NP-hard problem (A. Qayyum et al., 2016) and can only be solved approximately by greedy optimization algorithms. The simplest and most efficient of these algorithms are matching pursuit (MP) (Chen, Donoho, & Saunders, 1998) and orthogonal matching pursuit (OMP) (Mailhe et al., 2009). Their essential idea is to acquire the sparsest coefficients by efficiently employing dictionary elements. A dictionary is "over-complete" when the number of pixels in the image patch is smaller than the number of elements in the dictionary used for sparse representation of the image patch. This led us to construct over-complete dictionaries that have more atoms than dimensions in the signal, which guarantees representation of a wider range of signal phenomena. Our effort to minimize loss with an over-complete dictionary showed promising properties provided by orthogonal transforms. The most important questions are how to construct, and how to choose, an efficient dictionary that offers an optimized solution with sparse coefficients. Figure 3 illustrates a block diagram, based on SC, of the proposed approach using features maps from the last convolutional layer to build feature vectors for the training images. Selecting an over-complete dictionary is fundamentally important to the process of generating sparse recovery atoms. Tasks in image processing applications include fusion, super-resolution, denoising and others, depending on the type of dictionary used for sparse representation. In this paper, sparse representation is used to estimate basis elements responsible for the efficient generation of sparse coefficients. Therefore, a ridgelet-based over-complete dictionary was the better choice for sparse-based image classification. We propose the Ricker wavelet function, a negative normalized second derivative of the Gaussian function, to produce ridgelets as basis elements for the hybrid dictionary, for which experimental results demonstrated superior performance. The 2D ridgelets were defined based on a wavelet-type function, Equation (4), where ψ(·) is the wavelet function based on the Ricker wavelet, shown in Equation (5).
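The sketch below illustrates the two ingredients just described: atoms generated from the Ricker wavelet along rotated line profiles, and greedy sparse coding with OMP against the resulting over-complete dictionary. Since Equations (4) and (5) are not reproduced in the text, the standard ridgelet parameterization (scale a, offset b, angle θ) is assumed here, so this is an illustration rather than the authors' exact construction:

```python
# Hedged sketch: ridgelet-style atoms built from the Ricker wavelet, and sparse
# coding of a signal against the resulting over-complete dictionary with OMP.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def ricker(t, sigma=1.0):
    # Ricker ("Mexican hat") wavelet: negative normalized 2nd derivative of a Gaussian
    c = 2.0 / (np.sqrt(3.0 * sigma) * np.pi ** 0.25)
    return c * (1.0 - (t / sigma) ** 2) * np.exp(-(t ** 2) / (2.0 * sigma ** 2))

def ridgelet_atom(size, a, b, theta):
    # Standard ridgelet form (assumed): psi((x cos(theta) + y sin(theta) - b) / a) / sqrt(a)
    xs, ys = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size))
    atom = ricker((xs * np.cos(theta) + ys * np.sin(theta) - b) / a) / np.sqrt(a)
    return (atom / np.linalg.norm(atom)).ravel()        # unit-norm column atom

size = 8                                                 # 8x8 patches -> 64-dimensional signals
params = [(a, b, th) for a in (0.3, 0.6, 1.0)
          for b in (-0.4, 0.0, 0.4)
          for th in np.linspace(0.0, np.pi, 8, endpoint=False)]
D = np.stack([ridgelet_atom(size, *p) for p in params], axis=1)   # (64, 72): over-complete

y = np.random.default_rng(0).standard_normal(size * size)         # stand-in feature-map vector
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False).fit(D, y)
x = omp.coef_                                                      # sparse code, <= 5 non-zeros
print(D.shape, np.count_nonzero(x), np.linalg.norm(y - D @ x))
```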
Ridgelet bases were obtained by choosing different values of the scale, location and orientation parameters in Equation (4). These bases were then used as vectors to be included in the hybrid dictionary built from ridgelet functions. Ridgelet analysis is very effective at representing objects with singularities along lines, and is derived from a concatenation of 1D wavelet transforms (Do & Vetterli, 2003). Singularities are frequently joined together along edges in an image, which in fact inspired our use of ridgelet transforms. Thus, the proposed hybrid dictionary (based on ridgelet basis functions) promised to be the better choice for the construction of an over-complete dictionary, which, in turn, would provide better approximations for sparse representation. After dictionary generation, sparse coefficients were extracted based on the proposed dictionary, pooling was applied to the sparse-coded features, and the pooled values were converted into global features for image classification. Pooling is used by numerous visual recognition methods to combine nearby spatial feature detectors into local and global features and to remove irrelevant data from input images. Pooling offers many advantages for image transformation and provides a more compact representation that is robust to noise. The pooling operation is typically a sum, an average, a max, or some other commutative combination. The max pooling function is defined over the absolute values of the sparse-coded coefficients, Equation (6): z_j = max_{i=1,...,M} |x_ij|, where z_j is the j-th element of the pooling function; x_ij represents the matrix element at the i-th row and j-th column of X, which is the sparse-coded feature matrix; and M is the number of feature maps. Table 1 shows the algorithm used to compute a features matrix based on deep features extracted from the last convolutional layer, which is composed of features maps. These features maps are passed to SC to encode features as sparse coefficients, which are then combined using pooling and further concatenated into the features matrix. After the pooling of sparse coefficients, the features matrix is obtained using all training images and then fed into the classifier for image classification. Classification algorithms are discussed in the next section. Figure 3 shows the proposed method for extracting coded features based on pooling and SC. CNN based on GAP layer technique Traditional convolutional networks use lower-layer convolutions. Features maps in the last layer are converted into vector form and sent to the fully connected layer for the classification task, which employs standard classification algorithms such as softmax logistic regression (Farabet, Couprie, Najman, & Yann, 2013). This structure thus uses convolutional layers for features extraction, and the extracted features are then applied to traditional classifiers. Traditional CNNs can produce overfitting due to the fully connected layer parameters. Different methods are used to avoid overfitting and improve the generalization ability of the CNN model (Jarrett, Kavukcuoglu, Ranzato, & LeCun, 2009). In the present work, we use GAP to minimize the effects of overfitting. The idea behind GAP is to generate a features map for each corresponding category of the classification task. Instead of using the fully connected layer, an averaged vector from each features map is fed into the classifier according to the number of classes (see Figure 4a).
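A minimal sketch of the GAP idea just described, contrasted with flattening for a fully connected head; the 512 × 7 × 7 feature-map shape is an illustrative assumption (it matches VGG16 at a 224 × 224 input):

```python
# Minimal sketch of global average pooling (GAP) on the final feature maps,
# contrasted with flattening for a fully connected head (illustrative only).
import numpy as np

fmap = np.random.default_rng(0).standard_normal((512, 7, 7))  # C x H x W feature maps

gap_vector = fmap.mean(axis=(1, 2))        # GAP: one value per feature map -> 512 features
fc_vector = fmap.reshape(-1)               # flattening for an FC head -> 25088 features

print(gap_vector.shape, fc_vector.shape)   # (512,) vs (25088,)
```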
The advantage of GAP over the use of a fully connected layer is that it uses fewer parameters yet produces a better correspondence between features maps and categories. Another advantage is that there is no need to optimize extra parameters, so it avoids overfitting and improves accuracy rates for both training and testing. Furthermore, GAP sums spatial information and is therefore more robust to spatial translations of the input. Figure 4(b) outlines the GAP and fully connected layers. Support vector machine The support vector machine (SVM) has been widely used in image processing and pattern recognition applications. It uses a hyperplane to separate training data in a multidimensional feature space for a fixed number of classes. Details of SVM classification can be found in (Palaniappan, Sundaraj, & Sundaraj, 2014). The SVM classifier is inherently a binary classifier. It is converted into a multiclass classifier using two strategies, called one-against-one (OAO) and one-against-all (OAA) (Manikandan & Venkataramani, 2009). The OAO technique classifies every pair of classes and uses the most common label for each pixel. The OAA technique classifies each class against the rest and chooses, for each pixel, the label with the largest confidence. This strategy performs better when the number of classes is small, usually less than 10 (Raczko & Zagajewski, 2017). In this paper, we used OAA because a small number of classes (less than five) is involved.
Table 1. Proposed algorithm based on sparse coding using deep CNN features. D is the dictionary size; M is the number of features maps; X is the sparse-coded matrix; Y is the features maps matrix; F_M is the features matrix.
FOR each i = 1 : training data do
FOR each k = 1 : M do
Extract deep features using the CNN last convolutional layer: A^l = pool_t(φ(A^{l−1} * f^l))
Extract features maps from c_i: Y = [f_1, f_2, ..., f_k], where each y_i ∈ R^{n×k} is a feature vector and k is the total number of features maps
Design the dictionary matrix based on the proposed dictionary, where M is the dictionary size: D = [d_1, d_2, ..., d_M], D ∈ R^{n×M}
Design sparse coding based on the dictionary and the input features maps: min ||x||_0 subject to ||y − Dx||_2 ≤ ε
Extract sparse codes into the features coefficient matrix: X = [x_1, x_2, ..., x_M], X ∈ R^{k×M}
U = max |X_ij| ∀ k = 1, ..., M
ENDFOR
F_M = [U_1, U_2, ..., U_i], F_M ∈ R^{i×M}
ENDFOR
Random forest algorithm The random forest (RF) can be used for image classification in remote sensing applications because of its superiority and robustness to noise compared with other classifiers (C. Zhao et al., 2017). RF is based on an ensemble learning technique and was applied in this context by Feng, Liu, and Gong (2015). It requires fewer parameters at run time than other machine learning classifiers (SVM, ANN). The popularity of RF has grown gradually because it achieves equal or higher accuracy than SVM for image classification in the field of remote sensing (Feng et al., 2015). RF is based on an ensemble of many independent classification and regression trees (CART) and can be defined as {g(u, θ_t), t = 1, 2, ..., j}, where g denotes the RF classifier, u is the input feature vector and θ_t is an independently and identically distributed (i.i.d.) random vector used to produce each CART tree. The final RF response is calculated from the outputs of all decision trees. There is less of an overfitting issue because of the individual decision trees in RF.
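As a rough illustration (on synthetic stand-in data, not the study's features) of how the pooled feature matrix could be passed to the classifiers discussed here, the sketch below uses scikit-learn's one-vs-rest SVM and random forest, plus a minimal extreme learning machine of the kind described in the next subsection:

```python
# Rough illustration (synthetic stand-in data): classifying pooled sparse-code
# feature vectors with a one-against-all SVM, a random forest, and a small ELM.
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
F = rng.standard_normal((500, 72))            # stand-in for the pooled feature matrix F_M
y = rng.integers(0, 4, size=500)              # four classes: building, pole, road, tree

Xtr, Xte, ytr, yte = train_test_split(F, y, test_size=0.2, random_state=0)

ova_svm = OneVsRestClassifier(SVC(kernel="rbf", C=10.0)).fit(Xtr, ytr)   # one-against-all SVM
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)

# Minimal ELM: random hidden weights, output weights from the pseudo-inverse.
W = rng.standard_normal((Xtr.shape[1], 100))
b = rng.standard_normal(100)
H = np.tanh(Xtr @ W + b)
beta = np.linalg.pinv(H) @ np.eye(4)[ytr]     # closed-form output weights (one-hot targets)
elm_pred = np.argmax(np.tanh(Xte @ W + b) @ beta, axis=1)

print("SVM (OAA):", accuracy_score(yte, ova_svm.predict(Xte)))
print("RF       :", accuracy_score(yte, rf.predict(Xte)))
print("ELM      :", accuracy_score(yte, elm_pred))
```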
Owing to these advantages, RF has great potential for classifying UAV images in the remote sensing field. Extreme learning machine (ELM) The extreme learning machine (ELM) is a simple, more recent learning algorithm developed on the basis of single-hidden-layer feed-forward neural networks (SLFN) (G. Huang, Huang, Song, & You, 2015). Adjusting the input weights and hidden-layer bases is a very time-consuming process for feed-forward neural networks in general. To minimize or overcome this, the ELM assigns the input weights and hidden biases randomly and determines the output weights analytically. Simulation and results A CNN-based framework achieves higher overall accuracy when compared to handcrafted feature extraction techniques. Most remote sensing scenes contain similar features belonging to different groups, depending on spatial features and spectral intensity. A CNN framework extracts indispensable features from high-spatial-resolution remote sensing images, in which generalized fundamental features are gradually transformed into higher-level data. High-resolution remote sensing applications require high-level data in more robust forms in addition to extremely effective low-level feature representations, which are significant for small-scene datasets. The CNN model thus provides a more robust and persuasive performance with a combination of high- and low-level feature data. We observed superior performance in urban imaging of buildings, roads, trees and power transmission poles, but weaker results for vegetation and trees. Even a fusion of multiple handcrafted features cannot generate results comparable to those of CNN-based methods on UAV high-resolution datasets. For the proposed technique, the machine learning algorithms were evaluated using an 80% training and 20% testing split of the small UAV dataset. This dataset contained four classes (buildings, trees, power transmission poles and roads), and each class comprised 625 samples. Different numbers of training samples per class were used to evaluate the deep features with various classification algorithms. Two training samples per class were used for features extraction and classification (Figure 5). We computed accuracy as the agreement between ground truth and estimated labels, Equation (8): accuracy = (TP + TN)/(TP + TN + FP + FN). Precision and recall are the most commonly used evaluation metrics in pattern recognition and information retrieval assessments. Both depend on the relevance of the classification criteria. Precision is the fraction of retrieved (predicted) elements that are relevant, Equation (9): precision = TP/(TP + FP); it depends on the true positive and false positive values and is calculated from ground truth sample labels versus estimated sample labels. Recall is the fraction of relevant elements that are retrieved, Equation (10): recall = TP/(TP + FN). It is calculated from ground truth labels versus the estimates computed by the proposed algorithm and then compared with results from handcrafted feature algorithms. Figure 5(a) shows the performance accuracy rates. The SC-based features reduction method produced superior results using the RF algorithm compared to the other methods. Figure 5(b) shows that the precision values for the proposed CNN-based method were nearly comparable to the accuracy outcomes. Features extraction from the last fully connected layer, without reduction (CNN-FC), produced better results than the PCA and GAP features reduction techniques, but lower accuracy than SC-based features reduction.
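For reference, the metric computations referenced in Equations (8)–(10) can be reproduced with scikit-learn; the label vectors below are stand-ins, not results from the study:

```python
# Generic illustration of the accuracy/precision/recall computations referenced
# in Equations (8)-(10), using scikit-learn on stand-in label vectors.
from sklearn.metrics import accuracy_score, precision_score, recall_score

ground_truth = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]   # stand-in class labels
predicted    = [0, 1, 2, 2, 0, 1, 3, 3, 0, 1]

print("accuracy :", accuracy_score(ground_truth, predicted))
print("precision:", precision_score(ground_truth, predicted, average="macro"))
print("recall   :", recall_score(ground_truth, predicted, average="macro"))
```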
The dropout layer was used to reduce overfitting in the pre-trained networks and was then applied for features reduction of the CNN features. Dropout can improve CNNs by reducing overfitting (Yu, Xiaomin, Luo, & Ren, 2017) and is widely used in many deep learning applications. The chief idea is to reduce the co-adaptation of hidden units. The dropout operation sets the output of each neuron to zero with a given probability. Using this strategy, the network is forced to learn more robust features and to reduce noise. The drop rate was set at 0.6 or greater, which is higher than the most commonly used value of 0.5. The dropout layer provided less accuracy than the proposed method (see Figure 6(a)). Precision values increased when lowering the number of layers and with increasing numbers of samples. Our SC-based approach also produced high precision values with the RF algorithm. The recall performance showed similar results, nearly equal across methods, with marginally higher values using the PCA design and the RF classifier (see Figure 6(c)). Precision and recall values using the dropout layer also improved when using the SVM and RF algorithms. Accuracy, precision and recall performance metrics were evaluated using pre-trained CNN-VGG16 features. Using pre-trained CNN models with our proposed features reduction techniques, we calculated the computation time (Figure 7). The VGG16 pre-trained network consumed less time compared to the other CNN networks. Furthermore, the GAP layer removed the fully connected layer and required less computing time due to the use of fewer parameters. Due to network complexity, the SC and PCA techniques used comparatively more time, especially when using the ELM algorithm. Thus, the VGG16 network was more effective for real-time application. The proposed SC approach also yielded higher accuracy with pre-trained VGG16 networks (highest results, Figure 8). Pre-trained CNN models achieved similar classification accuracy rates with GoogLeNet and VGG-16, which were comparatively better than AlexNet and CaffeNet. GoogLeNet yielded better accuracy than AlexNet and CaffeNet, but its results were unimpressive compared to VGG-16. The likely reason for this outcome is the high-level semantic scene information captured by the features extracted by the CNN feature learning model. When comparing the number of hidden layers, GoogLeNet and VGG-16 produced better results and have deeper structures than the other pre-trained CNN models, which have eight layers. By increasing the number of layers, the extracted features become deeper and more closely related to high-level semantic scene data, yielding a more descriptive features vector. Hence, GoogLeNet and VGG-16 demonstrated better results for image scene classification of VHR imagery than AlexNet and CaffeNet. Comparisons were also made between all pre-trained networks using (a) the last fully connected layer (CNN-FC) and (b) the dropout layer. Features extracted from the last fully connected layer had dimensions of 1 × 4096 (AlexNet, CaffeNet and VGG16) and 1 × 1024 (GoogLeNet). These deep features were used by various classification algorithms to classify the aerial images. VGG16 provided the highest classification accuracy (94.64%) using the SVM classifier on the proposed UAV dataset. GoogLeNet, using the ELM algorithm, had the second highest classification accuracy rate (Figure 9(a)).
Accuracy rates produced by the proposed SC features reduction technique were superior to those using the fully connected layer of the pre-trained networks. The dropout layer was also used to reduce overfitting and enhance features reduction for all pre-trained networks, but it provided less accuracy for all classifiers (SVM, ELM and RF) (Figure 9(c)). This was likely due to low accuracy from the pre-trained networks on the UAV-based dataset: some classes in the ImageNet dataset used to pre-train the CNN networks differ somewhat from the UAV-based dataset. Figure 8 shows outcomes for different learning rates for the proposed CNN architecture(s) (SC vs. PCA). Average accuracy increased with an increasing learning rate, with the highest achieved at a learning rate of 0.9 using either SC or PCA. Training and testing errors were also evaluated using the proposed and existing features reduction techniques and the pre-trained CNN architecture(s) with FC (St-CNN). The proposed SC-based CNN architecture minimized overfitting on a small dataset by using a reduced set of features, as did the GAP layer method; both CNN-based architectures were intended to reduce overfitting. Training and testing losses using the standard CNN model with a fully connected layer on a small dataset were high (Figure 10). SC- and PCA-based training and testing curves are shown in Figure 10(a,c). Overall, combined comparisons of SC, PCA and GAP training and testing losses with the standard CNN architecture(s) are shown in Figure 10(d). These results demonstrate that SC-CNN produced smaller gaps between training and testing losses, indicating no overfitting on a small UAV dataset. Figure 10 compares training versus testing losses for the proposed and standard CNN-based algorithms. The largest gap between training and testing losses is seen in Figure 10(a) for the standard (St) CNN features. Blue and black lines show training and testing gaps compared to the GAP- and PCA-based feature gaps in Figure 10(b,c). Complete training and testing losses for the proposed versus fully connected features (St-CNN features) are shown in Figure 10(d).

Visualization of CNN filters and the last convolutional layer

The CNN filters used in the experiment for each convolutional layer are shown in Figure 11. The size of each filter was 5 × 5. These filters were used to extract feature maps. A CNN's power lies in its weights and starting point; weights were selected randomly with a standard deviation of 0.001. Weights for the UAV image kernel of the first convolutional layer are shown in Figure 11. The first column represents six feature maps of the convolutional layer, the second column six feature maps of the ReLU layer, the third column the normalization layer of the feature maps, and the last column six features from the first pooling operation. Convolutional, ReLU, normalization and pooling layers of the CNN model are shown in Figure 12. All results are taken from the Tower class of the dataset. Visualization results showed that the deep spatial properties of the CNN feature maps were able to capture prominent features. An intuitive understanding of CNN activations follows from visualizing each layer, achieved by image inversion and reconstruction with the proposed model. Features extracted from convolutional layers were reconstructed as images similar to the original.
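The kind of feature-map inspection described above can be reproduced with a forward hook on a pre-trained network; the sketch below uses VGG16's first convolutional layer and a dummy input as stand-ins for the paper's custom 5 × 5-filter CNN and actual UAV frames.

```python
import torch
import torchvision.models as models

# Grab feature maps from the first convolutional layer via a forward hook;
# the network and layer choice are illustrative assumptions.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
captured = {}

def hook(module, inputs, output):
    captured["maps"] = output.detach()   # shape: (1, 64, 224, 224) for conv1_1

vgg.features[0].register_forward_hook(hook)

with torch.no_grad():
    vgg(torch.randn(1, 3, 224, 224))     # a dummy frame stands in for a UAV image

maps = captured["maps"][0]
print(maps.shape)                        # 64 feature maps, ready to plot or invert
```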
Deeper layers produced blurred images, while features taken from the last fully connected layer could not be inverted to any recognizable image. Reconstructed images contained several similar, meaningful, yet randomly distributed components. The results demonstrated that data from low-level layers were reorganized by the FC layers to recreate further abstracted representations. Reconstruction results of local-region feature maps taken from the Conv1-Conv5 and fully connected layers (Fc6-Fc8) for various classes (Tower, Building, Vegetation, Road) are shown in Figure 13. The receptive field was larger on feature maps from deeper layers for the Tower class (Figure 14). By increasing the number of hidden layers, feature vectors grew more descriptive and carried higher degrees of semantic data. The results showed that deep CNN models play an important role in the representation of deep extracted features.

Comparison with existing methods

According to prior reports, several descriptors have been used for remote sensing image classification (van de Sande, Gevers, & Snoek, 2010), including texture, color image retrieval/classification, and web image retrieval. Our comparisons were based on the BoVW model, color histograms and their variations, as the most effective methods for performance evaluations of low- and mid-level features. Table 2 shows comparative outcomes of the proposed method versus existing handcrafted feature methods. Mid-level descriptors such as BoVW and its variations (Avila, Thome, Cord, Valle, & Araújo, 2013) are used on remote sensing images as mid-level feature descriptors. BoVW uses a codebook to discriminate patches and then computes statistics of each visual word's occurrence in a test dataset. This was state-of-the-art in the computer vision community for years and remains a good workhorse for many image tasks. For BoVW on our original dataset, we used the scale-invariant feature transform (SIFT) (Lowe, 2004) as the feature descriptor. We evaluated the proposed CNN-SC's performance on the NWPU-RESISC45 dataset (Cheng, Han, & Xiaoqiang, 2017). This benchmark contains 31,500 images divided into 45 classes; each class holds 700 (256 × 256) images in the red-green-blue (RGB) spectral range. The spatial resolution varies from 30 to 0.2 m per pixel for most scene classes. This dataset has the largest number of scene classes and total images; rich image variations, large within-class diversity, and high between-class similarity make it robust and challenging. Table 3 shows comparative experimental outcomes using the NWPU-RESISC45 dataset. The proposed algorithm with SC representation and pre-trained VGG16 proved excellent; neither GAP nor PCA achieved better results than the sparse representation. Both methods used the SVM classifier for classification. BoCF's outcome was also slightly below that of CNN-SC (Cheng, Zhenpeng, Yao, Guo, & Wei, 2017). Used by the US Geological Survey, the UC Merced data are an open-source, publicly available aerial image dataset of approximately 30-cm spatial resolution (Yi Yang & Newsam, 2011). It has a total of 2100 (256 × 256) image patches (RGB bands) of various American cities with 21 semantic categories, each with 100 samples per class. Table 4 shows comparative performance outcomes for the proposed CNN-SC algorithm versus existing scene classification algorithms using the UCM dataset.
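For reference, a minimal BoVW baseline along the lines described above (SIFT descriptors quantized against a k-means codebook into per-image histograms) might look as follows; the vocabulary size and normalization are illustrative choices, not the paper's exact settings.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def bovw_histograms(image_paths, n_words=200):
    """Bag of visual words: SIFT descriptors -> k-means codebook -> per-image
    word histograms. A minimal baseline sketch with illustrative parameters."""
    sift = cv2.SIFT_create()
    per_image_desc = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(gray, None)
        per_image_desc.append(desc if desc is not None else np.empty((0, 128)))

    # Build the visual vocabulary from all descriptors
    codebook = KMeans(n_clusters=n_words, n_init=10, random_state=0)
    codebook.fit(np.vstack(per_image_desc))

    histograms = []
    for desc in per_image_desc:
        words = codebook.predict(desc) if len(desc) else np.array([], dtype=int)
        hist, _ = np.histogram(words, bins=np.arange(n_words + 1))
        histograms.append(hist / max(hist.sum(), 1))  # L1-normalize
    return np.array(histograms)
```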
The CNN-SC algorithm yielded results superior to all others, with only the gradient boosting random convolutional network (GBRCN) approaching its accuracy. All pre-trained networks using fully connected layer features provided less accuracy. We performed various cross-validation procedures on the proposed techniques to validate the results. Optimal results were achieved for 5- and 10-fold (k = 5, 10) cross-validation, as shown in Table 5, leaving little room for improvement. Cross-validation with different k values on our own and on existing datasets showed better performance at 10-fold for the proposed approach versus existing techniques for all datasets. Due to limited space, only the best results are shown.

Discussion

The major contribution of this work is its CNN-based design and use of a small remote sensing image dataset with features reduction techniques, in addition to our comparison of results with those obtained by the GAP and PCA techniques that replaced fully connected layers in pre-trained CNN architecture(s). Results were also compared with approaches using pre-trained CNN architecture(s) with fully connected layers. With a small training dataset, the proposed techniques using existing pre-trained architecture(s) can be exploited by remote sensing applications in place of networks fully trained from scratch. Our objective was not to emphasize raw performance but rather to explore the use of pre-trained architecture(s) for fine-tuning, which showed excellent capability compared to networks fully trained from scratch. We evaluated the CNN-based architecture(s) on our own UAV dataset and compared performances with state-of-the-art datasets used by the remote sensing community. The proposed CNN-SC method uses deep features acquired from the last convolutional layer of pre-trained CNN architecture(s) for scene classification of aerial images. Our hybrid dictionary uses SC ridgelet-basis elements and yielded excellent results, with an exceptional ability to express discriminative features for scene classification. The proposed method also reduces the feature space from high-dimensional to low-dimensional. In addition, SC features are concatenated to generate global features that further enhance scene classification. By using pre-trained CNN architecture(s) to reduce feature dimensions, and by employing machine learning algorithms (SVM, ELM, RF) to classify the reduced feature sets, the proposed system avoided overfitting on a small UAV dataset. Existing features reduction techniques (GAP and PCA) were used to minimize overfitting and make pre-trained CNN architecture(s) usable on small datasets. Four different pre-trained CNN architecture(s) (AlexNet, CaffeNet, GoogLeNet and VGG16) were fine-tuned by combining them with the SC, PCA and GAP features reduction techniques. Instead of fine-tuning all layers in all networks, only the last layers were fine-tuned, with random initial weights and hyper-parameters. Training and testing losses were calculated using pre-trained CNN architecture(s) with and without the proposed CNN architecture(s). Deep SC-based CNN architecture(s) outperformed all others by minimizing the gap between training and testing errors. Gaps between training and testing errors were reduced by changing hyper-parameter regularization approaches, such as weight decay or dropout, or by using fewer parameters in the fully connected layers.
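The two regularization knobs mentioned above can be expressed in a few lines; this PyTorch sketch shows dropout in a classifier head and weight decay in the optimizer, with illustrative values rather than the paper's tuned hyper-parameters.

```python
import torch
import torch.nn as nn

# Dropout in the classifier head and weight decay (L2 penalty) in the
# optimizer, the two regularization approaches named above. Values are
# illustrative assumptions.
head = nn.Sequential(
    nn.Linear(4096, 512),
    nn.ReLU(),
    nn.Dropout(p=0.5),       # randomly zeroes activations during training
    nn.Linear(512, 4),       # four UAV classes
)
optimizer = torch.optim.SGD(head.parameters(), lr=0.01, weight_decay=1e-4)
```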
Optimized performance of deep SC-based CNN models can be achieved by using a larger network and by appropriately setting the regularization hyper-parameters in the last layer of the network; however, computational complexity increases with network size. The main goal of this paper is to show that the proposed SC-CNN-based networks produce better performance with pre-trained networks and increase efficiency owing to the small gap between testing and training errors. Features reduction techniques can reduce overfitting in CNN-based models on small datasets. Our SC approach to features reduction minimized overfitting compared to the PCA- and GAP-based algorithms. Pre-trained CNN models without features reduction showed larger gaps between training and testing losses, which likely indicates overfitting of the CNN model on small datasets. However, it is difficult to determine whether a system is underfitting or suffers from other defects when both testing and training errors are high. If the training-testing gap of a fine-tuned network is not within an acceptable range even after improving the regularization hyper-parameters, more data would be required to achieve optimized performance. Thus, we can conclude that training and testing losses are helpful for determining a system's over- or under-fitting status when gauging performance and tuning it toward optimized parameters. When developing machine learning models, researchers routinely work with different datasets and training sample sizes and adjust regularization parameters that affect a deep model's capacity in terms of error rate and computational resources (memory, runtime). They change the hyper-parameters of the training optimization algorithms to improve the optimization techniques for deep CNN architecture(s). As evidenced in the literature, CNN-based automatic feature extraction methods outperform handcrafted methods on aerial remote sensing imagery. Hence, it is more than reasonable to apply deep pre-trained CNN networks as deep feature extractors, as in our proposed and the existing features reduction techniques. The proposed SC-CNN-based deep CNN features achieved high performance compared to existing features reduction methods (GAP, PCA and dropout) on the UAV-based and existing state-of-the-art remote sensing datasets. Results were also compared with low-level and mid-level handcrafted state-of-the-art techniques on the UAV and existing datasets. SC-CNN was also compared with pre-trained CNN networks with fully connected layer(s) (FC-CNN) and demonstrated excellent results for scene classification.

Conclusion

We based this method on pre-trained CNN architecture(s) that incorporate SC for features reduction, specifically to minimize overfitting. Performance outcomes were compared with the PCA- and GAP-based techniques, which also used pre-trained CNN architecture(s) to minimize overfitting. The proposed method used a pooling technique to concatenate SC features extracted from the proposed hybrid dictionary for scene classification. The proposed algorithm was computationally efficient at learning data representations and outperformed current state-of-the-art algorithms in aerial scene classification on both existing datasets and our own UAV dataset. Future work will consider a larger number of hyper-parameters to further minimize overfitting and improve accuracy for remote sensing scene classification with different numbers of samples.
Adoption of Modern Hydrogen Technologies in Rail Transport

Many new zero-emission propulsion technologies are being developed today due to the need to reduce atmospheric carbon dioxide emissions. The impact of the transport sector on the environment drives a need for innovation, including in the rail transport sector specifically. At the TRAKO rail vehicle fair, the newest technological solutions were presented. These new vehicles are expected to take over the rail transport sector in the coming decades. Many of the presented solutions and prototypes focused on using hydrogen as fuel for a system of hydrogen fuel cells, which is then used to produce the electricity needed to drive the vehicle. The development of hydrogen fuel technologies in vehicle drives in recent years has allowed a set of new solutions to appear for all types of rail vehicles and applications. Hydrogen-powered rail vehicles for transporting cargo and passengers, as well as shunting vehicles, have been shown. This article provides a discussion of the newest hydrogen solutions and vehicles brought to market. It was determined that the adoption of such solutions will be restricted mainly by the relative cost of the hydrogen fuel rather than by the vehicles or fuel cell technologies themselves. The cost of hydrogen production, when powered by renewable energy sources to enable reduced carbon dioxide emissions, would need to fall to about $2.50/kg of fuel or less in order to satisfy the requirements for widespread adoption.

INTRODUCTION

Due to the growing demand for ecological transport solutions, many innovative products and new technologies are continuously being introduced. As a result of the development strategy adopted by the EU member states, aimed at building a net-zero-emissions transport network, the current vehicle market, of both road and rail vehicles, is in the first stages of its most significant change since the introduction of the combustion engine. No other propulsion technology since has been planned and implemented as thoroughly and consistently as the new shift towards low- and zero-emission technologies. The main steps being taken to accomplish this change include a complete remodeling of the transport sector, using state-of-the-art technologies, into a transport system network that can be considered as having net zero emissions. This is a huge undertaking, especially considering that many of the technologies expected to outcompete conventional combustion engines are still some development away from being commercially viable. Nevertheless, such goals and efforts are deemed necessary. The EEA report on Air Quality in Europe [Report 2020] states that the share of the urban population in the EU-28 group of countries exposed in 2018 to toxic compounds in concentrations exceeding the European norms reached as high as 15% for larger particles (PM10) and for benzo(a)pyrene (BaP), while for fine particles (PM2.5) and nitrogen oxides (NOx) this share was 4%. This means that a significant portion of the urban population in the EU is constantly exposed to excessive concentrations of highly toxic substances in the air, which can cause cancer, lung disease, heart disease and many other health problems. On a positive note, the mass of toxic substances released into the atmosphere in EU member states has decreased despite continued economic growth, which indicates that the goals set by the EU are feasible and can be reached (Fig. 1).
However, the rate of this decrease in emissions is too slow to allow reaching a net-zero-emissions economy on its own within any reasonable time frame, which underlines the importance of urgent and focused steps to reduce these emissions.

Hydrogen powered rail vehicles

New vehicle drive technologies currently in development tend towards alternative, pro-ecological, zero-emission solutions [Oldknow et al. 2021, Daszkiewicz et al. 2017]. The most common and widespread technology in this category is the electric rail vehicle. These vehicles require the rail lines to be electrified, while the share of electrified railways varies greatly depending on the EU country considered. Many EU member states plan to expand their railway electrification; one such example is Germany, which plans to increase its share of electrified rail lines from 52% to 70% in the coming decades [Railway Report 2020]. Due to the high financial costs of railway electrification, the popular and proposed alternative solutions usually consider propulsion technologies that allow vehicles to operate on non-electrified lines. Most hybrid rail vehicles can be considered as belonging to this category, where operations outside the electrified range are performed using conventional combustion engines. Another possible solution is using electric locomotives equipped with energy storage systems, which allow them to temporarily operate outside the electric traction. This strategy has mostly been adopted by Scandinavian countries, mainly Denmark and Norway. Some American companies have offered solutions using battery-powered electric locomotives as an additional propulsion system in larger conventional freight consists. Such was the intended application of the FLXdrive freight locomotive produced by Wabtec [https://www.wabteccorp.com/locomotive/alternative-fuel-locomotives/flxdrive] which, when included in a larger consist, enables the whole set to utilize the 2400 kWh of energy stored in its batteries, allowing its 3230 kW drive to operate at full power for about 30 minutes (Fig. 2). Innovative drive systems face many barriers to widespread use due to the limited level of …

Combined with an on-board energy storage system with a capacity of 160 kWh, this makes it possible for the vehicle to operate using hydrogen fuel stored in tanks with a combined capacity of 175 kg of hydrogen (Table 1). Such a solution has been proposed for passenger transport in the form of Alstom's Coradia iLint (Fig. 6). This two-unit railcar is reported to be able to travel over 1000 km at a maximum travel speed of 140 km/h, where each of the vehicle units is equipped with a hydrogen storage tank system with a capacity of 94 kg of hydrogen fuel. The hydrogen is used to power the fuel cells of the vehicle, which have a power of 314 kW (Table 2). For freight locomotives, a corresponding hydrogen solution is still in the early development stages, since it would require a much greater power output than passenger or shunting vehicles.
One of the solutions currently in development is a new hydrogen freight locomotive, which is to be a repowered version of an existing freight locomotive used by Canadian Pacific. This is to be achieved by equipping the locomotive with 6 hydrogen fuel cell systems produced by the company Ballard, resulting in a total power output of 1200 kW. The make and model of the combustion engine locomotive that is to serve as the basis for this repowering process …

Hydrogen fuel cell technologies

One of the primary issues arising from the use of hydrogen fuel cells in vehicles is the limited continuous power they provide. The displacement of conventional combustion engines with fuel cell stacks resulted in a decrease in vehicle power in each of the considered cases (Table 3). In the case of the Coradia iLint vehicle, its power generation using fuel cells is only 54% of the vehicle power of the combustion engine variant (Coradia Lint). Similarly, for the previously mentioned planned repowering of a locomotive using a set of Ballard fuel cells, its power after repowering was expected to reach about 54% of the lowest-powered combustion engine variant. The solution offered by PESA, in the form of the hydrogen-powered shunting locomotive SM42 6Dn, is to have only 21% of the original power value of the base conventional locomotive, the SM42 6Dk. Such variations in the power provided to the drive system indicate significantly different operating properties, characteristics, performance and capabilities of these new hydrogen vehicles compared to their conventional counterparts. This means that the range of real utility provided by these new vehicles will be much narrower than for the older, existing solutions. Many hydrogen fuel cell technologies exist, such as phosphoric acid fuel cells, solid acid fuel cells, alkaline fuel cells, solid oxide fuel cells, molten carbonate fuel cells, and proton exchange membrane fuel cells. In the automotive industry, the proton exchange membrane variant (abbreviated PEMFC) is the most commonly used. These fuel cells operate in the temperature range of 70-150 °C and at pressures ranging from 0.1 to 1.0 MPa. The energy generated by the reactions taking place in the fuel cells, which is then sent to the on-board batteries, can be described using the equation

E = m · H · η_FC · η_BT · η_CV

where m is the hydrogen mass, H is the hydrogen combustion energy (33.2 kWh/kg), η_FC is the fuel cell efficiency, η_BT the battery efficiency, and η_CV the alternator efficiency. In such a solution, the efficiency of the fuel cell η_FC is described as being proportional to the voltage V of the whole fuel cell stack. The principle of operation of PEMFC-type hydrogen cells includes the dissociation of hydrogen (H₂) into an electron and a proton as a hydrogen ion (H⁺), where the electron is sent away into the circuit. The hydrogen ions (being just protons) then pass through the proton exchange membrane to reach the other side of the fuel cell, while their electrons (e⁻) take the long way around through the electric circuit, where their flow is used to generate energy. The hydrogen ions reach the cathode, where oxygen atoms dissociated from the incoming oxygen molecules (O₂) combine with them and with the electrons returning from the circuit. Finally, the resulting molecules of water (H₂O) are created as a product of the cell's operation and released outside the system. This process is shown in simplified graphical form in Figure 7 (principle of operation of a PEMFC-type fuel cell).
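As a worked example of the energy equation above, the following sketch computes the energy reaching the on-board batteries for a given hydrogen mass; the efficiency values are illustrative assumptions, not manufacturer data.

```python
def battery_energy_kwh(hydrogen_kg, eta_fc=0.5, eta_bt=0.9, eta_cv=0.95,
                       h_kwh_per_kg=33.2):
    """Energy reaching the on-board batteries: E = m * H * eta_FC * eta_BT * eta_CV.
    The efficiency values here are illustrative assumptions."""
    return hydrogen_kg * h_kwh_per_kg * eta_fc * eta_bt * eta_cv

# Coradia iLint example: two tank systems of 94 kg each (Table 2)
print(f"{battery_energy_kwh(2 * 94):.0f} kWh usable from a full fill")
```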
Aside from useful power, the operation of fuel cells generates losses in the form of heat. Even though the operating temperature of such fuel cells is around 70 °C, the waste heat they generate still needs to be drained and dumped in order to prevent overheating. This heat can, however, be used to heat the vehicle and its temperature-sensitive elements, or even be recovered through the use of thermoelectric generators and turned back into electricity. There are numerous further improvements and solutions that could synergize with the use of hydrogen fuel cells, but due to the novelty of this technology most of them still require further development. The efficiency of fuel cells depends on their type. Currently, the most efficient fuel cell type is the alkaline fuel cell (reaching electric energy generation efficiencies of even 60%). It is, however, the PEMFC-type fuel cells that are most popular in transport solutions; despite their lower efficiency, they are characterized by smaller size and lower mass compared to other solutions. Due to the importance of reducing the size and mass of systems and devices in vehicles, PEMFC-type cells have won out over alkaline fuel cells in such applications.

Development and further applications of hydrogen fuel cells

The increase in the prevalence of new fuel cell technologies in transport results directly from the decrease in their price, along with their rising reliability. The cost per kilowatt of power for the newest iterations of fuel cells represents a 30% reduction from their 2006 price point (Fig. 8). Lower prices and greater availability, compounded by political efforts towards cleaner transport, have led to the increased competitiveness of fuel cells. Considering the current EU net-zero-emissions targets for 2050 and the European Green Deal [https://eur-lex.europa.eu/legal-content/PL/TXT/PDF/?uri=CELEX:52020DC0098&from=EN], as well as many other pro-ecological pursuits of OECD countries aiming to reduce CO₂ emissions to the atmosphere, it can be expected that hydrogen fuel cells will continue to grow in popularity and applications. Current plans for developing and implementing hydrogen fuel cells prepared by the FCH (Fuel Cells and Hydrogen) initiative consider a passive and an active adoption variant (Fig. 9). According to the ambitious development plans for the implementation of fuel cell technologies for the year 2030, the amount of energy generated using fuel cells is expected to reach 65 TWh in the energy sector and 70 TWh in the transport sector. A further ambitious step would be achieving 112 TWh of energy produced by hydrogen fuel cells in the energy sector, 579 TWh produced for heating buildings and up to 675 TWh in the transport sector. The total amount of energy produced using fuel cells was postulated to increase from 325 TWh in 2015 to 665 TWh in 2030, whereas in 2050 hydrogen fuel cell energy would be expected to reach a total of 2251 TWh. Due to the energy losses and limited efficiency in the production of hydrogen fuel, the only justifiable path to using hydrogen to produce electric energy would be to use hydrogen as a form of energy storage.
Such a strategy would involve producing hydrogen fuel using the excess electric energy available from renewable energy sources during their peak production time [https://ec.europa.eu/energy/sites/ener/files/hydrogen_strategy.pdf]; the hydrogen would then be used at times of increased electric energy demand, which typically occur in the afternoon hours, well past the peak production time for renewables such as solar. As a result of these factors, actions and initiatives taken to develop and implement hydrogen fuel cells have been limited in scope by the availability of cheap renewable energy. From an economic standpoint, such a solution is only viable when the cost of energy generation using renewable sources is low enough. The International Energy Agency (IEA) has produced a map for this purpose (Fig. 10), which shows the long-term cost of hydrogen fuel production from electric energy generated using renewable energy sources. The current level of hydrogen production is mostly a side effect of natural gas extraction. Other methods of producing hydrogen include biomass gasification. Ultimately, however, electrolysis powered with wind and solar energy is planned to be the end solution for long-term hydrogen production [Kakoulaki 2021]. Currently the cost of hydrogen production varies greatly, between $2.50/kg and $6.80/kg, according to the US Department of Energy [https://www.hydrogen.energy.gov/pdfs/20004-cost-electrolytichydrogen-production.pdf]. Yet S&P claims that this price needs to fall consistently into the range of $2.00-$2.50/kg in order for hydrogen to become a viable and competitive alternative to fossil fuels [https://www.spglobal.com/platts/en/market-insights/latest-news/electric-power/112020-greenhydrogen-costs-need-to-fall-over-50-to-be-viable-sampp-global-ratings]. This means that the future development and adoption of hydrogen-powered solutions, in both the rail and automotive sectors, will be intricately linked to and dependent on the growth and development of renewable energy production. As of today, production of hydrogen fuel could be justifiable at most in countries close to the equator, as well as in Australia and South America.
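To relate the quoted per-kilogram prices to traction energy, a rough sketch: dividing the hydrogen price by the usable energy per kilogram (33.2 kWh/kg times an assumed overall conversion efficiency) gives a fuel cost per delivered kWh. The efficiency figure below is an assumption for illustration only.

```python
def hydrogen_cost_per_kwh(price_per_kg, eta_total=0.43, h_kwh_per_kg=33.2):
    """Fuel cost per kWh delivered to the drive, given a hydrogen price and an
    assumed overall fuel-cell-to-drive efficiency (illustrative value)."""
    return price_per_kg / (h_kwh_per_kg * eta_total)

for price in (2.50, 6.80):  # the DOE production-cost range cited above
    print(f"${price:.2f}/kg -> ${hydrogen_cost_per_kwh(price):.3f}/kWh delivered")
```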
CONCLUSIONS

Thanks to the quick development of hydrogen fuel cell technology, as well as the pressure brought on by legislation aimed at reducing atmospheric carbon dioxide emissions, new hydrogen solutions and technologies are appearing in modern designs, including in the rail sector. Until recently, most such innovation was reserved for railbuses and other light passenger rail vehicles, which did not require a large amount of power to be provided by the fuel cells. Even so, in recent years new solutions have been presented for the use of hydrogen fuel in shunting and freight locomotives, expanding the reach and capabilities of hydrogen-based drive systems. Despite the lower power output provided by the fuel cells in these new drive systems, the manufacturers claim that the vehicles have sufficient power generation for their designated operations. However, despite the variety of applications shown, the widespread adoption of fuel cells in rail transport is still limited by the prohibitive costs of hydrogen fuel production and the high cost of energy from renewable sources. It is possible to temporarily patch up the hydrogen supply-demand gap using hydrogen obtained from natural gas extraction or produced from biomass; however, these solutions are not sustainable long term and do not scale up sufficiently. Such hydrogen sources can at most serve as an intermediary, since they do not fit into the EU strategy regarding net-zero CO₂ emissions by 2050. In order to fully exploit the possibilities offered by PEMFC-type hydrogen fuel cell drive systems, a sufficient reduction in the hydrogen fuel cost is also necessary. Given the environmental conditions in most of the EU member states and the current level of technology regarding renewable energy sources and hydrogen production by electrolysis, a natural market shift towards more competitive hydrogen-powered transport supplanting conventional fossil fuels is not possible. Thus, further development of these technologies and solutions will be highly dependent on legal pressure and the determined political effort of the EU and its member states, mostly in the form of subsidies and tax reductions for the implementation of hydrogen technologies. The final obstacle to hydrogen fuel achieving economic viability is the production cost, which is tied to the cost of electricity, especially with respect to the cost of conventional fossil fuels. Considering the above, it should be possible to artificially raise the break-even point of the estimated hydrogen fuel price from $2.50/kg of hydrogen to a higher value through the use of subsidies for hydrogen production as well as additional taxes or tariffs on fossil fuels, along with further CO₂ emission limits.
Stabilization and visual analysis of video-recorded sailing sessions

One common way to aid coaching and improve athletes' performance is to record training sessions for later analysis. In the case of sailing, coaches record videos from another boat, but usually rely on handheld devices, which may lead to issues with the footage and missing important moments. On the other hand, by autonomously recording the entire session with a fixed camera, the analysis becomes challenging owing to the length of the video and possible stabilization issues. In this work, we aim to facilitate the analysis of such full-session videos by automatically extracting maneuvers and providing a visualization framework to readily locate interesting moments. Moreover, we address issues related to image stability. Finally, an evaluation of the framework points to the benefits of video stabilization in this scenario and an adequate accuracy of the maneuver detection method.

Introduction

To assess athletes' performance during training sessions, technology and data analysis are assuming an increasingly larger role in many sports. Nevertheless, for sailing, and more specifically the Olympic dinghy class, such strategies are still not common practice during training. This is mainly due to the lack of clear strategies for analyzing data obtained from sensors attached to the boats. Therefore, coaches rely mainly on videos to review athletes' performance. Training sessions usually last 2-3 h, during which the coach follows the training boat in a rigid inflatable boat (RIB). Two maneuvers, known as tacking and jibing, are the core of the sailors' strategy to reach the finish line as quickly as possible. The maneuvers allow the boat to advance in a zig-zag manner because it cannot sail straight into the wind. Tacking occurs when a boat turns its bow (front part) against the wind and then keeps turning "through the wind" to catch the wind on the other side of the sail, whereas jibing occurs in the opposite direction. When performing such maneuvers, sailors usually move from one side of the boat to the other (they switch sides). When and how maneuvers are performed is critical in a race, as every second counts. Hence, they are an important skill to improve and master during training sessions. Coaches therefore record short video clips of the maneuvers, on the order of 1 min, to discuss them during a debriefing session with the sailors. However, there is no structured or standard way to register these moments, and most coaches rely on handheld devices for recording, such as their personal smartphones. Because of this, many maneuvers that could be valuable for a better analysis of the sailor's performance are missed. To avoid the burden of manual capture, coaches have recently switched to cameras that are mounted on their boats and record an entire training session. Despite the clear advantages of relieving the coaches from the recording task and capturing important moments, going through the entire video after a training session (or multiple training sessions) is too time-consuming. In addition, with handheld devices, the person registering the footage naturally applies manual stabilization by compensating for the boat's movement while keeping the subject of interest in focus. This stabilizing effect is lost when the camera is mounted on the trailing boat.
In this work, we address the above-mentioned issues associated with a mounted camera for recording sailing training sessions. Our main contribution is a pipeline that allows a less time-consuming visual analysis of videos recorded over entire training sessions. It includes stabilization based on the horizon line, extraction of maneuvers from the recorded footage, and highlighting of potentially interesting segments in time, which can in turn be explored and annotated using a visual interface developed in support of this study. This article is an invited extension of a formerly published conference paper [1] with special focus on various aspects; we have extended the conference paper by adding the stabilization process and its evaluation.

Related work

Video visualization may assist users in their analysis by removing mechanical tasks, such as viewing the entire footage [2]. Note that the goal is not to provide fully automatic decision-making solutions from video data. Visual tasks related to video visualization are as follows [3]: video annotation, browsing, editing, navigation, recommendation, retrieval, and summarization. In our method, we focus on three of these tasks: annotation, navigation, and summarization. The main goal is to explore interesting events captured on video. Higuchi et al. [4] proposed a visual analysis tool designed to analyze long sequences with multiple events; however, sailing sessions differ in composition and content from other event-driven activities, and we focus on only one complex activity that requires a deeper analysis. A survey by Barris and Button [5] indicated that there is already extensive literature on vision-based analysis related to sports. One common goal is to provide feedback to athletes, as described in ref. [6]. This is also an important aspect with respect to sailing, as videos are used to debrief sailors by reviewing their training session(s) and providing feedback on possible improvements. Moreover, videos in sports visual analytics may be used passively or actively. In the first case, the video is used to complement other types of data. For instance, Polk et al. [7] used video to reinforce learning outcomes after analyzing tabular data; as one coach noted during their evaluation, "seeing is believing." In the second case, the video is actively used and is considered the main data input. For example, Legg et al. [8] developed a visual analytics tool for annotating multiple keyframes using glyph techniques. In this study, we examined the importance of video stabilization in the analysis of sailing videos. Video stabilization is a mature area of research, and numerous video stabilization approaches are currently available [9]. Without using external information, such as inertial sensors, one way to stabilize the video is by using visual cues. In nautical environments, the horizon line is commonly used as a reference for video stabilization [10-12]. These approaches aim to detect the horizon line in the video and transform the video frame to align it horizontally. Other approaches intended to accomplish the same purpose involve separating the image into two regions using machine learning methods [13] or applying pixel-wise segmentation with a fully convolutional network [14]. Another class of methods uses the detection of features around the horizon [15], corner points using an adaptive Harris algorithm [16], or hybrid feature- and dense-based methods [17].
Our approach focuses on the detection of the horizon line using image-processing techniques.

Methods

To find, extract, and visualize sailing maneuvers, a number of steps are necessary, hereinafter referred to as the sailing maneuvers analysis pipeline, as illustrated in Fig. 1. These steps are video stabilization, detection and tracking, maneuver detection, and visual analysis. As mentioned previously, sailors switch sides on the boat during a maneuver. This action can serve as an important visual cue to detect maneuvers from video images. We assume that the video is registered from behind the training boat with the camera facing forward on the chase boat (the RIB). The first step in our pipeline is to stabilize the video in order to compensate for the RIB's motion on the water. Then, the positions of the boat and sailors are tracked, and this information is used to detect maneuvers. Finally, we provide a visualization screen to facilitate the inspection of any detected maneuvers. The goal is to enable efficient navigation in the after-training session and allow annotating the most relevant maneuvers.

Stabilization

For coaching purposes, we consider a stabilized video easier to analyze than its non-stabilized counterpart. In addition, the subsequent stage in the workflow, detecting and tracking the sailors and sailing boat, should benefit from the stabilization as well: minimizing the motion of a camera capturing the target objects should allow more stable tracking. Because we only have video data available as input, we must rely on the frames themselves to compute the transformations that compensate for the motion of the camera, which is mainly induced by waves and steering changes. A promising visual cue for deriving the induced motion is the horizon line. We assume that the horizon line is always distinguishable in a maritime environment. In addition, we suppose that the horizon line is a long straight line that is not affected by the camera's lens deformation. After locating the horizon line, the frame is transformed using the angle formed by that line with a straight horizontal line, and is thereby stabilized. The first steps of the stabilization process consist of denoising the frames using a median blur filter, followed by edge detection using the Canny edge detector [18]. The main purpose of denoising is to eliminate spurious edges caused by waves and, consequently, facilitate horizon detection. The differences between not applying and applying denoising are depicted in Fig. 2. Detecting the horizon line from the edge image is performed in three steps: dilation, detection, and selection. The detected edges are dilated to increase the probability of detecting the horizon; we empirically found that a 7 × 7 kernel resulted in stable detection. Dilation helps to generate more prominent continuous lines that are easier to select by the voting system used in the horizon-line detection algorithm. The next step consists of detecting and extracting candidate line segments in the dilated image using the Hough transform. To avoid the high computational cost of the original Hough transform, we use the progressive probabilistic Hough transform (PPHT) algorithm [19], a computationally less expensive variant (Fig. 3). The settings used for this algorithm were those from the original paper: θ = 0.01 and ρ = 1. The minimum line segment length was 100 pixels, and a voting threshold of 150 votes was used during the experiments to avoid selecting short lines caused by water reflections; this parameter is resolution-dependent and changes linearly with the resolution. The line detector produces a set of candidate lines per frame, of which the longest horizontal segment is considered to be the visible horizon. That is, the candidate line segment with the largest absolute horizontal difference (in pixels) between its endpoints is selected as the horizon line.
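A minimal OpenCV sketch of this detection chain (median blur, Canny edges, 7 × 7 dilation, probabilistic Hough, longest-horizontal-segment selection) might look as follows; the Canny thresholds and maximum line gap are our assumptions, while the remaining parameters follow the text.

```python
import cv2
import numpy as np

def detect_horizon(frame_bgr):
    """Return a horizon-line candidate (x1, y1, x2, y2), or None.

    Follows the pipeline above; thresholds are tuned for 1280x720 footage
    and should be treated as starting points."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    denoised = cv2.medianBlur(gray, 5)                 # suppress wave edges
    edges = cv2.Canny(denoised, 50, 150)               # thresholds: assumption
    dilated = cv2.dilate(edges, np.ones((7, 7), np.uint8))
    segments = cv2.HoughLinesP(dilated, rho=1, theta=0.01,
                               threshold=150, minLineLength=100, maxLineGap=10)
    if segments is None:
        return None
    # Pick the segment with the largest absolute horizontal extent
    return max(segments[:, 0, :], key=lambda s: abs(s[2] - s[0]))
```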
The transformations for every frame are then calculated using the previously detected horizon line. The translations and rotations that map the selected segment to its target, a horizontal straight horizon line, are computed and stacked in a data structure. This data structure is then filtered using a moving average filter to reduce jitter caused by small per-frame differences in rotation and translation, and the transformations are then applied to the corresponding original frames. An example of a source video frame and one transformed using the presented stabilization method is shown in Fig. 4.
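The per-frame correction can be sketched as below: smooth the per-frame horizon angles with a moving average and rotate each frame accordingly. The window size is an illustrative choice; the paper does not state one.

```python
import cv2
import numpy as np

def stabilize(frames, angles_deg, window=15):
    """Rotate each frame by its smoothed horizon angle.

    `angles_deg` holds the per-frame angle of the detected horizon; a moving
    average suppresses jitter, as in the pipeline above. The window size is
    an illustrative assumption."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(angles_deg, kernel, mode="same")
    out = []
    for frame, angle in zip(frames, smoothed):
        h, w = frame.shape[:2]
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        out.append(cv2.warpAffine(frame, m, (w, h)))  # borders become black patches
    return out
```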
Detecting and tracking the boat and sailors

A maneuver is detected by analyzing the location of the sailor(s) with respect to the boat. To detect boats and sailors, we use a pre-trained neural network for object detection; more specifically, MobileNet [20] was selected, given its low computational cost. The pre-training was executed on the Microsoft Common Objects in Context dataset [21], which already contains images for the classes Person and Boat. For each video frame, we expect the network to provide bounding boxes for the located boat and sailors. Nevertheless, we noted that it fails to detect them in many frames, especially the sailors. We therefore combined the detection network with a tracking method, the discriminative correlation filter with channel and spatial reliability [22]. The tracking algorithm is initialized with a bounding box found in one of the frames by the detection network. However, the tracking algorithm is also not perfect and tends to drift over time due to accumulated errors. To overcome this issue, the bounding boxes from the two methods are compared, and when the difference exceeds a given threshold, the tracker is re-initialized. In our experiments, we used a threshold of 25 pixels for 1280 × 720 footage. If the network fails to detect the boat or sailor, we rely on the tracker estimation for those frames. Finally, if both methods fail, we assume that the objects are either too far away or outside the frame. During the experiments, we disabled the tracker whenever the network did not detect a boat or person for more than 60 sequential frames, which empirically avoided most cases of unreliable location data. The combined network and tracking method increased detection accuracy on stabilized videos by 10%-15%.

Maneuver detection

As mentioned above, we use the action of sailors switching sides to detect maneuvers, relying on the bounding boxes of the boat and sailors to identify this action. A signed distance d between the centers of the boat's and the sailors' bounding boxes is computed. Note that only the horizontal difference is considered because we assume that the video is registered from behind the boat. Even if the boat is tilted to one side, the sailors should still be far enough from the middle of the boat in the video. For the dinghy class this is especially true, because sailors lean out over the side of the boat (hike) to keep the boat as vertical as possible. Even though the signed distance d contains noise present in the frames, the crossing moment is usually clear under visual inspection, as illustrated in Fig. 5(a). Because we assume that the video is registered from behind the boat, and the boat's bounding box encloses the whole boat instance, the middle of the bounding box represents the middle of the boat. A maneuver then unfolds in three phases (Fig. 5(a)): first, the moment when the sailors start to move to the other side; then the actual crossing; finally, the maneuver is concluded when the sailors remain on the other side for a sufficient amount of time. In some cases, noise can nevertheless prevent the stable detection of these moments. A possible approach to remove noise is a sliding smoothing window, in which case the window size and smoothing parameter σ must be defined. Figure 5(b) illustrates this method with σ = 140: even though the noise is removed, the detected crossing moment is offset by more than 50 frames from the real crossing moment. To avoid manually defining σ, we instead resort to an adaptation of a scale space method called edge focusing [23], which allows a more robust localization of the crossing moment, as shown in Fig. 5(c). In our adaptation, we first compute a Laplacian of Gaussian (LoG) filter with σ in the range [e^b, e^a], a = 5, b = 0, using a step size of 0.005, as suggested by ter Haar Romeny [24]. Using the LoG filter for all σ values in this range, the zero-crossing frames are stored as 'signatures,' as depicted in Fig. 5(c). The small step size guarantees that negative and positive edges do not last longer than one frame. The positive edge can then be tracked in a coarse-to-fine manner to determine the precise crossing moment, as indicated by the orange arrows in Fig. 5(c). Finally, we need to determine whether the sailors remain on the other side long enough to consider it an actual maneuver. For this purpose, we fit a regression line [25] to the location data to remove noise from the detection. More specifically, we fit the line to a window that represents the average time to perform half a maneuver; for 30 Hz videos, this implies approximately 90 sequential frames. When the slope of the regression line is below 0.15 rad, we judge the location of the sailors to be stable. Likewise, when the slope of the line increases, we assume that the sailors are starting a maneuver; the slope gradually decreases once they have crossed to the other side, marking the end of the maneuver. We consider the last stable frame before and the first stable frame after the crossing as the maneuver interval.
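A simplified sketch of this interval logic, using the regression-slope stability test around sign changes of the signed distance d, is shown below; the edge-focusing refinement is omitted here for brevity, and the thresholds follow the text.

```python
import numpy as np

def stable_segments(d, window=90, slope_limit=0.15):
    """Flag frames where the sailor-boat signed distance `d` is stable: fit a
    line over ~half a maneuver (90 frames at 30 Hz) and compare its slope to
    the 0.15 threshold from the text."""
    stable = np.zeros(len(d), dtype=bool)
    for i in range(len(d) - window):
        slope, _ = np.polyfit(np.arange(window), d[i:i + window], deg=1)
        stable[i] = abs(slope) < slope_limit
    return stable

def maneuver_intervals(d):
    """Maneuver = last stable frame before a sign change of d to the first
    stable frame after it. A simplified sketch of the interval logic."""
    d = np.asarray(d, dtype=float)
    stable = stable_segments(d)
    crossings = np.where(np.diff(np.sign(d)) != 0)[0]
    intervals = []
    for c in crossings:
        before = np.where(stable[:c])[0]
        after = np.where(stable[c:])[0] + c
        if len(before) and len(after):
            intervals.append((before[-1], after[0]))
    return intervals
```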
Visual analysis interface

The last stage of our pipeline is a visual analysis interface that enables exploration of a registered session containing the detected maneuvers, as illustrated in Fig. 6. The user has the option to watch the original or the stabilized video, depending on his or her preference. The timeline at the bottom highlights the detected maneuver time intervals in blue and the currently selected maneuver in orange. Alternatively, it is also possible to browse maneuvers using the thumbnails on the right side, where each one is labeled with timestamps marking the beginning and end of the interval. Both navigation options are linked; hence, selecting an interval in the timeline highlights the thumbnail and vice-versa. We also provide a few tools that may aid a debriefing session. Users can include and remove intervals, mark intervals as important, and annotate selected intervals using the text box on the left-hand side of the screen. While the timeline with contrasting colors gives a good overview of the potentially interesting events during the session, the thumbnails give an indication of what is occurring during a particular time interval. This facilitates the selection of interesting intervals and the removal of false positives generated by the automatic detection method.

Results and discussion

In this section, we explain the evaluation process and its outcomes. To evaluate the developed framework, a user study was conducted with seven Olympic-level sailing coaches. We divided the evaluation into several modules: video stabilization, maneuver detection, and the general application.

Stabilization evaluation

To evaluate the stabilization, we used both qualitative and quantitative measures based on defined metrics. To objectively quantify the difference between the original video and the result of our stabilization method, we used the peak signal-to-noise ratio (PSNR). The assumption behind this metric is that if the transitions between consecutive frames are smooth, the similarity between consecutive frames is higher. For consecutive frames, the PSNR in decibels is defined as

PSNR = 10 · log10(MAX_I² / MSE),

where MAX_I is the maximum intensity that a pixel can have in image I. The mean-squared error (MSE) for consecutive frames I_k and I_{k+1} with dimensions N × M is defined as

MSE = (1 / (N · M)) · Σ_i Σ_j [I_k(i, j) − I_{k+1}(i, j)]².

Figure 7(a) indicates that there is little difference between the source and its stabilized counterpart. However, this is mostly caused by the black patches that fill the frame after the transformations are applied in the stabilization method. If the same video is cropped to remove the black patches, as shown in Fig. 7(b), it is clearly apparent that the PSNR of the stabilized video is higher than that of its non-stabilized counterpart. These PSNR graphs can be summarized into a single value by averaging the PSNR over consecutive frames; the result is called the interframe transformation fidelity (ITF). Experimental results using ITF on four videos taken under different conditions are shown in Table 1.
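The PSNR and ITF computations reduce to a few lines of NumPy, as sketched below for frames stored as arrays.

```python
import numpy as np

def psnr(frame_a, frame_b, max_i=255.0):
    """PSNR between consecutive frames: 10 * log10(MAX_I^2 / MSE)."""
    diff = frame_a.astype(np.float64) - frame_b.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_i ** 2 / mse)

def itf(frames):
    """Interframe transformation fidelity: mean PSNR over consecutive pairs."""
    return np.mean([psnr(a, b) for a, b in zip(frames[:-1], frames[1:])])
```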
The results from the quality measures suggest that the stabilized videos are slightly better, and we assume that these should be easier to analyze by a coach. To evaluate this assumption, seven coaches were shown three pairs of videos, each pair consisting of the original video and its stabilized version. Each example was recorded under different weather conditions: cloudy with medium waves, overcast with large waves, and sunny with relatively calm seas. For each pair of videos, where video A was the original and video B the stabilized version, we asked the following question: "Which video (A or B) is easier to analyze? (1) Strong preference for video A; (2) Maybe video A; (3) Both videos are equally easy/difficult; (4) Maybe video B; (5) Strong preference for video B." Each coach voted on the three pairs of videos using this 5-point Likert scale. The results are shown in Fig. 8. Based on these results, we can conclude that the stabilized version was preferred, as 10 out of 21 votes (blue background) were in favor of the stabilized videos, while the non-stabilized version was preferred only six times (yellow background). Motivations given for choices in favor of the stabilized versions were that it makes looking at the details easier and that "the movements of the video are caused by the RIB, which are totally irrelevant." The coaches who were strongly against the stabilized videos stated that the moving edges were too distracting. This problem could be addressed by cropping the video so that no black patches can be seen, or by using in-painting methods to fill in the missing regions.

Maneuver detection evaluation

For the evaluation, we selected three videos in which our assumptions were mostly satisfied (the sailing boat was followed from behind and was in full view of the camera), and removed the situations in which these assumptions did not hold. Moreover, we selected videos in which sailing occurred under normal weather conditions. We manually tagged 27 maneuvers to assess our automatic method. The average sensitivity (the fraction of actual maneuvers that were detected) was 72.72%, and three maneuvers were not detected. On the other hand, when the moments where the assumptions did not hold were not removed, the number of false positives increased considerably (from 4 to 22). These false positives can be manually discarded via the visual interface. The adapted edge-focusing method predicted the exact crossing frame for 20 out of 27 maneuvers. The median offset was 26 frames, with an average of 45 frames. In practice, considering a 30 Hz video, the crossing moment would be off by about 1 s, which is negligible in our application. Although more tests need to be conducted using larger amounts of data, this evaluation indicates the potential practical utility of our method.

Visual analysis framework evaluation

We conducted a user study involving seven coaches to evaluate the visual analysis framework developed for this study. The purpose of the study was to determine whether this tool is considered useful for coaching. The questionnaire consisted of the questions listed in Table 2. In response to Q1, three coaches pointed out that all aspects of the framework are potentially useful; others noted that the annotation and marking features were particularly interesting. In response to Q2, two coaches noted that they would like an easy way to share and store the clips. Furthermore, being able to draw annotations on videos and to label the clips were also considered desirable future features. One coach specifically asked for a zoom feature because "for the relatively fast sailing boats, it is difficult to stay close during training." Although these functionalities were outside the scope of this study, they can easily be added in the future. The responses to Q3 made us realize that the thumbnails did not give a clear indication of the content. As one coach stated, "everything looks the same in the thumbnails and therefore it is not useful." Another coach noted that "you need a way to label/name the thumbnails to be able to distinguish between them." The answers to Q4 led us to conclude that most coaches would likely use our framework in coaching, as shown in Fig. 9. Coaches who gave low scores did so because of features not covered in this study, such as a zoom feature.
Based on their responses to Q5, the coaches rated the usefulness of the timeline feature at 7.43 out of 10 on average.
5,525.8
2021-10-19T00:00:00.000
[ "Computer Science" ]
Effects of Iodine Doping on Electrical Characteristics of Solution-Processed Copper Oxide Thin-Film Transistors In order to implement oxide semiconductor-based complementary circuits, the improvement of the electrical properties of p-type oxide semiconductors and the performance of p-type oxide TFTs is certainly required. In this study, we report the effects of iodine doping on the structural and electrical characteristics of copper oxide (CuO) semiconductor films and the TFT performance. The CuO semiconductor films were fabricated using copper(II) acetate hydrate as a precursor to solution processing, and iodine doping was performed using vapor sublimated from solid iodine. Doped iodine penetrated the CuO film through grain boundaries, thereby inducing tensile stress in the film and increasing the film’s thickness. Iodine doping contributed to the improvement of the electrical properties of the solution-processed CuO semiconductor including increases in Hall mobility and hole-carrier concentration and a decrease in electrical resistivity. The CuO TFTs exhibited a conduction channel formation by holes, that is, p-type operation characteristics, and the TFT performance improved after iodine doping. Iodine doping was also found to be effective in reducing the counterclockwise hysteresis in the transfer characteristics of CuO TFTs. These results are explained by physicochemical reactions in which iodine replaces oxygen vacancies and oxygen atoms through the formation of iodide anions in CuO. Introduction Oxide semiconductors have been used as active channel materials in thin-film transistors (TFTs) owing to their excellent charge carrier mobility, high optical transmittance in the visible range, excellent chemical stability, and versatility in processing. Therefore, oxide semiconductor-based TFTs have been used in various electronic applications such as electronic memory devices, chemical sensors, and active matrix displays [1][2][3]. To date, oxide TFTs mostly exhibit n-channel operation behavior because n-type semiconductors, such as zinc oxide, indium oxide, indium zinc oxide, and indium gallium zinc oxide, are widely used [4][5][6][7]. However, p-channel oxide TFTs have rarely been reported because of complicated fabrication procedures. The local distribution of anisotropic oxygen 2p orbitals is a main factor in determining the valence band maximum of p-type oxide semiconductors, which results in large effective mass and low mobility for holes in p-type oxide semiconductors [8][9][10]. For this reason, a transparent p-type copper iodide (CuI) semiconductor has recently been proposed as a replacement for p-type oxide semiconductors [11]. However, its excessive hole concentration (>10 19 cm −3 ), due to metal vacancies, lowers the on/off current ratio of the TFT to less than 10 2 , which inevitably causes a significant problem in degrading the switching function of the transistors; the pristine CuI semiconductor should thus be doped with metal ions, such as Zn 2+ , Ga 3+ , and Sn 4+ [12]. Although it is difficult to realize p-type high-performance oxide TFTs, oxide semiconductor-based complementary circuits and p-n junction systems still need to be demonstrated [8,13]. Here, we think it is important to point out that p-type oxide semiconductors are more likely to form binary, ternary, and quaternary compositions, compared to CuI. 
The diversity in the chemical composition of p-type oxide semiconductors is essentially advantageous in applications to electronic devices because the electrical properties of the p-type oxide semiconductor can be adjusted according to the chemical composition. Among p-type oxide semiconductors, Cu-based oxides are relatively simple to fabricate, and they also exhibit promising electro-optical properties. These features establish suitability for p-type semiconductor applications. Recent studies have mostly focused on copper oxide (CuO) semiconductors for electronics and energy devices because it offers the advantages of non-toxicity, cost-effectiveness, and abundance. To meet the demand for high-performance p-channel devices, Bae et al. enhanced the performance of copper-based TFTs by doping the semiconductor film with gallium atoms to reduce oxygen vacancies, which are known to interfere with the conduction of hole carriers [14]. Moreover, Baig et al. demonstrated that the doping of CuO with yttrium atoms could enhance the device performance [15]. The doping process for p-type oxide semiconductors, as demonstrated in previous studies, is important to modulate the charge carrier density of the semiconductor film and improve the electrical performance of CuO-based TFTs. Nevertheless, research on doping technology for p-type oxide semiconductors is still in its infancy and doping these materials as a post-processing technology has hardly been studied. In the present paper, we report the effects of iodine doping on the structural and electrical characteristics of solution-processed CuO semiconductor films and the TFT performance. Note that solution processing is a simple and cost-effective method for fabricating oxide semiconductors. Here, the p-type CuO semiconductor films were formed via spin-coating and the CuO film was doped with iodine vapor. Experimental results demonstrated that iodine doping can be a novel post-processing method to improve the electrical properties of CuO semiconductors and the performance of CuO TFTs. Materials and Methods To fabricate CuO TFTs with an inverted staggered structure shown in Figure 1a, a p-doped silicon substrate with a 100-nm-thick silicon nitride (SiN x ) dielectric layer was sequentially cleaned by sonication in acetone, isopropyl alcohol, and deionized water. Oxygen plasma treatment was performed to make the substrate surface hydrophilic, which was a necessary process to improve the coatability of the CuO precursor solution on the substrate. For the oxygen plasma treatment, a radio frequency power of 45 W was applied for 2 min, while the oxygen flow rate was maintained at 20 sccm. The CuO-precursor solution was prepared by dissolving 0.3 M of copper(II) acetate hydrate [Cu(CO 2 CH 3 ) 2 H 2 O] in 2-methoxyethanol, which was then stirred using a magnetic bar at a rotation speed of approximately 750 rpm on a hotplate (Conring, Seoul, Korea) heated to 75 • C for 1 h. The precursor solution was filtered through a poly-tetrafluoroethylene syringe filter (Hyundai Micro, Seoul, Korea) with pore size of 0.2 µm and spin-coated on the oxygen-plasma-treated substrate at 2000 rpm for 1 min. The coated film was dried on a hotplate at 80 • C for 5 min and 120 • C for 20 min to evaporate the solvent and then thermally annealed in a vacuum tube furnace (Daeki, Daejeon, Korea) at 500 • C for 30 min. 
Finally, 30-nm-thick Au source and drain electrodes with an interdigitated geometry were thermally deposited on the CuO semiconductor layer through a shadow mask under a base pressure of approximately 6 × 10 −6 Torr. The interdigitated electrodes consisted of 5 pairs with a 80-µm channel length and a 400-µm electrode width. Thus, the effective channel length and width in our transistors were 80 µm and 2000 µm, respectively. To dope the CuO films with iodine, the fabricated CuO TFTs were exposed to iodine vapor for 5 s as shown in Figure 1b; iodine vapor was produced by sublimation from solid iodine at room temperature, and the vapor pressure of iodine at room temperature is known to be approximately 0.4 mbar. In this study, a thermogravimetric analysis (TGA) was carried out to examine the thermal decomposition behavior of the precursor solution. The influence of iodine doping on the morphological and structural characteristics of the fabricated CuO films was investigated using atomic force microscopy (AFM) (XE150, PSIA, Santa Clara, CA, USA) and field emission scanning electron microscopy (FE-SEM) (JEOL, Tokyo, Japan). Contact angle measurement using deionized water drop was performed to evaluate the change in the surface wettability of the CuO films. Raman spectroscopy was used to investigate the influence of iodine doping on the lattice structure of CuO films. The Hall effect measurement using van der Pauw method was conducted to measure the Hall mobility, electrical resistivity, and carrier concentration of the CuO films. Electrical characteristics of the fabricated TFTs were measured in a dark box under ambient air conditions. Results and Discussion To analyze the thermal decomposition process of the prepared CuO precursor solution, a thermogravimetric measurement of the solution was performed in nitrogen ambient by increasing the temperature from 25 • C to 700 • C at a heating rate of 10 • C·min −1 . Figure 2 shows the measured TGA curve of the CuO precursor solution. As the temperature increased to approximately 108 • C, most of the weight loss (>90%) of the precursor solution occurred because of the evaporation of the 2-methoxyethanol solvent. As the temperature increased from 108 • C to 165 • C, the rate of weight loss gradually decreased. This implies that Cu(CO 2 CH 3 ) 2 H 2 O is hydrolyzed to Cu(OH) 2 . As the temperature exceeded 165 • C, the rate of weight loss reduced further, and the dehydration reaction of Cu(OH) 2 to form CuO is thought to have occurred in this temperature range. In particular, there was no discernible change in weight at temperatures above 500 • C. Accordingly, the optimal annealing temperature to fabricate the CuO semiconductor films in this experiment was determined to be 500 • C. Figure 3a,b show the surface morphologies of the pristine and iodine-doped CuO films, which were characterized using AFM. Both of the films had similar surface morphologies, suggesting that iodine doping did not cause significant morphological changes, such as pore formation, delamination, or cracking in the solution-processed CuO film. However, the iodine-doped CuO film had a rather smooth surface compared to the pristine CuO film; the root-mean-square roughness values of the pristine and iodine-doped CuO films were approximately 6.7 ± 0.3 nm and 6.1 ± 0.2 nm, respectively. 
The area marked with a red line in the inset of Figure 3b represents the area where iodine was supposed to exist, especially at the grain boundaries, which suggests that iodine mostly penetrates the CuO film through grain boundaries. Therefore, the decrease in the surface roughness of the iodine-doped CuO film may be due to the iodine remaining at the grain boundaries while it penetrates into the CuO film. Figure 4 shows the cross-sectional FE-SEM images of pristine and iodine-doped CuO films; the CuO films were formed on a p-doped silicon substrate having a 100-nm-thick SiNx dielectric layer. In our results, the iodine-doped CuO film (thickness ~29 ± 3 nm) was slightly thicker than the pristine CuO film (thickness ~27 ± 2 nm), indicating the penetration of iodine into the CuO film. The insets show the FE-SEM surface images of the CuO films. As shown in the insets of Figure 4, the pristine and iodine-doped CuO films exhibited similar surfaces; CuO grains with a size of several tens of nanometers are packed in both films. Based on the AFM and FE-SEM results, it is reasonable to state that iodine, which penetrates into the film through grain boundaries, increases the thickness of the CuO film. We further investigated the influence of iodine doping on the lattice structure of CuO films using Raman spectroscopy; for the measurement, the wavelength of the excitation laser beam was fixed at 532 nm and the laser spot size was controlled at approximately 1 µm. Figure 5 shows the Raman spectra of the solution-processed CuO films before and after iodine doping. The pristine CuO film exhibited Raman peaks at approximately 297.44 cm⁻¹, 343.92 cm⁻¹, and 629.89 cm⁻¹, whereas the corresponding Raman peaks in the iodine-doped CuO film appeared at approximately 296.93 cm⁻¹, 343.41 cm⁻¹, and 629.40 cm⁻¹. As these wavenumbers of the Raman characteristic peaks are similar to those reported in the literature, we can assign the peak at 297.44 cm⁻¹/296.93 cm⁻¹ to the Ag mode and the peaks at 343.92 cm⁻¹/343.41 cm⁻¹ and 629.89 cm⁻¹/629.40 cm⁻¹ to the Bg modes of CuO [16,17]. Importantly, iodine doping of CuO shifts the Raman peak positions towards lower wavenumbers. Considering that tensile and compressive stresses are characterized by shifts toward lower and higher wavenumbers [18,19], respectively, the shifts of the peak positions towards lower wavenumbers reveal that the CuO film underwent tensile stress due to the permeation of iodine into the film. The results of Raman spectroscopy indicate that iodine penetrating into the CuO film induces tensile stress in the film, thereby causing a change in lattice properties. The change in the lattice structure of the CuO film due to iodine doping may change the electrical properties of the film. Figure 6 compares the Hall mobility, resistivity, and hole-carrier concentration of the CuO film before and after iodine doping. Iodine doping of the CuO film is observed to increase the Hall mobility from 5.13 cm²·V⁻¹·s⁻¹ to 10.27 cm²·V⁻¹·s⁻¹ and decrease the electrical resistivity from 10.35 Ω·cm to 1.41 Ω·cm. It is also found that the hole-carrier concentration in the CuO film could be increased from 1.78 × 10¹³ cm⁻³ to 14.73 × 10¹³ cm⁻³ due to iodine doping. These variations in Hall mobility, resistivity, and carrier concentration confirm that the structural modification of the solution-processed CuO films due to iodine doping can also enhance the electrical properties of the films. In order to explain the change in the electrical properties of CuO semiconductor materials due to iodine doping, the reactions caused by iodine penetrating into the CuO film are assumed, as shown in Figure 7. Figure 7a shows the lattice structure of the pristine CuO film in which Cu-O bonds, copper vacancies, and oxygen vacancies exist. It is well known that copper vacancies generate holes, which are the majority charge carriers of the p-type CuO semiconductor, and oxygen vacancies generate electrons, which are the minority charge carriers. Moreover, Cu 3d orbitals and O 2p orbitals are hybridized to form the valence band of CuO, in which holes can move freely [20]. Figure 7b shows the case where iodine penetrates into the CuO film through iodine doping. Firstly, iodine penetrating the CuO film is expected to combine with electrons distributed in the film to form two iodide anions (I⁻). Secondly, additional oxygen vacancies, i.e., extra electrons, may be generated in the film through the reduction reaction of CuO by iodine, and these extra electrons and iodine may combine to form additional iodide anions. Among these two processes that produce I⁻ anions, the latter process, which requires reduction of CuO, is likely to take a longer time for I⁻ anion formation than the former process. As shown in Figure 7c, I⁻ anions generated by the above reactions combine with Cu²⁺ cations to form copper iodide (CuI₂) and can also release oxygen (O₂) gas. As a result, there is a possibility that I 5p orbitals are locally hybridized into the valence band of CuO, which is formed from Cu 3d orbitals and O 2p orbitals, through iodine replacing oxygen atoms or oxygen vacancies upon iodine doping of the CuO film. Therefore, the presence of I 5p orbitals, which are considerably larger than O 2p orbitals, in the CuO lattice can contribute to the delocalization of energy states for hole transport in the valence band of CuO, thereby increasing the Hall mobility and improving the electrical conductivity, that is, reducing the electrical resistivity, as observed in the Hall effect results. It should be noted that the iodine atom (atomic radius: approximately 115 pm) is larger than the oxygen atom (atomic radius: approximately 48 pm). This may explain why the replacement of oxygen atoms or oxygen vacancies by iodine atoms causes an increase in the thickness of the CuO film, as confirmed in the FE-SEM cross-sectional images, and induces tensile stress in the CuO film, as observed in the Raman results. The proposed model also points to a decrease in electrons and oxygen vacancies due to iodine doping of CuO.
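For context, the Hall quantities compared in Figure 6 are typically derived from van der Pauw sheet-resistance and Hall-voltage measurements; the sketch below shows that bookkeeping with placeholder sample values (film thickness, currents, and fields here are illustrative, not this study's data):

```python
import math

Q = 1.602176634e-19  # elementary charge (C)

def sheet_resistance(r_a: float, r_b: float) -> float:
    """Solve the van der Pauw relation exp(-pi*Ra/Rs) + exp(-pi*Rb/Rs) = 1 for Rs by bisection."""
    lo, hi = 1e-6, 1e12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        val = math.exp(-math.pi * r_a / mid) + math.exp(-math.pi * r_b / mid)
        if val < 1.0:
            lo = mid      # sum below 1: Rs guess is too small
        else:
            hi = mid
    return 0.5 * (lo + hi)

def hall_parameters(r_a, r_b, thickness_m, hall_voltage, current, b_field):
    """Resistivity (Ohm*cm), carrier density (cm^-3), and Hall mobility (cm^2/V/s) from van der Pauw data."""
    rs = sheet_resistance(r_a, r_b)                            # sheet resistance (Ohm/sq)
    rho = rs * thickness_m                                     # resistivity (Ohm*m)
    r_hall = hall_voltage * thickness_m / (current * b_field)  # Hall coefficient (m^3/C)
    density = 1.0 / (Q * abs(r_hall))                          # carrier density (m^-3)
    mobility = abs(r_hall) / rho                               # Hall mobility (m^2/V/s)
    return rho * 1e2, density * 1e-6, mobility * 1e4           # convert to Ohm*cm, cm^-3, cm^2/V/s

# Placeholder inputs, purely for illustration (Ohm, Ohm, m, V, A, T).
print(hall_parameters(1.2e4, 1.0e4, 27e-9, 2e-3, 1e-6, 0.5))
```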
Accordingly, the change in the structural and electrical properties of the CuO film due to iodine doping can be understood through the physicochemical reactions caused by iodine. To examine the effect of iodine doping on the performance of solution-processed CuO TFTs, the electrical characteristics of the transistor were measured before and after doping with iodine. Figure 8a shows the output characteristics of the CuO TFT, which were measured before iodine doping by changing the drain voltage (V_D) from 0 V to −20 V in increments of −1 V at gate voltages (V_G) of 0 V, −10 V, and −20 V. The CuO TFT exhibited a clear pinch-off and excellent saturation under p-channel accumulation mode operation, indicating that holes are the majority charge carriers in the CuO semiconductor film. Moreover, an important observation from Figure 8b is that the output characteristics of the CuO TFT could be enhanced through the iodine doping process. After iodine doping, the TFT exhibited higher drain currents (I_D), while the pinch-off and saturation behaviors could still be maintained without degradation. Figure 8c,d show the transfer characteristics of the CuO TFT before and after iodine doping, respectively. These characteristics were measured at a fixed drain voltage of −20 V, while the gate voltage was swept reversibly from 10 V to −30 V in increments of −1 V. In order to evaluate the performance of the TFT, the subthreshold swing (S.S.), which is defined as the change in the gate voltage required to change the drain current by a factor of 10, was extracted from the plot of |I_D| versus V_G, and the threshold voltage (V_T) was obtained from the plot of |I_D|^1/2 versus V_G by extrapolating to a drain current of 0 A. The field-effect mobility (µ_eff) was calculated in the saturation region. The TFT parameters are summarized in Table 1. When the gate voltage was swept from 10 V to −30 V, the CuO TFT without iodine doping exhibited a subthreshold swing of 3.3 V·decade⁻¹, threshold voltage of −4.13 V, field-effect mobility of 4.25 × 10⁻³ cm²·V⁻¹·s⁻¹, and on/off current ratio of 2.4 × 10³. Upon reversing the gate voltage sweep direction from −30 to 10 V, the threshold voltage shifted in the negative direction and a counterclockwise hysteresis was observed; the measured shift in the threshold voltage was approximately −12.36 V. After iodine doping, the transistor exhibited a subthreshold swing of 3.0 V·decade⁻¹, threshold voltage of −3.08 V, field-effect mobility of 6.61 × 10⁻³ cm²·V⁻¹·s⁻¹, and on/off current ratio of 3.51 × 10³. Considering that the field-effect mobility reported in the performance improvement study for precursor-based solution-processed p-type CuO TFTs was approximately 2.83 × 10⁻³ cm²·V⁻¹·s⁻¹ [21], the field-effect mobility of the iodine-doped CuO TFTs is quite remarkable. In addition, a negative shift in the threshold voltage and counterclockwise hysteresis were also observed upon reversal of the gate voltage sweep direction; the shift in the threshold voltage was approximately −10.35 V. The decrease in the threshold voltage and increase in the field-effect mobility by iodine doping could be due to the improvement of the electrical properties of the CuO semiconductor film, as confirmed by the Hall effect results in Figure 6. Herein, we should note that the counterclockwise hysteresis, i.e., the negative shift of the threshold voltage upon sweep reversal, is reduced by iodine doping.
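A minimal sketch of the parameter extraction just described (subthreshold swing from the steepest log|I_D|–V_G slope, V_T from a linear extrapolation of |I_D|^1/2, and a saturation-regime mobility). The square-law saturation expression, the synthetic sweep, and the SiNx gate-capacitance value are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def subthreshold_swing(vg, i_d):
    """Subthreshold swing: minimum gate-voltage change per decade of |I_D| over the sweep (V/decade)."""
    slope = np.gradient(np.log10(np.abs(i_d)), vg)   # decades per volt
    slope = slope[np.abs(slope) > 1e-12]
    return 1.0 / np.max(np.abs(slope))

def threshold_and_mobility(vg, i_d, c_i, width_cm, length_cm):
    """V_T from a linear extrapolation of sqrt|I_D| vs V_G; saturation mobility from the
    square-law model |I_D| = (W * C_i * mu / (2 L)) * (V_G - V_T)^2."""
    sqrt_i = np.sqrt(np.abs(i_d))
    steep = np.argsort(np.abs(np.gradient(sqrt_i, vg)))[-len(vg) // 3:]   # most "on" part of the sweep
    slope, intercept = np.polyfit(vg[steep], sqrt_i[steep], 1)
    v_t = -intercept / slope
    mu_sat = 2.0 * length_cm * slope ** 2 / (width_cm * c_i)
    return v_t, mu_sat

# Hypothetical p-channel sweep, purely for illustration (V, A); C_i is a placeholder for ~100 nm SiNx.
vg = np.linspace(10, -30, 41)
i_d = -(1e-11 + 2e-7 * np.clip(-(vg + 4.0), 0.0, None) ** 2)
c_i = 6.0e-8                                  # gate capacitance per unit area (F/cm^2), placeholder
print(subthreshold_swing(vg, i_d))
print(threshold_and_mobility(vg, i_d, c_i, width_cm=0.2, length_cm=8.0e-3))
```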
It is well known that counterclockwise hysteresis for p-type TFTs is caused by the mobile charges and defect-related trap states [22]. Based on our model in Figure 7, the reduction in electrons and oxygen vacancies by iodine doping is thus believed to contribute to reducing the hysteresis in the transfer characteristics of CuO TFTs. Additionally, we observed changes in the drain current of CuO TFTs while increasing the duration of iodine doping. Figure 9 shows a comparison of the changes in the drain current while varying the duration of iodine doping; the change in current was expressed as a current ratio obtained by dividing the drain current measured after iodine doping by the drain current measured before iodine doping, and at least ten TFTs were used under each condition to examine the effect of iodine-doping duration. Importantly, the current ratio is found to decrease with an increase in iodine-doping duration. When the duration of iodine doping was 120 s, the drain current of the TFT deteriorated after doping, as indicated by a current ratio of less than 1.0. This means that there is a limit to improving TFT performance even if the iodine-doping duration is increased. We consider that the physicochemical reactions of iodine during long-duration doping process may deteriorate the energy states for hole transport in the valence band of CuO by further augmenting the lattice deformation and tensile stress in the CuO semiconductor layer. According to the model we proposed (Figure 7), iodine doping for a long duration can significantly increase the reduction reaction of CuO, which greatly reduces Cu-O bonds in the lattice structure. Quantitative analyses of the energy characteristics of CuO, such as density of states and carrier distribution in the energy band structure, as a function of iodine concentration in the film is expected to contribute to optimization of the iodine doping process. Consequently, this study demonstrates that iodine doping represents a novel method to improve the electrical properties of CuO semiconductors in general and the performance of CuO TFTs in particular. For a comprehensive understanding of the physicochemical reaction mechanism by doped iodine in CuO, further studies are required to analyze the stoichiometric properties of iodine-doped CuO semiconductor films according to the spatial distribution of iodine. Conclusions We investigated the effects of iodine doping on the structural and electrical characteristics of p-type CuO semiconductors and the performance of CuO TFTs. The doped iodine penetrated the film, inducing tensile stress and increasing the thickness of the film. In addition, iodine doping contributed to increasing Hall mobility and hole-carrier concentration and decreasing electrical resistivity. According to the physicochemical reaction model by iodine proposed in this study, the replacement of oxygen atoms or oxygen vacancies by doped iodine induces delocalization of energy states for the transport of holes, which are majority carriers, in the valence band of CuO. This explains the improvement in the electrical properties of p-type CuO semiconductors through iodine doping, which, in turn, enhanced the TFT performance. In particular, it was found that the reduction in minority carrier electrons and oxygen vacancies in the CuO semiconductor film due to iodine doping is effective at reducing the hysteresis in the transfer characteristics of the transistor. 
Experimental results demonstrate that iodine doping, as a post-processing method, can be used to improve the electrical characteristics of p-type oxide semiconductor materials, thereby enabling the development of high-performance p-type oxide TFTs. The effects of iodine doping on the crystallinity of CuO semiconductors and on the electrical stability of CuO TFTs remain to be studied. We believe that the physicochemical reaction model by iodine proposed in this study provides a basis for further research pertaining to the development of various sensors fabricated using solution-processed oxide TFT-based circuits.
5,509
2021-10-01T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Widely Tunable Near-Infrared Wavelength Conversion Based on Central and Off-Central Multiperiod Grating Pumped by a Compact Fiber Laser Tunable near-infrared sources are in high demand due to their various applications in biomedicine, printing, barcode technology, and optical storage. In this paper, wide tuning of the central phase-matching wavelength and high conversion efficiency are demonstrated theoretically and experimentally based on a multiperiod grating, pumped by a compact all-polarization-maintaining (PM) 9-character cavity fiber laser operating in the near-infrared (785 nm). The wavelength tuned by about 18.2 nm as the grating period changed by 1 µm. A wide spectral tuning of 42 nm (from 776 nm to 818 nm) and a high efficiency of about 12.3% have been obtained. Besides, we discovered that the non-central second harmonic can also be generated from second harmonic generation under strong phase mismatch together with sum frequency generation (SFG) arising from the nonlinearity of the crystal. The result shows that the central and off-central phase-matched harmonics form a continuous broadband at some temperatures and grating periods, which can be widely exploited for wavelength conversion and tunable channel output. I. INTRODUCTION The demand for tunable and broadband sources is increasing due to their various applications, especially in biomedicine and optical storage analysis [1]. One promising approach is the wavelength conversion of available lasers in nonlinear materials in the absence of suitable sources [2]-[5]. When the laser is transmitted in a nonlinear medium, a new wavelength is generated under phase-matching conditions, and quasi-phase-matching (QPM) is a common and effective method to achieve a tunable wavelength. Cavity design and optimization of continuous-wave pump cavities have been widely used to control the cavity type, which also miniaturizes the laser. Increasing the efficiency of the resonant cavity structure has been investigated based on the nonlinear periodically poled lithium niobate (PPLN) crystal. The maximum output power was increased to 70 mW using an inner folding cavity structure at the mid-infrared idler wavelength of 3.66 µm [5]. Further, in 2018, a compact three-mirror linear cavity with plane and concave mirrors was experimentally and theoretically demonstrated, and its power density on the nonlinear crystal was improved by optimizing the concave radii of the cavity mirrors [6]. Nevertheless, the stringent adjustment accuracy required by the resonant cavity is difficult to fulfill and thus limits its widespread use for wavelength tuning. The capability of quasi-phase-matched nonlinear optical frequency conversion combined with temperature control has been studied for realizing wavelength tunability based on a solid-state laser directly pumping a quasi-phase-matched optical parametric oscillator [7], [8]. The grating period of MgO:PPLN ranged from 29.52 to 31.59 µm, tuning the band from 3.0 to 3.8 µm [9]. However, the obtained wavelength-tunable bands are mostly concentrated in the mid-infrared region, and studies on wavelength tuning at visible and near-infrared wavelengths are few. In 2018, an all-solid-state tunable CW orange laser based on single-pass sum frequency generation was reported. The output wavelength could be tuned up to 5.66 nm by varying the operating temperature and the position of focusing inside the step-chirped crystal [10].
Wavelength tuning in the near-infrared region (785 nm) can be particularly important for applications such as optical storage, printing, and barcode technology. In this paper, we propose a multiperiod grating based on a MgO:PPLN crystal pumped by a compact PM 9-character cavity fiber laser to achieve a widely tunable wavelength and high-efficiency output for wavelength selection and diversity in the near-infrared region at 785 nm. The second-harmonic wavelengths were tuned over about 42 nm in the range 776∼818 nm and the highest efficiency is up to 12.3%. In addition, we consider a potential approach to obtain off-central harmonic generation under the combined action of SHG and SFG, which can produce a continuous spectrum together with the central second harmonic by tuning the grating periods and temperature. II. THEORY AND MODEL As the coupling wave equation for SHG and the general numerical solution method are complex, obtaining a precise solution is difficult. We describe the SHG process using a simple transfer function in cases where group velocity dispersion can be ignored, which depends only on the material properties and the structure of the QPM grating [11]-[13]. The uniform-grating transfer function can be represented as in [14]-[16]; in that expression, γ ≡ 2π/(λ1 n2), where λ1 is the fundamental harmonic wavelength and n2 is the refractive index at the second harmonic; L is the crystal length, d_m is the amplitude of the mth Fourier component of the grating, Δk_i is the carrier k-vector mismatch at different periods or temperatures, K_i = 2mπ/Λ_i, and m is the QPM order. To obtain a high conversion efficiency, it is common to take m = 1; Λ_i is the grating period of the polarization reversal. δν = 1/u1 − 1/u2 is the group-velocity mismatch parameter, Δω = 4πc/λ1 − 4πc/λ0 is the angular-frequency variation range of the second harmonic, and λ0 is the central wavelength of the fundamental harmonic. The phase-matching factor can be represented as Δk(T, λ) = 4π[n2(λ2, T) − n1(λ1, T)]/λ1 − 2π/Λ [17]-[19], where n1 is the refractive index of the fundamental harmonic; the refractive index entering the nonlinear conversion is directly related to temperature, and the pump light and the generated light are e-polarized, with their polarization directions along the optical axis. QPM is achieved when Δk(T, λ) = 0. Since the band of the input fundamental harmonic is determined, we can alter the period (Λ) or the temperature (T) to achieve phase-matching. To facilitate the follow-up research, a central phase-matching wavelength of 1600 nm is selected for simulation in Fig. 1, which shows that D is largest when the phase-matching factor is close to 0, and D sharply decays, approaching 0, as the wavelength moves away from the central phase-matching wavelength. The intensity is nearly two orders of magnitude less than at the central phase-matching wavelength when the mismatched wavelength is about 1570 nm. In our experiment, femtosecond pulses were adopted, and strong noncentral phase-matching harmonics could still be observed. The maximum intensity of the second harmonic is at λ2, as shown in Fig. 2(a), when the grating period centrally phase-matches the central wavelength λ1 of the fundamental wave, where λ1 = 2πc/ω and λ2 = 2πc/2ω. When the wavelength matched by the central phase of the grating is a different fundamental wavelength λ1, the maximum harmonic appears at the corresponding λ2 with an intensity lower than that in Fig. 2(a).
The fundamental wave at λ 1 can still generate the second harmonic wave at λ 2 with weak intensity through the grating, as can be seen from Fig. 1. In addition, the SFG effect due to the nonlinearity of the crystal was found by 2ω = (ω + ω) + (ω − ω), and a weak SFG response generated at λ 2 of the two frequencies. As discussed above, the noncentral phase-matching harmonics at λ 2 was generated due to the combined action of SHG and SFG, as shown in Fig. 2 III. TUNABLE SHG In this section, we consider the second harmonic wavelength tuned by changing the period and temperature of the multiperiod grating at the central QPM. Besides, the noncentral phase-matching harmonics are further analyzed. A. EXPERIMENTAL SETUP The experimental setup of the second-harmonic wavelength conversion of the MgO: PPLN crystal is shown in Fig. 3(a). The output laser of 9-cavity laser is amplified by EDFA and then passed through the AP to measure the input power. The PC is used to reduce the reflection and improve the quality of the light source. Then the laser beam is focused on the grating through the focusing lens M1, and the output beam is concentrated into the detector by the lens M2, so as to observe the output power and waveform. All PM 9-character cavity fiber laser output to 1570 nm as the center wavelength of the fundamental harmonic, which is a passive mode-locked laser shown in Fig. 3(b). The pulse is divided into two pulses with the same intensity and opposite transmission direction from D to the nonlinear amplifying ring mirror, that is the CW and CCW. The asymmetric amplification of the two pulses and the accumulation of different nonlinear phase shifts, due to the asymmetric structure of the nonlinear amplifying loop mirror. The two pulses are combined at the coupler and interfere after one loop of transmission. The effect of the equivalent saturable absorber of the nonlinear amplifying ring mirror is achieved by adjusting the nonreciprocal phase shifter in the cavity, so as to realize the self-starting of the mode locking [20]- [23]. We used a multiperiod grating with a sample size of 5 (L) × 9.2 (W) × 0.5 (T) mm 3 , which is shown in Fig. 3(c). The multiperiod grating ranges from 19.5 to 21.3 µm. Each cycle height is 0.7 mm, and the interval between the cycles is 0.2 mm. The grating period is changed by moving the position of the multi period grating on the glass platform, and the temperature tuning of the grating is realized by the temperature controller. B. TUNABLE AND OFF-CENTRAL PHASE-MATCHING HARMONICS WITH THE CHANGED PERIOD In this section, we experimentally observed and theoretically simulated second-harmonic tunability and off-central phasematching harmonics regularity, based on the chosen multiperiod grating and temperature parameters. We characterized the tunability by accommodating the grating period at 150 • C, which experimental results are representative. The experimental and simulation results of the versus between the second harmonic wavelength and the grating period are shown in Fig. 4(a) and 4(b) respectively. The wavelength of the second harmonic shifted to a longer wavelength due to the increase in the grating period. The experimental data agreed well with the simulated data. at 150 • C and grating period of 19.5∼21.3 µm. The experimentally found secondharmonic wave band is wider than the theoretical one when the grating period is 20.1 or 20.3 µm due to the nonlinear action of the crystals. 
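As background for this period tuning, the first-order QPM condition from Section II (Δk = 0) gives Λ = λ1 / (2[n2(λ2, T) − n1(λ1, T)]). A minimal sketch is shown below; the index function is a stand-in that must be replaced by a published temperature-dependent Sellmeier equation for MgO:PPLN (the constants used here are placeholders, not real material data):

```python
import math

def n_e(wavelength_um: float, temperature_c: float) -> float:
    """Placeholder extraordinary refractive index; substitute a real MgO:PPLN Sellmeier equation."""
    return 2.1 + 0.05 / wavelength_um ** 2 + 1.0e-5 * (temperature_c - 25.0)

def qpm_period_um(fundamental_um: float, temperature_c: float) -> float:
    """First-order QPM period for SHG: Lambda = lambda_1 / (2 * (n_2 - n_1))."""
    n1 = n_e(fundamental_um, temperature_c)
    n2 = n_e(fundamental_um / 2.0, temperature_c)
    return fundamental_um / (2.0 * (n2 - n1))

def phase_mismatch(fundamental_um: float, period_um: float, temperature_c: float) -> float:
    """Delta-k in rad/um for a given grating period; it vanishes at exact quasi-phase-matching."""
    n1 = n_e(fundamental_um, temperature_c)
    n2 = n_e(fundamental_um / 2.0, temperature_c)
    return 4.0 * math.pi * (n2 - n1) / fundamental_um - 2.0 * math.pi / period_um

print(qpm_period_um(1.570, 150.0))        # period (um) that would phase-match a 1570 nm fundamental
print(phase_mismatch(1.570, 20.3, 150.0)) # residual mismatch for one of the grating periods used here
```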
We further considered the second-harmonic spectrum at 25 • C, 50 • C, and 100 • C to accurately describe the influence of SHG tunable based on the multiperiod grating. The central second-harmonic wavelength as a function of the period is shown in Fig. 4(c), where E denotes experimental (results) and T, theoretical (results). The theoretical and experimental results are mostly consistent. The larger the phase-matching period of the fundamental harmonic incident on the corresponding position on the crystal, the larger the central wavelength of the second harmonic shifts in the longer-wavelength direction. Moreover, the 19.5∼21.3 µm multiperiod grating led to a wavelength tuning of about 33 nm. According to Section 2, the relationship between the period ( ), temperature (T), and wavelength (λ) can be obtained, where = λ 1 2[n 2 (λ 2 ,T)−n 1 (λ 1 ,T)] . The fundamental harmonic (λ 1 ) can be regarded as a function of period ( ) when T is constant. The first derivative of this function, ∂λ 1 ∂ = 2(n 2 − n 1 ) is simulated in Fig. 4(d). Note that ∂λ 1 ∂ is always greater than zero in the wavelength range from 1550 nm to 1640 nm. This illustrates that the fundamental harmonic monotonically increases as the period increases. In this case, the wavelength of the phase-matched fundamental wave shifts in the direction of the longer-wavelength region with the increase in the grating period, and the corresponding second harmonic also shifts in the same direction ( Fig. 4(a)) for the temperature range 25 • C∼150 • C. Furthermore, the wavelength bandwidth is appeared near 785 nm from a period of 20.3 µm, that is, the wavelength of the quasi-phase-matched second harmonic is greater than 800 nm, which is defined as off-central phase-matching harmonics. The reason for this phenomenon has been explained in Fig 1. Meanwhile, the relative intensity of the phasematching second harmonic decreased, and the relative intensity of the noncentral second-harmonic phase-matching increased with the increase in the grating period. Nevertheless, the experimental data gave noise relative to the theoretical data. In order to research the harmonic under large phase mismatch, the phase matching at the center wavelength of the fundamental wave at 1570 nm was selected as the center phase matching ( k = 0). The harmonic wave with the period of 20.3∼21.3 µm at 150 • was simulated. As can be seen from Fig. 5(a), the degree of phase mismatch increases gradually as the grating period increases, which means the central phase matching fundamental harmonic is farther than 1570 nm. Therefore, the intensity of central phase matching harmonics gradually decreases. Nevertheless, the wavelength of non-central phase-matched harmonics is always concentrated around 785 nm and does not change with the change of the period, as shown in Fig 5(b). The reason is the central wavelength of the fundamental wave always is 1570 nm, no matter whether the grating period is changed or not. The harmonics caused by the large phase mismatched SHG and SFG will always exist, and the harmonic intensity of SFG effect will also unchanged. So, the intensity of the non-central phase-matched harmonics will be relatively enhanced. We systematically investigated the pattern of the fundamental harmonic before and after the amplifier in Fig. 6(a) and Fig. 6(b). The bandwidth broadened after the amplifier but also created significant noise. As can be seen from Fig. 6(b), the intensity of the fundamental harmonic sharply decays after 1600 nm and gradually approaches zero. 
As a consequence, the intensity of the second harmonic derived based on QPM also sharply decays and gradually approaches zero. The distributions of the 1600 nm QPM grating period at different temperatures are shown in Fig. 7. The period of the quasi-phase-matched grating decreases as the temperature increases; thus, observing the noncentral phase-matched wavelength becomes easier as the temperature increases. Taking 150 °C as an example, the matching period at 1600 nm is 20.3 µm, and the noncentral phase-matching harmonic also starts to appear at 20.3 µm. The fundamental harmonic had the strongest intensity near the central wavelength of 1570 nm, which still produced a second harmonic after passing through the crystal at the large phase-mismatched period. It is concluded that when the fundamental intensity at the grating's phase-matching wavelength is strong, the second harmonic is strong and the relative intensity of the non-centrally matched second harmonic is weak; the latter is therefore difficult to observe. However, when the wavelength of the periodic QPM of the grating is greater than 1600 nm (i.e., the fundamental intensity at that wavelength is weak), under the combined action of the SHG and SFG effects, the noncentral phase-matching harmonics are relatively enhanced due to the weak intensity of the second harmonic generated by QPM. Compared to that of the phase-matched harmonic, the intensity of the harmonic with a large phase mismatch differs by about 4 orders of magnitude. C. TUNABLE AND NONCENTRAL PHASE-MATCHING HARMONIC VARIED WITH TEMPERATURE In previous sections, we found that the tunability of the second harmonic and the intensity of the noncentral phase-matching harmonics are affected not only by the period of the multiperiod grating but also by temperature. Hence, we further studied the spectra of harmonics changing with temperature at a specific period, as shown in Fig. 8. The spectra with a period of 20.9 µm at 25 °C, 30 °C, 50 °C, 100 °C, 120 °C, and 150 °C were measured for further analysis, which better reflect the intensity characteristics at each temperature. The second harmonic from the central phase-matching moved toward the longer-wavelength region with the temperature increase (Fig. 8(a) and Fig. 8(b)). We extended this work to the entire period range of the multiperiod grating from 19.5 to 21.3 µm. The central wavelength of the second harmonic as a function of temperature for different periods was further calculated. Comparing the experimental and theoretical data in Fig. 8(c), we obtained that the central wavelength of the centrally phase-matched second harmonic moved toward the longer-wavelength region as the temperature increased. In addition, a wavelength tuning of approximately 9 nm was achieved from 25 °C to 150 °C. We treated the fundamental wavelength as a function of temperature when the grating period is fixed and simulated its first derivative (∂λ1/∂T), as shown in Fig. 8(d). The calculated ∂λ1/∂T is always greater than zero over the period range of the multiperiod grating used, that is, the fundamental wavelength monotonically increases as a function of temperature for a given grating period. Therefore, when the temperature increases, the wavelength of the fundamental wave and the corresponding second harmonic generated move toward the longer-wavelength region. Similarly, in this section, we fixed the period and changed the temperature to understand the effect of phase mismatch on SHG.
It can also be observed in Fig. 6 that when the period is fixed, the higher the temperature, the farther the centrally phase-matched fundamental wavelength is from the central wavelength of 1570 nm. Therefore, the phase mismatch gradually increases, as shown in Fig. 9(a), and the intensity of the second harmonic at the central phase matching gradually decreases. Since the fundamental wave remains unchanged, the noncentral phase-matched harmonics are always concentrated around 785 nm. The numerical simulation results are shown in Fig. 9(b), and the harmonic intensity generated by the SFG effect also remains unchanged. Therefore, the relative intensity of the noncentral phase-matched harmonics becomes stronger compared with the central phase-matched harmonics. D. CONTINUOUS SPECTRUM AND TUNING RATE The noise is accordingly amplified along with the fundamental wave by the amplifier, which explains why the noncentral phase-matching harmonics in the experimental spectrograms appear noisy. Remarkably, central and off-central phase-matching harmonics contribute to a continuous spectrum at certain temperatures and grating periods. The spectra at a period of 21.1 µm and a temperature of 30 °C confirm this, as shown in Fig. 10. The wavelength tuning rates with respect to the grating period and temperature are demonstrated in Fig. 11. We define the periodic wavelength tuning ratio as the wavelength tuning range for every 1 µm change of the grating period, and the temperature wavelength tuning ratio as the wavelength tuning range for every 1 °C change of temperature. The periodic wavelength tuning ratio is 18.15 nm/µm at 25 °C, and it increases with increasing temperature, as shown in Fig. 11(a). Compared with period tuning, temperature tuning shifts the wavelength less: the temperature wavelength tuning ratio is about 0.0725 nm/°C at 21.3 µm, and it becomes greater as the period increases, as given in Fig. 11(b). As previously discussed, a wide spectral tuning of about 42 nm could be achieved by adjusting the grating period from 19.5 to 21.3 µm and the temperature from 25 °C to 150 °C. In addition, we can tune the temperature or period according to the above tuning rates to obtain the desired wavelength tuning range, offering great flexibility for wavelength selection and diversity. E. EFFICIENCY OF THE CENTRAL SECOND HARMONIC We also analyzed the conversion efficiency under different conditions to study the performance of the central second-harmonic spectrum tuning more accurately. The input fundamental power for all data in Figures 4-11 is 56 mW. The maximum conversion efficiency was about 11.5% when the temperature was 25 °C and the grating period was 20.1 µm under an input power of 56 mW. The efficiency of the centrally matched second harmonic tends to decrease with increasing temperature when the period is fixed, as shown in Fig. 12(a). In our actual operation, power is lost through the lenses: the optical path uses two focusing lenses, each of which introduces a loss of about 4%. Therefore, the final observed conversion efficiency is reduced by about 8% compared to the theoretical conversion efficiency. We compared and analyzed the efficiency when the input power was 43 and 56 mW, as shown in Fig. 12(b). These results show that the smaller the input power, the greater the output power becomes.
The maximum conversion efficiency is about 4% when the input power is 43 mW, which, once the loss of the two lenses is accounted for, is very close to the theoretical conversion efficiency of 12.3%. IV. CONCLUSION We obtained a widely tunable, compact, efficient, near-infrared optical laser operating at 785 nm based on a multiperiod grating. We experimentally and theoretically showed that a wide spectral tuning of about 42 nm can be realized from 776 to 818 nm and that the theoretical efficiency can reach up to 12.3%. The wavelength tuned by about 18.2 nm as the grating period changed by 1 µm, and by 0.07 nm with a temperature adjustment of 1 °C. The unit tunable harmonic wavelength gradually increased with increasing temperature or period. We confirmed that strong off-central harmonics can still be observed under large phase mismatch when femtosecond pulses are adopted. The combined effects of SHG and SFG at 785 nm are detected where the central phase-matching harmonics are weak. Remarkably, at certain temperatures and grating periods, the central and off-central phase-matched harmonics form a continuous broadband, which can be widely exploited for wavelength conversion and tunable channel output.
5,034.4
2021-01-01T00:00:00.000
[ "Physics", "Engineering" ]
Detailed band alignment of high-B-composition BGaN with GaN and AlN The electronic structure of B0.097Ga0.903N was determined by examining its bandgap and valence band offset (VBO) in detail. The BGaN sample was grown using horizontal-reactor metalorganic chemical vapor deposition. For bandgap determination, three different techniques were utilized, yielding similar results: UV–Vis spectroscopy, Schottky photodiodes, and electron energy-loss spectroscopy. The bandgap was determined to be ∼3.55 eV. For measuring the VBO, the valence edges and the core levels of Al 2s and Ga 2p were measured using x-ray photoelectron spectroscopy (XPS). The valence edges were then fitted and processed along with the core levels using the standard Kraut method for VBO determination with AlN. The BGaN/AlN alignment was found to be −1.1 ± 0.1 eV. Due to core level interference between GaN and BGaN, the Kraut method fails to provide a precise VBO for this heterojunction. Therefore, a different technique is devised to analyze the measured XPS data, which utilizes the alignment of the Fermi levels of the BGaN and GaN layers when in contact. Statistical analysis was used to determine the BGaN/GaN alignment with decent precision. The value was found to be −0.3 ± 0.1 eV. Introduction A semiconductor's bandgap may be its single most important property, especially when designing the simplest devices made of single-material homojunctions. However, as soon as heterojunctions are introduced, the bandgap alone becomes insufficient in describing the behavior of carriers and photons in the desired structure. The reason is that the electronic structure of the junction can have one of three band alignment types: type-I (straddling gap), type-II (staggered gap), and type-III (broken gap). This can occur even when other factors such as surface states and polarization charges are not explicitly considered [1,2]. Therefore, the band alignment at a heterojunction becomes the second most important property needed to understand the behavior of the junction [3]. Typically, at least three quantities are needed to model a junction's band structure: the two bandgaps of the constituent materials, and the way they are aligned, described by the valence band offset (VBO).
[Figure 1 caption (partial): (a) ... using equation (1). (b) Schematic showing energy levels typically measured when aligning bands using equation (2). Notice the different Fermi levels (E_F) for each sample. (c) The resulting VBO (in red) when the energy levels are aligned. The Fermi level of the interface sample is highlighted as an effective reference energy. (d) The valence energies of the interface sample can be measured for alignment using equation (3). The arrows in (b)-(d) point in the direction of increasing binding energy.]
Figure 1(a) shows a simple schematic for a band structure with a type-II alignment. Equation (1) describes how the gaps are related to the alignment. The equation takes the gaps and the VBO to give the conduction band offset (CBO), relating all the quantities in figure 1(a):
ΔEc = ΔEv + (Eg(Y) − Eg(X)), (1)
where ΔEc and ΔEv are the offsets of the conduction band minima (Ec) and the valence band maxima (VBM, Ev), respectively, and Eg(X) and Eg(Y) are the bandgaps of material X and material Y on either side of the junction.
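A minimal sketch of the bookkeeping in equation (1): it converts a measured VBO and the two bandgaps into a CBO and a coarse alignment label. The sign convention (offsets taken as Y relative to X, energies referenced to the VBM of X) is an assumption made for this illustration, and the ~3.4 eV GaN gap in the example is a textbook value rather than a result of this work:

```python
def band_alignment(gap_x: float, gap_y: float, vbo: float):
    """Return the CBO (eV) and an alignment label for a heterojunction X/Y.

    Assumed convention: energies referenced to E_v(X) = 0, vbo = E_v(Y) - E_v(X),
    so E_c(X) = gap_x, E_v(Y) = vbo, E_c(Y) = vbo + gap_y, and CBO = E_c(Y) - E_c(X).
    """
    ev_x, ec_x = 0.0, gap_x
    ev_y, ec_y = vbo, vbo + gap_y
    cbo = ec_y - ec_x
    if ev_y >= ec_x or ev_x >= ec_y:
        label = "type-III (broken gap)"
    elif vbo * cbo <= 0:
        label = "type-I (straddling gap)"
    else:
        label = "type-II (staggered gap)"
    return cbo, label

# Using the bandgaps and VBOs reported in this work (eV).
print(band_alignment(gap_x=3.55, gap_y=5.96, vbo=-1.1))  # BGaN/AlN
print(band_alignment(gap_x=3.55, gap_y=3.4,  vbo=-0.3))  # BGaN/GaN, assuming ~3.4 eV for the GaN gap
```

With these inputs the same arithmetic reproduces the type-I (BGaN/AlN) and type-II (BGaN/GaN) classifications reached later in the paper.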
X-ray photoelectron spectroscopy (XPS) is the most widely used technique to determine the VBO of different heterojunctions [4,5]. Using XPS, direct measurements of core-level and valence-edge energies can be performed. To determine the VBO from these measurements, equation (2) is typically used, which is commonly referred to as the Kraut method [6]:
ΔEv = [ECL(X) − Ev(X)] − [ECL(Y) − Ev(Y)] + [ECL(Y) − ECL(X)]interface, (2)
where ECL are chosen core levels of each material X and Y. The first four terms are measured for thick samples of X and Y, typically referred to as 'bulk' samples. The last two terms are measured for a heterojunction between them, referred to as an 'interface' sample. The use of this equation allows for the determination of the VBO of the two bulk samples taking into consideration band bending due to interface states and other sources. Unfortunately, the Kraut method can only work when the measurement can discern between the core levels of the two materials. An example of an interface where the Kraut method is not easily applicable is that between a semiconductor and its oxide. One solution for that issue is to simply measure the valence edges of the semiconductor and its oxide and take the difference between them as in equation (3):
ΔEv = Ev(X)interface − Ev(Y)interface. (3)
This approach has been utilized by others to study the Si/SiO2 interface [7,8]. An advantage of this method is its simplicity, since only the low binding energy region of a single interface sample needs to be measured. The physics behind this method is that the Fermi levels of materials X and Y become one when they are in physical contact. However, this method lacks the advantages of the Kraut method, which gives the VBO of the bulk instead of at the interface. Therefore, strain and interface states may influence the results obtained with this method. Figures 1(b)-(d) summarize the two methods schematically. In the past half century, III-nitrides have gained high popularity mainly due to their now commercial applications in optoelectronics and, more recently, power electronics. An emerging class of III-nitrides are those alloyed with boron (B). B alloying can be used to tune several properties including the bandgap and band alignment. A recent work by Mickevičius et al has shown that low-B-composition BGaN and GaN have a type-II alignment [9]. However, the value provided in this study is not quantitative due to the approximations used and the lack of a reference energy. Recently, the growth of BxGa1−xN at a high B composition (x > 10%) has been reported [10]. Therefore, in this work, we study the electronic properties of BGaN at a similar composition (x ∼ 9.7%). The bandgap and the band alignments of B0.097Ga0.903N with AlN and GaN are examined in detail. The BGaN/AlN interface is studied due to its straightforwardness and as a cross-validation for the BGaN/GaN interface. The latter interface is intriguing from both a physics perspective, due to the lack of distinguishable core levels and VBM, and an application perspective, as GaN is more advanced in devices such as power electronics and optoelectronics [11][12][13][14]. In the process, we enhance the precision of the method in equation (3) to be the same as the precision of the XPS instrument. Finally, combining the bandgap and band alignment data is used to create a complete band structure of the two heterojunctions using the flat band model.
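A minimal sketch of equations (2) and (3) as plain arithmetic on measured binding energies; the function names, the sign convention, and the numbers in the example are placeholders for illustration, not the measured values of this study:

```python
def kraut_vbo(core_x_bulk: float, vbm_x_bulk: float,
              core_y_bulk: float, vbm_y_bulk: float,
              core_x_interface: float, core_y_interface: float) -> float:
    """Equation (2): bulk (core-level minus VBM) separations of X and Y, referenced
    through the core-level splitting measured on the interface sample."""
    return ((core_x_bulk - vbm_x_bulk)
            - (core_y_bulk - vbm_y_bulk)
            + (core_y_interface - core_x_interface))

def interface_vbo(vbm_x_interface: float, vbm_y_interface: float) -> float:
    """Equation (3): direct difference of the two valence edges measured on a single
    interface sample, which share the same Fermi-level reference."""
    return vbm_x_interface - vbm_y_interface

# Placeholder binding energies (eV), purely to show the bookkeeping.
print(kraut_vbo(core_x_bulk=1118.2, vbm_x_bulk=2.1,
                core_y_bulk=119.6, vbm_y_bulk=2.8,
                core_x_interface=1117.9, core_y_interface=119.1))
print(interface_vbo(vbm_x_interface=2.0, vbm_y_interface=2.4))
```

The sign of the result depends on which material is labeled X, so it should be read against a band diagram such as figure 1.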
Experimental methods The high B composition BGaN and the AlN and GaN used in the alignment were grown using a horizontal reactor metalorganic chemical vapor deposition from Taiyo Nippon Sanso on metal-polar (0002)-oriented thick (>1.5 µm) AlN/sapphire (or GaN/sapphire) templates. The metalorganic compounds used were trimethylaluminum, triethylborane and trimethylgallium as the sources for Al, B, and Ga, respectively. Ammonia was the source for N in the reactor. Hydrogen gas was used as a carrier for the reactants. For the BGaN layer, high V/III and B/III ratios of 4100 and 0.51 were used, respectively. Low growth temperature and pressure of 650 • C and 75 Torr were used, respectively. The B composition of the BGaN layer is estimated by linear interpolation of lattice parameters as estimated by the Bruker D2 Phaser x-ray diffraction (XRD) [15]. The bandgaps of the different layers were estimated using three different techniques: • UV-Vis transmittance spectrum of a 120 nm film measured using a Thermo Scientific Evolution 160. Tauc plot for the estimation of the optical bandgap was utilized [16]. • Additionally, a simple metal-semiconductor-metal (MSM) interdigitated photodetector was created to confirm the optical bandgap [17]. The detector's responsivity was measured using a Zolix photodetector measurement system. The detector covers an area of 500 × 500 µm 2 with each finger having a width and an interdigital distance of 30 µm. The contact is Au/Ti with thicknesses of 200/20 nm. • Furthermore, electron energy-loss spectroscopy (EELS) was used to measure the electronic bandgap. The EELS measurements were performed at 80 kV with a ThermoFisher USA (formerly FEI Co) Titan Themis Z (40-300 kV) transmission electron microscopy (TEM). The TEM was equipped with a double Cs (spherical aberration) corrector, a high brightness electron gun (x-FEG), an electron beam monochromator, and a Gatan Quantum 966 imaging filter (GIF). Low-loss spectra were acquired in microprobe scanning TEM (STEM) mode with about 1 mrad semi convergence angle (4 nm probe). The TEM lamella is around 100 nm thick. The spectrum was captured with resolution of 5 meV and a zero-loss peak full width at half maximum (FWHM) of 50 meV utilizing a monochromator as has been optimized by Lopatin et al [18]. Finally, the valence band and a few core level energies were measured using a Kratos AXIS Ultra XPS. The XPS data was then used to calculate VBO of BGaN with AlN and GaN using equation (2) and equation (3). Since our samples are insulating, charge correction is applied by fixing the C 1s peak at 284.8 eV binding energy [19]. Using this, the Fermi level is assumed at 0 eV binding energy. This serves as an effective reference energy for each XPS scan-it has been reported that the Fermi level is expected to be within the window of 0 ± 0.3 eV given this reference (C 1s peak) [20]. The 0.3 eV error is not in the C 1s reference, so it does not propagate to the band alignment. However, the error in this measurement comes from the electron energy analyzer's precision which is ±0.1 eV. Results and discussion The BGaN film is epitaxially grown on the AlN template and is (002)-oriented. Figure 2(a) shows the XRD 2θ-ω scan of the BGaN/AlN sample with an angular distance of 0.81 • between the BGaN and the AlN (002) peaks. This gives us an approximate B composition of 9.7%. The assumed lattice parameters are in the inset of figure 2(a), which are in agreement with literature accepted values [21,22]. 
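A minimal sketch of the Tauc extraction mentioned above for a direct-allowed transition, assuming the transmittance has already been converted to absorbance; the synthetic spectrum, film-thickness handling, and fitting window are illustrative assumptions rather than the actual measurement:

```python
import numpy as np

H_EV = 4.135667696e-15   # Planck constant (eV*s)
C_M_S = 2.99792458e8     # speed of light (m/s)

def tauc_gap(wavelength_nm, absorbance, thickness_cm, fit_window_ev):
    """Direct-gap Tauc plot: extrapolate the linear part of (alpha*h*nu)^2 vs h*nu to zero."""
    hv = H_EV * C_M_S / (np.asarray(wavelength_nm) * 1e-9)   # photon energy (eV)
    alpha = 2.303 * np.asarray(absorbance) / thickness_cm    # absorption coefficient (cm^-1)
    y = (alpha * hv) ** 2
    lo, hi = fit_window_ev
    sel = (hv >= lo) & (hv <= hi)
    slope, intercept = np.polyfit(hv[sel], y[sel], 1)
    return -intercept / slope                                # energy where the linear fit crosses zero

# Synthetic spectrum of a ~120 nm film with a ~3.55 eV direct gap, purely for illustration.
wavelength_nm = np.linspace(290, 420, 400)
hv = H_EV * C_M_S / (wavelength_nm * 1e-9)
alpha_true = np.where(hv > 3.55, 9.0e4 * np.sqrt(np.clip(hv - 3.55, 0, None)) / hv, 1.0)
absorbance = alpha_true * 120e-7 / 2.303
print(f"Tauc gap: {tauc_gap(wavelength_nm, absorbance, 120e-7, (3.65, 3.95)):.2f} eV")
```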
Both the bandgap and the band offset depend on B composition. The bandgap of our sample is first examined using several techniques. Figure 2(b) shows the Tauc plot and the photodetector responsivity curve; both give nearly the same optical bandgap for the BGaN material. The Tauc plot gives a bandgap of 3.58 eV and the photodetector peak is at 3.51 eV. The responsivity curve shows multiple adjacent near-band-edge peaks, which could result from either the presence of another phase or, more likely, point defects and impurities [10]. To investigate the phase purity of the sample, cross-sectional STEM images were taken (figure 3(a)). Consistent with earlier reports, the image shows a wurtzite material rich in stacking faults, which enable the presence of zincblende inclusions in the film [23-25]. The sample is then measured using EELS to investigate its bandgap and to aid in the interpretation of the band alignment. The EELS signal is quite noisy for the BGaN film owing to the material's quality. As a result, the measured data are smoothed using a linear Savitzky-Golay filter with a 10-point window [26,27]. The number of points is chosen such that the resolution of the spectra is not reduced below the FWHM of the ZLP (50 meV), which is taken as the measurement error. The EELS signal is extracted from an area of ~500 × 100 nm² on either side of the BGaN-AlN interface. To extract the bandgaps, the first derivative of each smoothed curve is fitted with a Gaussian peak, and the position of the peak is taken as the bandgap [26]. Due to the higher noise in the BGaN spectrum, and to avoid the influence of the near-band-edge peaks, those data are smoothed with a 100-point window before taking the derivative (not shown). The EELS bandgaps of the AlN and BGaN layers are 5.96 eV and 3.55 eV, respectively (figures 3(b) and (c)). Since the major features of the spectra are not changed by the smoothing, the error is taken to be 50 meV (equal to the FWHM of the ZLP). Finally, the band alignment of the grown BGaN film is investigated by XPS. To align BGaN with AlN, the Kraut method is used first. This method gives the clearest picture of the band alignment, so its result is later used to validate the alignment with GaN. For this method, three samples are grown: 'bulk' AlN, 'bulk' BGaN, and a BGaN/AlN interface (figure 4(a)). A fourth sample, an AlN/BGaN interface, is also grown for cross-validation. Figure 4(c) shows the measured XPS spectra and the energy-level differences for each sample. Inserting these values directly into equation (2) gives a VBO of −1.1 eV. The error here is taken as the precision of the XPS instrument, which is ±0.1 eV. The cross-validation sample gives a VBO of −1.2 eV, within the error of the measurement; since the cross-validation sample has lower material quality, −1.1 eV is taken as the VBO. The alignment of BGaN and AlN is relatively straightforward once the growth conditions of each sample are optimized. However, the alignment of BGaN with GaN, as in the case of our sample in figure 4(b), is not easily achieved using the Kraut method, because the core levels in the interface sample belong to the same atoms (Ga and N). The only core level that could distinguish BGaN from GaN is that of the B atom, while the Si doping within the GaN layer is below the detection limit of the XPS instrument; it is therefore difficult to align this interface according to equation (2) (figures 1(b) and (c)).
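A minimal sketch of the EELS bandgap extraction described above (Savitzky-Golay smoothing, first derivative, Gaussian fit of the derivative peak). The spectrum is mock data, and an 11-point window stands in for the 10-point smoothing since the filter requires an odd window length.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.optimize import curve_fit

# Mock low-loss EELS spectrum: energy loss (eV) vs noisy intensity.
E = np.linspace(2.0, 5.0, 600)
rng = np.random.default_rng(0)
spec = 1.0 / (1.0 + np.exp(-(E - 3.55) / 0.05)) + 0.05 * rng.normal(size=E.size)

# Linear (polyorder=1) Savitzky-Golay smoothing, as described above.
smooth = savgol_filter(spec, window_length=11, polyorder=1)

# Bandgap taken as the peak of the first derivative, located by a Gaussian fit.
dI = np.gradient(smooth, E)

def gauss(x, a, mu, s):
    return a * np.exp(-((x - mu) ** 2) / (2 * s ** 2))

p0 = [dI.max(), E[np.argmax(dI)], 0.1]
popt, _ = curve_fit(gauss, E, dI, p0=p0)
print(f"EELS bandgap ~ {popt[1]:.2f} eV")
```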
A similar issue led Mickevičius et al to oversimplify the Kraut method and use an equation that may seem identical to equation (3) at first glance [9]. What that approach misses, however, is that the measured VBM are referenced to different Fermi levels, since only 'bulk' samples were measured (figure 1(b)). Therefore, in this study we perform the measurement only on interface samples, so that the energies are properly referenced. This makes the alignment of GaN with BGaN on its surface analogous to the alignment of Si with its native SiO2 [7,8]. The approach is relatively easy to apply to a heterojunction between materials with different bandgaps and valence-band positions, even in the absence of distinct core levels for each material. However, it aligns the heterojunction at the interface rather than between the bulk materials (equation (3)), which means that, unlike the Kraut method, it does not account for band bending due to interface states and polarization. Furthermore, the method is not straightforward when the bandgaps and VBM of the two materials are close, which is unfortunately the case for BGaN and GaN. Therefore, to resolve the BGaN/GaN VBO, we utilize a novel statistical technique to process the low-binding-energy XPS data (equation (3)); see the supplementary material for an analysis of the robustness of this statistical method. Finally, a flat band model of the alignment of B0.097Ga0.903N with AlN and GaN is constructed in figure 5(b) to summarize the band structure results. The results show a type-I alignment for the BGaN/AlN heterojunction and a type-II alignment for the BGaN/GaN junction. These results agree qualitatively with the density-functional theory (DFT) results of Ota et al, but differ from the DFT results of Al Sulami et al [28,29]. Table 1 summarizes the comparison between our experimental results and these two DFT studies (the GaN/AlN offset is inferred from the two measurements with BGaN; the theoretical results are for a BGaN composition of 12.5%, slightly different from the composition studied here). Additionally, the flat band representation allows us to infer the alignment of the more-studied material pair GaN/AlN. As table 1 and figure 5(b) show, the inferred VBO and CBO are 0.8 eV and 1.8 eV, which almost perfectly matches the commonly cited 30:70 VBO:CBO ratio for the AlGaN material system [30,31].

Conclusion

We have studied the band structure of BGaN in detail using several characterization methods. A B composition of 9.7% was estimated from the peak positions of the XRD scan. The bandgap was studied using UV-Vis transmittance, MSM photodetector responsivity and EELS spectra; the values from all methods agree with the EELS value of 3.55 ± 0.05 eV. Using STEM, the material is found to be wurtzite with stacking faults along the vertical (002) direction, which enable the presence of the zincblende phase within the crystal. Finally, the band alignments of BGaN with AlN and GaN were studied in detail. The BGaN/AlN alignment is straightforward using the traditional XPS alignment technique. However, due to the similarity of the BGaN and GaN core levels and their close valence edges, a novel statistical technique was used for the BGaN/GaN alignment. This technique is found to give a precision similar to that of the XPS electron energy analyzer, and we believe it could give even better precision with higher-resolution measurements.
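The flat-band bookkeeping behind figure 5(b) and table 1 can be sketched as follows. The GaN bandgap (3.40 eV) and the BGaN/GaN VBO used below are assumed illustrative values consistent with the inferred GaN/AlN offsets; they are not quoted from the text.

```python
# Measured inputs (eV): EELS bandgaps and the BGaN/AlN VBO from the text.
Eg_BGaN, Eg_AlN = 3.55, 5.96
Eg_GaN = 3.40            # assumed literature value, not quoted in the text
vbo_BGaN_AlN = 1.1       # |VBO| of BGaN/AlN from the Kraut analysis
vbo_BGaN_GaN = 0.3       # illustrative |VBO| consistent with the inferred offsets

# Flat-band model: the conduction-band offset follows from CBO = dEg - VBO.
cbo_BGaN_AlN = (Eg_AlN - Eg_BGaN) - vbo_BGaN_AlN          # ~1.3 eV, type-I

# Transitivity: chaining the two BGaN alignments infers the GaN/AlN offsets.
vbo_GaN_AlN = vbo_BGaN_AlN - vbo_BGaN_GaN                 # ~0.8 eV
cbo_GaN_AlN = (Eg_AlN - Eg_GaN) - vbo_GaN_AlN             # ~1.8 eV

print(f"BGaN/AlN CBO ~ {cbo_BGaN_AlN:.2f} eV")
print(f"Inferred GaN/AlN VBO ~ {vbo_GaN_AlN:.1f} eV, CBO ~ {cbo_GaN_AlN:.1f} eV")
```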
Additionally, this method should be applicable to other III-nitride alloys and other material systems.

Data availability statement

All data that support the findings of this study are included within the article (and any supplementary files).
Public and Legal Burdens on Cooperative Banks and Limits of the Financial Strength of Taxes

The paper deals with the problems of the public and legal burdens of cooperative banks operating in Poland, in terms of the limits of their financial strength to bear taxes (various types of mandatory fees and levies paid to the state). Comparative public and legal burdens of banks are discussed in the literature. As the main research goal, the authors chose to examine the situation of cooperative banks in the context of increasing tax and legal burdens. The article uses the desk research method, a comparative method and a quantitative analysis of cooperative banks' tax burden to calculate the authors' tax restrictiveness index. The authors proposed their own definition of the tribute paid by cooperative banks to the state and the institutions subordinate to it.

Introduction

The over-regulation and increase in the public and legal burdens of the banking sector after the recent financial crisis also affected cooperative banks in Poland. Undoubtedly, the scale of their activity, their social mission and the specificity of cooperative banks raise the question of whether local cooperative banking should be burdened by the state to the same degree, against the principle of proportionality applied in the European Union. José Manuel Barroso, former EC president, stated: "Cooperative banks that have remained faithful to cooperative values and principles and cooperative banks, which rely on members' funds and are controlled by local communities, were able to resist the crisis very well" (Miklaszewska, 2015). However, the wide scope of regulation of the sector and the increase in public and legal burdens encompassed those banks equally. Nevertheless, it is known that small local banks cannot achieve a higher return on equity than large commercial banks, because their costs are simply higher. The main purpose of the article is to present a methodology for examining public and legal burdens in cooperative banks and to analyze the limit of financial strength for these burdens (obligatory fees) on the example of 309 banks, participants of the SOZ BPS system (Dec & Masiukiewicz, 2018; Pasternak-Malicka, 2017). The main goal of the paper is to present the situation of cooperative banks in the context of increasing tax and legal burdens. The thesis is as follows: the public-legal burdens of Polish cooperative banks have grown increasingly high in recent years, approaching the limit of the financial strength of taxes for these banks. The authors carried out a quantitative analysis of financial data obtained from several hundred cooperative banks in Poland and proposed an original methodology for calculating the financial strength ratio of tributes in the studied group of cooperative banks. It is worth mentioning that if banking activity is accepted as a public good, then the state should protect this sector (Masiukiewicz, 2015b). This indeed happened during the subprime crisis in the EU and the U.S.
The sector's burdens (tributes) should be sustainable in relation to the imposed regulatory obligations and tasks, including supporting the country's economy (Kalicki, 2019; Żółtkowski, 2016). Thus, under the concept of public-legal burdens, or a kind of tribute, the authors understand burdens broader than taxes alone, i.e., also those resulting from statutory regulations for banks, e.g., contributions to the BFG, contributions from cooperative banks to the SOZ (Association Protection System; Institutional Protection Scheme, IPS) (Dec & Masiukiewicz, 2018), fees to the Polish Financial Supervision Authority, and others (Figure 1). Remarkably, this definition does not include social insurance and health protection contributions; it was assumed that these are equivalent burdens, closely related to pay, which guarantee a specific set of services in return. On this basis, the concept of the financial strength of tributes incurred by banks was introduced. The capital adequacy ratio and the basic tax burden of the banking sector in Poland, including cooperative banks, are presented below. The total capital ratio of banks in 2018 was at a good level (18.9%); in commercial banks it amounted to 19.0% and in cooperative banks to 17.1%. Further growth of the business, however, requires capital growth; capital increased in 2018 compared with 2017 by PLN 2.5 billion, or 1.3% (Pietraszkiewicz, 2019). Generally, the public and legal burdens of Polish banks (at the level of 38%) are much higher than in Western Europe (29%) and the US (above 22%), which cannot be justified by the subprime crisis, which, after all, was not present in Poland (Pietraszkiewicz, 2019). What, then, is the justification, if the ratio of non-performing loans is normal and within the required values? Curiously, the effects of the new regulations on CHF loans and of the MREL directive on bank costs will be significant, although cooperative banks will be affected only slightly (Table 1). Summing up, recent years have seen a tendency towards broad regulation of the banking sector and the introduction of new public-law burdens.

Effectiveness of cooperative banks and public and legal burdens

At the end of January 2019, there were 547 cooperative banks operating: 347 associated in BPS SA (308 participating in IPS BPS), 198 associated in SGB-Bank SA (196 participating in IPS SGB) and 2 banks operating independently (KNF, 2019).
It is crucial to note that the net financial result of cooperative banks decreased by 12.1% compared to September 2017 (to PLN 544.7 million). A net loss totalling PLN 52.9 million was incurred by 7 banks. Own funds of cooperative banks increased in the three quarters of 2018 by 5.4% to PLN 12.1 billion (in the entire banking sector, own funds increased by 10.7%). The total capital ratio of cooperative banks amounted to 17.6% (against 17.2% in December 2017), while the Tier I ratio amounted to 16.7%, compared with 16.3% in December 2017 (Table 2). The growth in total operating income (CPO) in cooperative banks has fallen below the growth rate of costs, and in the case of an economic slowdown this disproportion may deepen. As far as we know, keeping cooperative banks competitive in the market requires new outlays, among others for IT (Kata, 2016a; Kozłowski, 2015; Kurkliński & Miklaszewska, 2018). Cooperative banks incur a number of public and legal burdens, both taxes and para-taxes, mainly included in operating costs (Owsiak, 2013). Significant burdens are the payments to the BFG and the fees for the SOZ of the cooperative bank associations; however, the payments to the BFG guaranteed fund bear interest and, as long as they are not paid out to the clients of a bankrupt bank, they cannot be treated as a current quasi-tax. Significantly, the bank tax covers neither small banks and credit unions nor banks implementing remediation programs, which should be assessed positively (Dec & Masiukiewicz, 2013). However, a few large banks had to pay the tax. Other generally applicable taxes are the tax on civil law transactions and the real estate tax (financial services are exempt from VAT, but all banks pay approximately PLN 2 billion of VAT on purchases each year). In addition, banks bear specific burdens, such as reserve requirements at the NBP or, for example, a one-off payment to the borrowers' aid fund. The optional statutory fees are contributions to the audit association and to the chamber of commerce. Cooperative banks thus pay a small amount of VAT (despite the general tax exemption) in connection with purchases, as illustrated by the example of the Cooperative Bank in Piaseczno (Table 3). A specific statutory legal burden for cooperative banks was the depreciation of members' shares; the regulations in this area have since been modified. In general, the burdens of cooperative banks have increased in recent years. According to Szołno-Koguc (2016), justice is the basis for the construction of a tax system, and the natural limits of taxation are, on the one hand, the state's financial needs and, on the other hand, taxpayers' payment capacities.

The level of burdens and the financial strength of the taxes of cooperative banks

Laffer's theorem depicts the repayment of public obligations as a curve with a normal distribution (Trabandt & Uhlig, 2011); revenues from ever-rising taxes approach zero asymptotically. The Laffer curve is not just an economic law: it can also be proved mathematically by Rolle's theorem. An important drawback of the Laffer effect is the passage of time, because the reactions of individual economies to tax increases occur over a longer time horizon, during which other factors may influence the state budget's tax revenues (Gomułowicz & Małecki, 2013).
Contemporary systems of public-legal burdens (levies) consist not only of taxes but also of para-taxes; this applies in particular to financial institutions (Dec & Masiukiewicz, 2013). Hence the need for a modified model for analysing these burdens. For cooperative banks, the model of the financial strength of tributes can be built as below. The model introduces a category of operating income before the payment of public and legal levies (DOPD). Assuming the normal distribution of the Laffer curve, it can be concluded that a WD ratio above 50% signals excess charges and thus a decrease in financial strength:

WD = (PD + SKNF + SBFG + OSOZ + PN + OFWK + PB) / DOPD × 100%,
DOPD = ZO + PD + SKNF + SBFG + OSOZ + PN + OFWK + PB,

where: WD - tax restrictiveness (DOPD burden) index; DOPD - operating income before the payment of public and legal levies; ZO - operating profit (before PD); PD - income tax; SKNF - payment to the Polish Financial Supervision Authority; SBFG - contributions and fees for the Bank Guarantee Fund (including the Stabilization Fund); OSOZ - payment for the costs of the protection system of the association of cooperative banks; PN - real estate tax; OFWK - payment for the borrowers' support fund; PB - bank tax (in principle, most cooperative banks and credit unions are exempt).

In the area of payments to the BFG, significant changes have taken place since 2018. Contributions and fees paid to the BFG in 2018 comprised payments to the Bank Guarantee Fund, payments to the Compulsory Fund for Restructuring Banks and other payments to the BFG (the annual contribution for costs and the Stabilization Fund), i.e., excluding funds for the FOŚG (these remain at the disposal of banks, bear interest, and are paid out only incidentally, i.e., in the event of a bank's bankruptcy). In the analyzed years, cooperative banks did not pay contributions to the borrowers' support fund. The fee for SOZ BPS applies only to the cost of maintaining the system and does not include payments to the interest-bearing liquidity fund of the affiliated banks. In fact, this fee should be reduced by the estimated cost savings achieved in banks as a result of the takeover of the audit and control function by SOZ BPS. For the purposes of calculating the financial strength ratio WD, data of the cooperative banks of the BPS SA Association for the last three years were adopted, i.e., those that created the association protection system (Table 4). Applying the above formula yields WD, the estimated financial strength index of tributes in the studied group of cooperative banks. The results are as follows:

• WD (2016) = 35.3%
• WD (2017) = 34.6%
• WD (2018) = 35.7%

The limit of the financial strength of the taxes of cooperative banks is getting closer (WD is trending upward), and conclusions should be drawn from this fact.
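As a minimal illustration, the Python sketch below computes the WD index as defined above; the input figures are hypothetical aggregates in PLN million, not the Table 4 data.

```python
def wd_index(zo, pd, sknf, sbfg, osoz, pn, ofwk=0.0, pb=0.0):
    """Tax restrictiveness (DOPD burden) index: share of public-legal levies
    in operating income before levies (DOPD). Values above 50% signal
    excess burdens according to the model above."""
    levies = pd + sknf + sbfg + osoz + pn + ofwk + pb
    dopd = zo + levies              # operating income before levies
    return 100.0 * levies / dopd

# Hypothetical aggregate figures in PLN million (not the paper's data).
print(f"WD = {wd_index(zo=540, pd=120, sknf=15, sbfg=110, osoz=30, pn=25):.1f}%")
```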
Major tax conditions in the future

Foresight analyses indicate that Polish cooperative banks may be increasingly burdened with public and legal liabilities. The most important problems in this area are presented below (Masiukiewicz, 2015a; Jaka przyszłość czeka…, 2016; Kalicki, 2019).

1. New, increased requirements regarding the capital adequacy ratio in the EU. Some cooperative banks have not yet achieved the higher (by 50%) ratio, which should be achieved in 2019.
2. An increase in fees for the BFG, announced at the 2019 Banking Forum of the Polish Bank Association. For cooperative banks, the payment of some of the fees to the BFG should be maintained (or even increased) in connection with their having their own security systems, i.e., the Protection Systems of Associations, which generate costs (Ustawa z dnia 10 czerwca 2016 r., 2016; Obwieszczenie Marszałka Sejmu…, 2018).
3. There is a need to increase the equity of cooperative banks, related to the development of operations, including the costs of new information technologies and digital security; this is a condition for maintaining competitiveness (Kozłowski, 2015; Kurkliński, 2017). Cooperative banks are also charged with the obligation to depreciate members' shares as a result of the provisions of the EU directive, which is assessed as contradictory to the principles of cooperatives (Czternasty, 2015). On the other hand, a lack of, or low, dividend payments also causes a decrease in the number of shareholders in cooperative banks, which is an unfavorable trend.
4. There is a possibility of incurring new fees for the benefit fund for Swiss franc loans in connection with the work on the Act in this respect.
5. There is a likelihood of introducing a cadastral tax; however, the risk of increased financial costs for banks operating in their own facilities is rather small.
6. The new tax ordinance draft of 6 October 2017 contains 687 articles, to which the codifying committee has submitted 511 pages of justification (Prace Komisji Kodyfikacyjnej…, 2019). To this one should add about 140 further changes, which will have to be introduced on the basis of about 40 ordinances. With so much legal uncertainty, resulting primarily from discrepancies in interpretation by the tax authorities themselves and from continuous changes in tax regulations, the critical conclusions from surveys conducted among entrepreneurs by the NBP should be taken seriously. A complicated and unfriendly fiscal system may in the future be not only a brake on development but also a factor in the degradation of business entities (Podatki i niepewność prawa…, 2019).

Figure 1. Types of public and legal burdens of cooperative banks. Source: Authors' own study.
Table 1. Effects of new regulations on CHF loans and the MREL directive; cooperative banks and other commercial banks are included in the group of banks with limited development potential. Source: (Pietraszkiewicz, 2019).
Table 3. VAT payments by the Cooperative Bank in Piaseczno. Source: Data of the Cooperative Bank in Piaseczno.
Table 4. Financial strength ratio of cooperative banks' taxes - participants of SOZ BPS in 2016-2018. Source: Authors' own study based on data from the BPS SA Head Office and SOZ BPS.
Highly gate-tuneable Rashba spin-orbit interaction in a gate-all-around InAs nanowire metal-oxide-semiconductor field-effect transistor

III-V semiconductors have been intensively studied with the goal of realizing metal-oxide-semiconductor field-effect transistors (MOSFETs) with high mobility, a high on-off ratio, and low power consumption as next-generation transistors designed to replace current Si technology. Among these semiconductors, the narrow-bandgap semiconductor InAs has a strong Rashba spin-orbit interaction, making it advantageous in terms of both high field-effect transistor (FET) performance and efficient spin control. Here we report a high-performance InAs nanowire MOSFET with a gate-all-around (GAA) structure, in which we simultaneously control the spin precession using the Rashba interaction. Our FET has a high on-off ratio (10^4-10^6) and a high field-effect mobility (1200 cm²/Vs), both comparable to those of previously reported nanowire FETs. Simultaneously, the GAA geometry combined with a high-κ dielectric enables the creation of a large and uniform coaxial electric field (>10^7 V/m), thereby achieving highly controllable Rashba coupling (1 × 10^−11 eVm within a gate-voltage swing of 1 V), i.e., an operation voltage one order of magnitude smaller than those of back-gated nanowire MOSFETs. Our demonstration of high FET performance and spin controllability offers a new way of realizing low-power-consumption nanoscale spin MOSFETs.

The high electron mobility of III-V semiconductors makes them good candidates for the development of field-effect transistors that can be operated with high speed, a high on-off ratio, and low power consumption. Among these semiconductors, those showing band structures with large spin-orbit splitting have independently attracted great interest in relation to spin FET applications 1. The large band splitting is mostly associated with the Rashba spin-orbit interaction (SOI) generated by an electric field induced by structural inversion asymmetry. The Rashba SOI is given by the Hamiltonian H = eα0 E·(σ × k), where e is the elementary charge, α0 is a Rashba coefficient determined by the band structure of the bulk material, σ is the vector of Pauli matrices, k is the electron wave vector and E is the electric field vector [2-4]. The Rashba coupling parameter, given by α ≡ α0 eE, is an important index as a measure of spin modulation, and increasing and controlling α with the gate voltage has been a focus of attention. Here, we report high gate-tunability of the Rashba SOI in an InAs nanowire MOSFET employing gate-all-around (GAA) geometry 19, in which the gate-induced electric field is larger and more uniform than in conventional bottom- or top-gated nanowire devices [13-16], multigated nanowires 18, and Ω-shape (partially coaxial) gated devices [20-22]. The Rashba parameter obtained by our weak antilocalization measurements is 0.6 × 10^−11 to 2 × 10^−11 eVm, and the gate-voltage tunability is 1.2 × 10^−11 to 2.4 × 10^−11 eVm/V, the latter being ten times larger than that obtained for various types of III-V semiconductors including InAs nanowire MOSFETs [6-16]. This is also comparable to the best V_g tunability achieved for an ion-gated InAs nanowire FET 17. In addition to the excellent V_g tunability of the Rashba SOI, our device exhibits excellent FET characteristics, including a high on-off ratio (10^4-10^6) and a high field-effect mobility (1200 cm²/Vs).
As MOSFETs have faster responses than ion-gated devices, which normally require considerable time for electric-double-layer stabilization 23, our demonstration of both excellent FET performance and high tunability of the Rashba SOI within a small V_g range could lead to a practical low-power spin nanowire MOSFET compatible with the currently used Si transistor platform. Figure 1(a) is a schematic illustration of our GAA InAs nanowire FET, which we fabricated using a method similar to that used for our previous nanowire FETs 24,25. GAA geometry, also called surrounding-gate 26 or wrap-gate 27 geometry, has been used not only to induce a uniform electric field but also to suppress the short-channel effect of transistors 28, with an improvement in nanowire FET performance 29,30. To obtain a high carrier density and thus induce a strong electric field, we used the high-κ gate dielectrics Al2O3/HfO2 (2 nm/4 nm) grown by atomic layer deposition (ALD). The InAs nanowire coated with these dielectrics was deposited on a pre-patterned substrate, and gate metal was then evaporated onto the nanowire. This two-stage deposition of gate metal allows us to fabricate a GAA structure. As shown in Fig. 1(b), our sample is covered by the gate electrode over 90% of the channel length, which allows us to ignore the contributions of the ungated regions (for details see Method). Figure 1(c) shows a TEM image of a cross-section of a typical nanowire FET, confirming that the layered gate dielectrics and GAA geometry are formed according to our MOSFET design. These structures were also examined with energy-dispersive X-ray spectrometry (EDS). The false-colour images in Fig. 1(d-i) rule out any significant migration or diffusion of the deposited elements, or contamination during device processing, along the entire channel.

Results

We first describe the FET operation at various temperatures. Figure 2(a) shows the transfer characteristics of the device measured at room temperature for source-drain voltages V_sd of 100 to 500 mV. As shown in the inset, the subthreshold slope (SS) and on-off ratio are 350 mV/dec and over 10^4 at room temperature (RT). Here SS is defined as dV_g/d(log I_sd), with I_sd the source-drain current. While the SS values of our typical devices fabricated in the same manner usually exceed 200 mV/dec at RT, which is larger than the ideal RT limit of 60 mV/dec, the on-off ratio shows good performance and is generally higher than ~10^4. When we decrease the measurement temperature to 1.5 K, the SS and on-off ratio improve greatly, to 25 mV/dec and 10^6, respectively, as shown in Fig. 2(c). The high on-off ratios at RT and 1.5 K are comparable to the excellent values previously reported for GAA InAs nanowires 24,25,31-33 and GAA InGaAs nanowires 34. Moreover, the steep increase in I_sd within V_g ~ 1 V indicates that our GAA device operates at a lower voltage than conventional back-gated nanowire FETs with cylinder-on-plane (COP) geometry [13-16]. Figure 2(b,d) shows the output characteristics for various V_g values measured at RT and 1.5 K; good saturation is obtained within a V_sd of 0.5 V.
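A minimal sketch of the SS extraction defined above; the transfer curve is an idealized placeholder with a 350 mV/dec subthreshold region, not measured data.

```python
import numpy as np

# Mock subthreshold transfer curve: I_sd rises one decade per 350 mV.
Vg = np.linspace(0.0, 1.0, 201)
Isd = 1e-12 * 10 ** (Vg / 0.35)       # source-drain current (A)

# SS = dV_g / d(log10 I_sd), evaluated over the subthreshold region.
ss = np.gradient(Vg, np.log10(Isd))
print(f"SS ~ {ss.mean() * 1e3:.0f} mV/dec")
```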
To investigate how robust our FET is under ambient conditions, we compared the same device in different measurement runs. Figure 3(a) compares the device transfer characteristics measured before the first cooling with those measured after 6 months, during which time the sample was stored in ambient air when not in use. Although a reduction in I_sd is accompanied by a reduction in the on-off ratio from 2 × 10^4 to 1 × 10^4, we observe no notable change in SS values between the two cases. Moreover, our GAA device shows robust and clear transfer characteristics at various temperatures down to 1.5 K [Fig. 3(b)] after the 6-month interval. We note that the data shown in Fig. 2(a-d) were measured after several cooling cycles, indicating that our FET performs well even after being subjected to thermal cycling and ambient conditions. We next compare the field-effect mobility μ for the two cases and examine its temperature dependence. μ is given by μ = g_m L_g²/(C V_sd), with transconductance g_m = dI_sd/dV_g, where L_g is the gate length of 3.3 μm, C is the gate capacitance of 2.29 × 10^−14 F, and V_sd is the source-drain bias. The sample in the first thermal cycle shows mobilities of 1000 cm²/Vs at RT and 1200 cm²/Vs at 1.5 K, as shown in Fig. 3(c). Our device shows less temperature dependence than other InAs nanowire GAA devices 32,33. Indeed, many of our GAA devices possess mobilities of 1000-1500 cm²/Vs at room temperature. This value is one order of magnitude higher than that of a previously reported InAs GAA device using HfO2 gate dielectrics (~109 cm²/Vs) 31, and comparable to a single-crystalline, pure-phase InAs nanowire with GAA geometry 33 (1500 cm²/Vs) and to high-mobility InGaAs nanowire FETs (1030 cm²/Vs) 34. However, after several thermal cycles and long storage under ambient conditions, the mobility decreased to around 400 cm²/Vs, which is nevertheless higher than the mobility of a high-κ gated MoS2 2D transistor 35 or a Si nanowire FET 36. The decreased mobility may be attributed to an increase in access resistance arising from the nanowire segment not coated by the gate metal, possibly due to impurities adhering to that segment during repeated thermal cycles or sample storage. The decrease is therefore merely in the extrinsic mobility, not the intrinsic one. This is also supported by the fact that the SS after 6 months, which shows the linear temperature dependence characteristic of standard FETs [Fig. 3(d)], is not notably different from the SS of the first cooling from room temperature down to 1.5 K, indicating that the surface states of the nanowire under the gate electrode are unaffected. In this paper, we use data obtained when the sample had a field-effect mobility of ~400 cm²/Vs unless otherwise stated. We emphasize, however, that the gate efficiency on the nanowire channel did not degrade over the 6 months, as seen from the virtually unchanged SS values. This is also consistent with the magnetotransport measurements discussed later, which confirm that the gate controllability of the Rashba parameter was not degraded after 6 months.
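A minimal sketch of this field-effect mobility extraction; the transfer curve is mock data, while L_g, C and V_sd follow the values quoted above.

```python
import numpy as np

# Mock transfer curve (not the paper's measurements).
Vg = np.linspace(0.0, 1.5, 151)                   # gate voltage (V)
Isd = 10e-6 / (1 + np.exp(-(Vg - 0.6) / 0.1))     # source-drain current (A)

Lg = 3.3e-6      # gate length (m), from the text
C = 2.29e-14     # gate capacitance (F), from the text
Vsd = 0.1        # source-drain bias (V)

gm = np.gradient(Isd, Vg)                 # transconductance dI_sd/dV_g (S)
mu = gm * Lg**2 / (C * Vsd)               # field-effect mobility (m^2/Vs)
print(f"peak mu_FE ~ {mu.max() * 1e4:.0f} cm^2/Vs")   # ~1200 cm^2/Vs here
```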
Having examined the FET performance of our device, we then investigated the effects of the spin-orbit interaction by conducting magnetotransport measurements at 1.5 K. Figure 4(a) shows the magnetoconductance correction (ΔG ≡ G(B) − G(0)) as a function of the magnetic field B, where the magnetoconductance was deduced from two-terminal dc transport at V_sd = 10 mV. The data have been smoothed over V_g ± 15 mV and B ± 15 mT to exclude universal conductance fluctuations and other random fluctuations caused by impurities, as in refs 14, 16. In addition, the data are averaged over the two magnetic-field sweep directions to improve the accuracy of the fits described below. As V_g increases, the B dependence of ΔG changes from a dip to a peak, indicating a crossover from weak localization to weak antilocalization 37,38, which occurs for conducting channels in a variety of materials and devices 9,39,40 in the presence of a strong spin-orbit interaction. Such a crossover from weak localization to weak antilocalization has also been observed for various types of InAs FETs [12-17], where the spin-orbit interaction is considered to be the Rashba SOI originating from a strong electric field. These devices have a mean free path shorter than the nanowire diameter, so the electrical channel in a nanowire can reasonably be analysed in the framework of the disordered one-dimensional weak antilocalization model of ref. 38:

ΔG = −(2e²/(h L_g)) [ (3/2)(1/l_φ² + 4/(3 l_so²) + 1/(D τ_B))^(−1/2) − (1/2)(1/l_φ² + 1/(D τ_B))^(−1/2) ],    (1)

where h is the Planck constant, L_g is the gate length, l_φ is the phase coherence length, l_so is the spin-orbit relaxation length, D is the diffusion constant, and τ_B is the magnetic relaxation time. Here τ_B is given by τ_B = 3 l_B⁴/(W² D), with the magnetic length l_B = (ħ/eB)^(1/2). Note that using this relation reduces the fitting parameters to only l_so and l_φ. Our device has a typical mean free path of 12 nm, smaller than the nanowire diameter of 100 nm. Therefore, the use of Eq. (1) is justified, as shown by the solid lines in Fig. 4(a), which fit our data well. l_so and l_φ are shown in Fig. 4(b), together with τ_so and τ_φ, deduced from τ_so (τ_φ) = l_so (l_φ)²/D with the diffusion constant D = v_F² τ/3. Here v_F is the Fermi velocity and τ is the momentum scattering time, given by τ = μm*/e (m*: effective electron mass) with m* = 0.023 m_e (m_e: free electron mass). We also note that l_φ > W, which is required for the one-dimensional weak antilocalization condition, is satisfied, as shown in Fig. 4(b). As V_g increases, l_so decreases and l_φ increases, crossing at V_g ~ 0.5 V; this corresponds to the gate voltage at which the crossover from weak localization to weak antilocalization occurs. The decreasing l_so, accompanied by a rapid decrease in τ_so, demonstrates that the spin-orbit relaxation length is tuned significantly by the electric field induced by the gate voltage.
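The fitting procedure can be sketched as follows, with l_so and l_φ as the only free parameters; the prefactors follow the Kurdak-type form assumed in equation (1) above, and the 'data' are generated from the model itself rather than taken from measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

e, h, hbar = 1.602e-19, 6.626e-34, 1.055e-34
Lg, W = 3.3e-6, 100e-9            # gate length and wire diameter from the text

def wal(B, l_so, l_phi):
    """Quasi-1D WAL correction, equation (1); D cancels out of 1/(D*tau_B)."""
    B = np.clip(np.abs(B), 1e-4, None)        # avoid the B = 0 singularity
    lB2 = hbar / (e * B)                      # squared magnetic length
    inv_DtB = W**2 / (3.0 * lB2**2)           # 1/(D*tau_B), tau_B = 3 lB^4/(W^2 D)
    t1 = (1/l_phi**2 + 4/(3*l_so**2) + inv_DtB) ** -0.5
    t2 = (1/l_phi**2 + inv_DtB) ** -0.5
    return -(2 * e**2 / (h * Lg)) * (1.5 * t1 - 0.5 * t2)

def model(B, l_so, l_phi):
    # The plotted quantity is G(B) - G(0), so subtract the zero-field value.
    return wal(B, l_so, l_phi) - wal(0.0, l_so, l_phi)

# Mock data: model plus noise, for l_so = 150 nm and l_phi = 300 nm.
B = np.linspace(-0.5, 0.5, 201)
rng = np.random.default_rng(1)
data = model(B, 150e-9, 300e-9) + 2e-8 * rng.normal(size=B.size)

popt, _ = curve_fit(model, B, data, p0=[100e-9, 200e-9])
print(f"l_so ~ {popt[0]*1e9:.0f} nm, l_phi ~ {popt[1]*1e9:.0f} nm")
```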
Discussion

We now compare the V_g tunability of l_so obtained for our device with values already reported for other InAs nanowire FETs [13-18]. As is clearly seen in Fig. 5(a), where l_so is plotted against V_g, our GAA MOS-type device shows superior V_g tunability: l_so is modulated several-fold over a V_g range an order of magnitude smaller than that used to operate back- or top-gated (cylinder-on-plane) InAs nanowires [13-16],18, indicating that our GAA MOSFET can offer much lower power consumption than conventional nanowire MOSFETs. The tunability of our device also reaches a level comparable to the best previously reported controllability, obtained for an InAs nanowire device operated with electrolyte gating 17. It is noteworthy that such high V_g tunability is achieved in a MOSFET, which has the advantage of easier and faster operation than ion-gated devices, particularly in temperature-variable measurements: ion-gated devices typically require the temperature to be raised in order to change the carrier density through ion polarization 41, which itself takes a long time to stabilize 23. Such devices sometimes require more than ten hours per temperature change to minimize electrochemical degradation of the sample 42. Using the experimentally extracted l_so, we calculated the Rashba coupling parameter α_R and the corresponding electric field E_R. Here α_R is given by

α_R = ħ²/(2 m* l_so),  with E_R = α_R/(e α0),

where ħ is the reduced Planck constant and α0 is the Rashba coefficient of bulk InAs, α0 = 1.17 nm² (ref. 43). Figure 5(b) shows α_R and E_R as a function of V_g. The red and blue circles indicate data obtained from the first cooling [with a mobility of 1200 cm²/Vs, as shown in Fig. 3(a,c,d)] and from the cooling carried out after the 6-month interval [with a mobility of 400 cm²/Vs, as shown in Fig. 3(a-d)]. Despite the long interval and the difference in mobility, the α_R and E_R values obtained from the two measurements agree well. When V_g is increased above the threshold voltage V_th, α_R and E_R increase linearly, as expected. The rapid increase in α_R up to V_g ~ 1.5 V gives a Rashba parameter tunability reaching 1.2 × 10^−11 eVm/V. Figure 5(b) also summarizes the V_g tunability of the Rashba SOI extracted from various devices, comparing our device with an ion-gated InAs nanowire device 17, a back-gated cylinder-on-plane InAs nanowire 13, and two-dimensional FETs fabricated from strong-SOI materials 7,8. Here α_R is estimated by analysing the crossover from weak localization to weak antilocalization for the nanowire devices, and is extracted from beating patterns in magnetotransport for the two-dimensional FETs. While the V_g tunabilities of α_R and E_R for our sample are about a quarter of their counterparts for the ion-gated device 17, they greatly exceed the values obtained for a conventional back-gated cylinder-on-plane InAs nanowire MOSFET 13 as well as those obtained for two-dimensional FETs fabricated from III-V materials 7,8.

Figure 5. (a) Comparison of the V_g dependence of l_so in our device and in previously reported InAs nanowire devices, categorized as GAA geometry and back- and/or top-gate (cylinder-on-plane) geometry. (b) Rashba parameter α_R and associated electric field E_R plotted as a function of V_g for our GAA InAs nanowire MOSFET, an InAs nanowire device using electrolyte 17, an InAs nanowire device using a backgate with cylinder-on-plane (COP) geometry 13, an InGaAs QW 8, and an InAs 2DEG used to develop a spin FET 7. Data shown with red and blue symbols were obtained from the first cooling and from the cooling after the 6-month interval in Fig. 3. (c) E_R as a function of E_L for our device and for that in ref. 17. (d) E_R to E_L ratio as a function of V_g for our device and for that in ref. 17.
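A minimal numerical sketch of this conversion from l_so to α_R and E_R, using the relations above; the l_so value is an illustrative number within the fitted range, and the α_R(l_so) prefactor follows the form assumed in the reconstruction above.

```python
hbar, e, m_e = 1.055e-34, 1.602e-19, 9.109e-31
m_eff = 0.023 * m_e          # InAs effective mass, from the text
alpha0 = 1.17e-18            # bulk InAs Rashba coefficient, 1.17 nm^2 (m^2)

l_so = 150e-9                # example spin-orbit length from the WAL fit (m)
alpha_R = hbar**2 / (2 * m_eff * l_so)     # Rashba parameter (J*m)
E_R = alpha_R / (e * alpha0)               # field, from alpha = alpha0*e*E

print(f"alpha_R ~ {alpha_R / e:.2e} eV*m") # ~1e-11 eV*m, as reported
print(f"E_R ~ {E_R:.2e} V/m")              # order 1e7 V/m
```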
We further investigate the ratio between the calculated electric field E_L expected from the GAA geometry and the value E_R directly associated with the Rashba SOI. In the cylinder capacitance model, the charge line density Q_L and the associated electric field E_L are given by

Q_L = C (V_g − V_fb)/L_g,   E_L = Q_L/(2π ε0 ε_InAs r), evaluated at the nanowire surface (r = W/2),

where C is the cylindrical gate capacitance (see Method), V_fb is the gate voltage giving the flat-band condition, W is the nanowire diameter, and ε0 and ε_InAs are the vacuum and relative permittivities. The slope of the V_g dependence of E_L is extracted for our device from these equations, using C = 2.29 × 10^−14 F, L_g = 3.3 μm and W = 100 nm for our sample. As for V_fb, we use the gate voltage given by the intercept E_R = 0 of the Rashba measurements. The dash-dotted line in Fig. 5(b) tracing our data has a slope twenty times smaller than that of the calculated E_L. We then consider the ion-gated device of ref. 17, where the authors adapted the same cylinder capacitance model. We calculate E_L using the corresponding values given in the Supplementary Information of ref. 17 (C = 1.44 × 10^−14 F, L_g = 2 μm, W = 25 nm). Their data are likewise traced by a dashed line with a slope twenty times smaller than the E_L calculated for their device. This inconsistency between E_L and E_R was pointed out in ref. 17 and attributed to electric-field decay due to screening by the gate-induced charge in the nanowire channel 44, with the note that the decay should appear similarly in GAA MOS-type nanowires. We consider that the inconsistency found for our device is partly associated with this charge screening, which is mainly due to surface-state pinning 15. We also note that the field gradient with respect to V_g can be reduced by trap states or interface states possibly incorporated in the gate insulator, which would act as a reservoir for gate-induced carriers 45,46, even though our device is expected to have a low interface-state density owing to the insertion of an Al2O3 layer before HfO2 growth 47. If we assume interface states located between the InAs surface and the Al2O3 gate insulator, the interface-state density required to explain the dash-dotted line would be very large, reaching ~3 × 10^14 eV^−1 cm^−2 based on a model similar to that of ref. 45. This unreasonably large interface-state density itself suggests that our device is significantly affected by the charge screening effect. To highlight the efficiency of our device, we compare E_L, E_R and E_R/E_L between the two devices. As expected from the device geometries, E_L at V_g − V_fb = 1 V is calculated to be 4.0 × 10^8 V/m for the ion-gated device (with their assumption of a 1 nm Debye length (ref. 22), which plays the role of the gate-insulator thickness in GAA geometry) and 1.0 × 10^8 V/m for our device. It should be noted that, while we compare devices with different nanowire diameters, E_L is determined solely by the gate insulator material and the gate geometry, and is thus nearly independent of the nanowire width. Although E_L for our device is about one quarter of its electrolyte counterpart, it is significant that a MOSFET attains such a high E_L owing to its thin high-κ gate dielectrics. When E_R is plotted as a function of E_L instead of V_g, as shown in Fig. 5(c), the data from our MOS device and from the ion-gated device fall on almost the same line. This consistency between totally different devices highlights that our GAA device matches an ion-gated device in the gate-control efficiency relevant to the Rashba SOI. Although the E_R to E_L ratio decreases to about 5% for both devices, our MOS device does not require any thermal cycle to change the gate voltage, unlike the ion-gated device, and therefore enables in-situ continuous tuning of α_R. Furthermore, the E_R to E_L ratio in our device is nearly independent of V_g [see Fig. 5(d)], ensuring more stable SOI operation when sweeping the gate voltage. The above results demonstrate that our GAA geometry with high-κ gate dielectrics achieves a Rashba SOI tuning efficiency close to the best ever achieved, while enabling continuous in-situ tuning thanks to the faster response of the MOS design.
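A sketch of the E_L estimate from the cylinder capacitance model; ε_InAs is an assumed literature value and the geometric prefactor follows the surface-field convention adopted above, so the result should be read as order-of-magnitude only.

```python
import numpy as np

eps0, e = 8.854e-12, 1.602e-19
C, Lg, W = 2.29e-14, 3.3e-6, 100e-9   # device values from the text
eps_InAs = 15.15                       # assumed static permittivity of InAs

# Line charge density and the field of a charged cylinder at its surface
# (r = W/2); the paper's exact prefactor is not recoverable from the text.
Vg_minus_Vfb = 1.0
Q_L = C * Vg_minus_Vfb / Lg
E_L = Q_L / (2 * np.pi * eps0 * eps_InAs * (W / 2))

print(f"E_L ~ {E_L:.1e} V/m at Vg - Vfb = 1 V")   # order 1e8 V/m
```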
We believe that these advantages will make our device a prototype nanoscale MOSFET for realizing practical spin-control applications.

Method

InAs nanowires were grown by the vapour-liquid-solid method using gold nanoparticles as catalysts 48. For the gate dielectrics, we combined two high-κ layers, Al2O3 (2 nm) and HfO2 (4 nm), grown by ALD. Growing Al2O3 before HfO2 can improve the interface between InAs and the gate dielectrics, which may reduce the interface-state density of ALD-grown gate dielectrics 47. As shown in Fig. 1(b), more than 90% of the channel length of our device is coated with the gate electrode. When we took the contributions of the ungated regions into account and deduced the corrected mobility as in refs 33 and 34, the corrected mobility differed by less than 5%, which allows us to disregard the contributions of the ungated regions. The sample was measured by standard dc transport or ac lock-in techniques from room temperature down to 1.5 K using a cryostat. To obtain the gate capacitance, we used a standard cylindrical model: for a gate insulator of thickness h coating a nanowire of radius r and length L_g, the gate capacitance C is given by

C = 2π ε0 ε_h L_g / ln((r + h)/r),

where ε_h is the relative permittivity of the gate insulator. Since our device employs a double layer of high-κ gate dielectrics, Al2O3 and HfO2, we use the total gate capacitance C_tot given by the series combination

1/C_tot = 1/C_Al2O3 + 1/C_HfO2.
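A minimal sketch of this series-capacitance estimate for the double dielectric; the relative permittivities are assumed typical ALD values, not quoted in the text.

```python
import numpy as np

eps0 = 8.854e-12

def coax_capacitance(r_in, r_out, eps_r, Lg):
    """Cylindrical (coaxial) capacitance of one dielectric shell."""
    return 2 * np.pi * eps0 * eps_r * Lg / np.log(r_out / r_in)

# Geometry from the text: 100 nm wire, 2 nm Al2O3 then 4 nm HfO2, Lg = 3.3 um.
r = 50e-9
C_al2o3 = coax_capacitance(r, r + 2e-9, 8.0, 3.3e-6)    # eps_r ~ 8 assumed
C_hfo2 = coax_capacitance(r + 2e-9, r + 6e-9, 17.0, 3.3e-6)  # eps_r ~ 17 assumed

C_tot = 1.0 / (1.0 / C_al2o3 + 1.0 / C_hfo2)   # series combination
print(f"C_tot ~ {C_tot:.2e} F")                 # compare with 2.29e-14 F
```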
Examining stability of machine learning methods for predicting dementia at early phases of the disease

Dementia is a neuropsychiatric brain disorder that usually occurs when one or more brain cells stop working partially or at all. Diagnosis of this disorder in the early phases of the disease is a vital task to rescue patients' lives from bad consequences and provide them with better healthcare. Machine learning methods have been proven to be accurate in predicting dementia in the early phases of the disease. The prediction of dementia depends heavily on the type of collected data, which usually are gathered from Normalized Whole Brain Volume (nWBV) and Atlas Scaling Factor (ASF), which are normally measured and corrected from Magnetic Resonance Images (MRIs). Other biological features such as age and gender can also help in the diagnosis of dementia. Although many studies use machine learning for predicting dementia, we could not reach a conclusion on the stability of these methods and on which one is more accurate under different experimental conditions. Therefore, this paper investigates the conclusion stability regarding the performance of machine learning algorithms for dementia prediction. To accomplish this, a large number of experiments were run using 7 machine learning algorithms and two feature reduction algorithms, namely Information Gain (IG) and Principal Component Analysis (PCA). To examine the stability of these algorithms, the feature selection threshold was varied for IG from 20% to 100% and the PCA dimension from 2 to 8. This resulted in 7×9 + 7×7 = 112 experiments. In each experiment, various classification evaluation data were recorded.
The obtained results show that among the seven algorithms, the support vector machine and Naïve Bayes are the most stable under changes of the selection threshold. Also, it was found that using IG would seem more efficient than using PCA for predicting dementia. These promising results open the door to a new era of early prognosis of Alzheimer's Disease and Related Dementias (ADRD).

Introduction

Dementia is a neuropsychiatric brain disorder that usually occurs when one or more brain cells stop working partially or at all. This brain disorder is often accompanied by memory attenuation. From healthcare records, it was found that people aged 65 and above are more vulnerable to the disease (Bansal et al., 2018, 2020; Lakshmi, 2020). According to the World Health Organization (WHO), there are around 50 million patients with dementia worldwide, with an increase of 10 million patients annually (Battineni et al., 2020; Harvey et al., 2003; WHO, 2020). The most common type of dementia is Alzheimer's Disease (AD), where the average age at clinical AD diagnosis is 80 years (Barnes et al., 2015; Dai et al., 2020). The prognosis of dementia is still poor and depends on various factors such as age, gender, educational level, and many more (van de Vorst et al., 2020). Early studies suggested that early diagnosis of dementia may prevent deterioration and help widely with treatments and future predictions of the disease (Alam et al., 2016; Battineni et al., 2019; Chen & Herskovits, 2010). This is where advanced computational techniques come into use for predicting future outcomes of dementia, whether as a prognosis of disease progression or as a mortality rate (Green & Zhang, 2016; van de Vorst et al., 2020). At the beginning and mild stages of dementia, magnetic resonance imaging (MRI), a neuroimaging technique, is becoming an effective tool for detecting AD. However, only a few works have correlated the AD incidence rate with measurements derived from MRI (Battineni et al., 2020). It is worth noting that MRI provides significant input variables for machine learning algorithms to predict and classify probable dementia patients and age-related cognitive decline (ARCD) in general (Babiloni et al., 2017; Garrard et al., 2014; Pellegrini et al., 2018). Estimated total intracranial volume (eTIV) is a significant feature in the dataset studied in this research; it relates brain and intracranial size to ADRD in a volumetric manner common in the analysis of neurodegenerative diseases (Malone et al., 2014). The FAST program of the FSL software suite was used to compute the normalized whole brain volume (nWBV), expressed as a percentage of the accumulated voxels of white and grey matter in the brain taken from the eTIV analysis, where volumetric measures are normalized according to head size (Battineni et al., 2020; Marcus et al., 2010). Machine learning methods are becoming a decision-support tool that can help doctors in diagnosing dementia-related diseases. Dementia prediction is treated as a classification problem where the possible output labels are {demented, non-demented}. In the literature, machine learning algorithms have been used in many studies to predict the presence of dementia in diagnosed patients, but the conclusions drawn from these studies are confusing regarding the performance of machine learning methods. Therefore, this study examines the stability of seven common machine learning algorithms for dementia prediction.
These algorithms are AdaBoost (Ada), Random Forest (RF), k-Nearest Neighbor (kNN), Support Vector Machine (SVM), Decision Tree (DT), Naïve Bayes (NB), and Logistic Regression (LR). To accomplish the stability examination, two feature reduction algorithms were used: Information Gain (IG) and Principal Component Analysis (PCA). Information Gain can help in ranking features according to their IG score; a specified threshold can then be applied to select the top-ranked features (Kent, 1983; Raileanu & Stoffel, 2004). PCA, in contrast, is a feature reduction algorithm that aims at generating a new dimensionality (usually lower than the original) based on the structure of the original dataset (Abdi & Williams, 2010; Pearson, 1901; Ringnér, 2008). For each feature reduction algorithm, the feature threshold was changed to produce a new feature subset each time. Regarding IG, the information gain score is first measured between each input feature and the output feature. The input features are then ranked according to their information gain score from high to low. Specific thresholds (ranging from 20% to 100%) are then applied to select the top features, which are used to slice the dataset for training and testing the machine learning methods. For example, a threshold of 20% means selecting the top 20% of the dataset's features. Every time the threshold was changed, a new set of features was added to the previously selected top features, forming new input data for the machine learning models and enabling us to examine the stability of these models on different data. The same scenario was used with PCA, where the dimension reduction threshold was varied from 2 dimensions to 8. A large number of experiments have been conducted to examine the stability of the seven machine learning methods in predicting dementia. The stability term is defined by how well the performance of a machine learning method remains stable under changes to the setting parameters, which in this study are the changes in the dataset features and dimensions explained earlier. In each experiment, the seven machine learning methods were run over a different feature set obtained either from IG or from PCA. This resulted in 7×9 + 7×7 = 112 experiments, and the classification accuracy values were recorded for each experiment.
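A minimal sketch of the IG threshold sweep described above, using mutual information as the information-gain score; the feature matrix, labels and classifier settings are placeholders, not the study's data or tuned models.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Placeholder feature matrix X and dementia labels y (illustrative shapes).
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 9))
y = rng.integers(0, 2, size=150)

# Rank features by their information-gain (mutual information) score.
ig = mutual_info_classif(X, y, random_state=0)
order = np.argsort(ig)[::-1]

# Sweep selection thresholds from 20% to 100% of the top-ranked features.
for frac in np.arange(0.2, 1.01, 0.1):
    k = max(1, int(round(frac * X.shape[1])))
    cols = order[:k]
    acc = cross_val_score(SVC(), X[:, cols], y, cv=5).mean()
    print(f"top {frac:.0%} ({k} features): CV accuracy = {acc:.3f}")
```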
The dementia dataset used in this paper is publicly available from Boysen (2017); more details can be found in section 3.1. The obtained results show that among the seven algorithms, the support vector machine and Naïve Bayes are the most stable under changes of the selection threshold. The researchers also found that using IG would seem more efficient than using PCA for predicting dementia. The remainder of this paper is organized as follows. Section 1 gives an introduction to dementia and machine learning integration, followed by the literature review and related studies in section 2. Section 3 covers the experimental setup through four subsections: data origin and dictionary, data preprocessing, choice of learning algorithms, and evaluation measures. Section 4 explains the research methodology. Section 5 presents the results and discussion, while section 6 wraps up this study with a conclusion.

Literature review

Most of the studies performed on the prediction of dementia using machine learning models did not describe a clear methodology that could be generalized to ensure stable, high-performance prediction independently of the underlying dataset (Bansal et al., 2020; Battineni et al., 2020; Bharanidharan & Rajaguru, 2020; Dallora et al., 2020; Popuri et al., 2020; Sharma et al., 2020; You et al., 2019). This is because each study uses a different dataset with various features from different sources; in other words, there is a lack of stability assessment of the proposed models for predicting dementia. In Battineni et al. (2020), a hybrid machine learning model combining 4 models with 14 features was proposed to diagnose the early stages of Alzheimer's disease. The researchers studied 373 magnetic resonance imaging tests belonging to 150 elders. It was found that the proposed hybrid model provides enhanced accuracy of dementia prediction of up to 98%. In another study, a machine learning model was presented for the classification of dementia using magnetic resonance imaging (Bansal et al., 2020). The researchers proposed using the bag-of-features method to extract the features of magnetic resonance imaging scans, in association with a support vector machine to classify the scans. This methodology achieved an accuracy of 93% for the detection of dementia. A broad multifactorial decision tree model for the prediction of dementia was proposed by Dallora et al. (2020). The researchers used longitudinal data incorporating 75 variables from 726 subjects. The proposed approach reached a 74.5% Area Under the Curve (AUC) for the 10-year prognosis of dementia, while the Recall value, i.e., the proportion of positive observations predicted correctly, was 72.2%. In the study of Sharma et al. (2020), dementia was diagnosed by implementing an iterative filtering decomposition approach to improve the classification accuracy of the electroencephalogram (EEG) signal, alongside cognitive tests including the finger tapping test (FTT) and the continuous performance test (CPT). The EEG data were collected from 47 subjects. The proposed approach achieved up to 92% accuracy for dementia classification, 91.67% accuracy for early dementia classification, and 91.87% accuracy for healthy classification. Four swarm intelligence algorithms (particle swarm optimization, artificial bee colony, ant colony optimization, and the dragonfly algorithm) were implemented and compared with a non-swarm intelligence algorithm (fuzzy C-means) in the study of Bharanidharan and Rajaguru (2020). Cross-sectional brain MRIs of 65 non-dementia and 52 dementia subjects were collected and used as input for dementia classification. The results show that the dragonfly-particle swarm optimization hybrid classifier yields the highest accuracy of 87.18%. You et al. (2019) examined the relationship between speech and the risk of dementia using a parallel classification system. The system ensembles a K-Nearest-Neighbor model and a Support Vector Machine model to classify older participants as at high or low risk of dementia, using 27 features extracted from audio recordings. The resulting accuracy was 94.7% when the system was trained with paralinguistic features only and 97.2% when trained with both paralinguistic and episodic memory test features.
(2020) proposed an ensemble-learning model that combines structural features to create an aggregate measure of neurodegeneration in the brain. The classifier was trained on 753 subjects, including 423 stable normal controls and 330 subjects with dementia of Alzheimer's type. Independent validation was then performed on 8834 unseen images to predict the development of dementia of Alzheimer's type depending on the time-to-conversion. The classification performance achieved an area under the curve (AUC) of 81% for a time-to-conversion of 6 months and 73% for a time-to-conversion of 7 years. Experimental Setup This section describes the experimental setup of the present paper and is divided into the following subsections. Data origin and dictionary The original data came from the OASIS project, which aims to provide the scientific community with open-source MRI datasets free of charge. OASIS is made available by the Washington University Alzheimer's Disease Research Center, Dr. Randy Buckner at the Howard Hughes Medical Institute (HHMI) at Harvard University, the Neuroinformatics Research Group (NRG) at Washington University School of Medicine, and the Biomedical Informatics Research Network (BIRN). A preformatted version was downloaded from Kaggle (Boysen, 2017); a hypothetical loading step is sketched below. The dataset is based on a study of 150 subjects aged between 60 and 96 years, each with multiple visits and scans, and all participants are right-handed. According to this dataset, subjects can initially be divided into three groups: the first group includes 72 subjects who were classified as non-demented throughout the study; the second group includes 14 subjects who were first diagnosed as non-demented and converted to dementia at a later stage; the third and final group consists of 64 subjects who were characterized as demented from their early visits and remained so for the rest of the study. The last two groups were merged into one group to avoid duplicate results, as explained in the next section. People with normal age-related brain changes such as mild atrophy and leukoaraiosis, as well as common dementia cases of AD, were not excluded from this study. The timeframe for all MRI sessions was within one year. The data dictionary and a description of every feature are detailed in Table 1: EDUC, years of education; SES, socio-economic status (the social standing or class of an individual or group, often measured as a combination of education, income, and occupation; examinations of socio-economic status often reveal inequities in access to resources, plus issues related to privilege, power, and control), assessed here by the Hollingshead Index of Social Position and classified from 1 (highest status) to 5 (lowest status); MMSE, Mini-Mental State Examination score (range from 0 = worst to 30 = best); CDR, Clinical Dementia Rating (0 = no dementia, 0.5 = very mild AD, 1 = mild AD, 2 = moderate AD); eTIV, estimated total intracranial volume in mm3; nWBV, normalized whole-brain volume, expressed as the percentage of all voxels in the atlas-masked image that are labeled as gray or white matter by the automated tissue segmentation process; ASF, atlas scaling factor (unitless), a computed scaling factor that transforms the native-space brain and skull to the atlas target (i.e., the determinant of the transform matrix). Fig. 1 shows the distribution of the target variable, which has two labels {demented, non-demented}, grouped by gender.
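A hypothetical loading step for the Kaggle release of this data is sketched below, including the relabeling of converted subjects described above and the class-by-gender counts visualized in Fig. 1; the file name and column names ("oasis_longitudinal.csv", "Group", "M/F") are assumptions based on the public Kaggle release, not details taken from the paper.

```python
# Hypothetical loading of the Kaggle version of the OASIS longitudinal data (Boysen, 2017).
import pandas as pd

df = pd.read_csv("oasis_longitudinal.csv")           # assumed file name

# Merge the "Converted" subjects into the "Demented" class, as described above,
# so that each subject carries a single, consistent binary label.
df["Group"] = df["Group"].replace({"Converted": "Demented"})

# Class distribution by gender, as visualized in Fig. 1.
print(df.groupby("M/F")["Group"].value_counts())
```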
Figure 1 shows that the two classes are approximately balanced in this binary classification problem. It is worth noting that the "converted" label from the dataset was relabeled as "demented", because keeping it would mean having duplicate examples classified as "non-demented" at one visit and then "converted" at a later visit. This relabeling was handled carefully and was found to have a high impact on machine learning performance in later sections of this work. The feature "number of visits" was kept in the dataset because of its importance in completing the overall picture of the diagnosis procedure. Finally, in the studied dataset males appear more likely to be demented than females, which can serve as a baseline observation. Fig. 2 shows boxplots of the normalized whole-brain volume (nWBV) feature for the two target labels. The median nWBV for demented patients is smaller than for non-demented patients, and the two groups are significantly different (p = 0.003). The correlation between age and normalized whole-brain volume (nWBV) is shown in Fig. 3. There is a negative correlation between nWBV and age, confirming that whole-brain volume decreases as patient age increases. However, no significant difference between the two groups was found in terms of age. Therefore, a clear relationship between age and dementia could not be concluded from this dataset, even though the medical literature provides strong evidence of such a relationship. Data preprocessing Data preprocessing is an essential step in constructing any machine learning model and ensures the quality of the training data. In this study, several preprocessing steps were carried out, namely handling missing values, outlier detection, and data normalization; a sketch of the imputation and normalization steps is given below. Missing data in medical records is not unusual (Zriqat et al., 2017). To handle missing values, mean imputation was applied to every missing field in the dataset; that is, missing values in a feature are replaced with the mean of that feature. Although other imputation methods exist, this approach was preferred for its simplicity, low processing cost, and ability to support machine learning performance. Because the numeric features in the dataset have different scales, they could negatively affect the performance of the constructed machine learning models. To eliminate this effect and give every feature the same degree of influence, min-max normalization was used. Regarding outliers, no extreme values were found in any feature, so no observations were excluded. Choice of learning algorithms As mentioned in the introduction, various machine learning algorithms were chosen in this study to build different models for predicting the dementia group. These methods exhibit different prediction mechanisms and support both linear and nonlinear relationships between the output variable and the input descriptive variables. In particular, seven common machine learning algorithms were used: two ensemble learning algorithms, AdaBoost (Ada) and Random Forest (RF), and five solo algorithms, k-Nearest Neighbor (kNN), Support Vector Machine (SVM), Decision Tree (DT), Naïve Bayes (NB), and Logistic Regression (LR).
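Before describing the individual algorithms, the two preprocessing steps mentioned in the Data preprocessing subsection above can be sketched as follows, continuing from the loading sketch given earlier; the numeric column list and the use of scikit-learn are assumptions, not details taken from the paper.

```python
# Sketch of the preprocessing described above: mean imputation of missing values
# followed by min-max normalization of the numeric features. Column names are
# assumptions based on the public Kaggle release of the data.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler

numeric_cols = ["Visit", "MR Delay", "Age", "EDUC", "SES",
                "MMSE", "CDR", "eTIV", "nWBV", "ASF"]       # assumed column names

imputer = SimpleImputer(strategy="mean")     # replace missing values with the column mean
scaler = MinMaxScaler()                      # rescale every feature to the [0, 1] range

X_raw = df[numeric_cols]
X_imputed = pd.DataFrame(imputer.fit_transform(X_raw), columns=numeric_cols)
X_scaled = pd.DataFrame(scaler.fit_transform(X_imputed), columns=numeric_cols)
```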
Ensemble learning algorithms are expected to achieve better accuracy than solo algorithms, as confirmed in previous studies, because they combine multiple weak learners (Minku et al., 2013). k-Nearest Neighbor (kNN) is a supervised machine learning algorithm based on the idea of prediction by similarity. Specifically, kNN uses a distance measure such as Euclidean distance to retrieve, for each new observation, the nearest observations from the training dataset; the final output is then predicted from the outputs of the selected observations. Many control parameters affect the performance of kNN, such as feature selection, the voting mechanism, feature weighting, the number of selected observations (k), and the type of distance measure. A small value of k can be noisy and sensitive to outliers, whereas a large value of k smooths the final decision (Wu et al., 2002). SVM is an algorithm used for regression and classification. The basic idea of SVM is to build an optimal hyperplane that separates the data with maximum margin, where the margin is defined as the maximal width of the slab parallel to the hyperplane that contains no interior data points. The generation of the optimal hyperplane depends on the choice of kernel function, such as Gaussian, polynomial, or radial basis function kernels. Both the Gaussian and radial basis function kernels can benefit hyperplane generation because they exploit the locality of the training data, which means the data can be separated efficiently (Widodo & Yang, 2007). DT is a tree-like method that uses iterative partitioning to construct the tree. The algorithm uses probabilistic measures such as entropy, information gain, and the Gini metric to decide which features should be used to separate the data at each decision node; at each step, more coherent data are grouped based on the decision node. The algorithm also uses pruning to remove sub-branches that do not contribute significantly to the decision process. In this paper, the C4.5 algorithm was not used, although it relies on gain ratios rather than plain information gain, which can produce more generalized trees that are less prone to overfitting, and it can handle incomplete data well (Myles et al., 2004). NB is an efficient probabilistic classifier based on Bayes' theorem of conditional probability. It is "naïve" because it assumes that all predictor variables are conditionally independent; although this assumption is often violated in real-world data, NB can still perform well compared with other classifiers. The NB algorithm also works well with large datasets because it requires little computation while maintaining reasonable speed and accuracy (Soria et al., 2011). LR is a binary classifier based on a probabilistic statistical regression technique. LR explains the relationship between one dependent binary variable and one or more independent variables by fitting the examples to a logistic curve, and it uses the sigmoid function to estimate the class probability of a test example (Kleinbaum & Klein, 2010). Random Forest is an ensemble learning algorithm constructed from a set of decision trees using the bagging algorithm. RF adds additional randomness to the model while growing the trees: instead of searching for the most important feature when splitting a node, it searches for the best feature among a random subset of features.
This results in wide diversity, which generally yields a better model. Therefore, in a random forest, only a random subset of the features is considered by the algorithm when splitting a node. Trees can be made even more random by additionally using random thresholds for each feature rather than searching for the best possible thresholds, as a normal decision tree does (Qi, 2012). The AdaBoost algorithm is another ensemble algorithm that learns from multiple weak learners. The basic idea is to build a strong learner from the mistakes of several weaker learners: a first model is created from the training data, a second model is then created that tries to reduce the errors of the first, and models are added sequentially, each correcting its predecessor, until the training data is predicted perfectly or the maximum number of models has been added (Rätsch et al., 2001). Evaluation Measures Choosing the right evaluation metrics is not easy because the employed dataset might suffer from an imbalanced class distribution, which biases most predictions towards the dominant class label. For example, the classification accuracy metric alone cannot reflect the true quality of a prediction model because it is easily inflated by the true positives and true negatives of the dominant label. Therefore, multiple evaluation metrics are recommended in that case. The classification evaluation metrics used here are Recall, as shown in Eq. (1), and Precision, as shown in Eq. (2), together with the F1-score, Accuracy, and AUC reported in the results tables. In terms of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), Recall = TP / (TP + FN) (1) and Precision = TP / (TP + FP) (2), while F1 is the harmonic mean of Precision and Recall and Accuracy = (TP + TN) / (TP + TN + FP + FN). Methodology To examine the stability of machine learning algorithms in predicting dementia groups, two feature reduction methods were used that provide flexibility in changing the feature space: Information Gain (IG) and Principal Component Analysis (PCA). Information Gain ranks features by measuring the information gain metric between the output feature and each input feature, after which a specific threshold is applied to select the top predictive features. PCA is a dimensionality reduction method that transforms a large set of features into a smaller one that still contains most of the information in the original set. Reducing the feature space can lower running costs but may come at the expense of accuracy; the trick is to trade a little accuracy for simplicity, making the data easier to explore, analyze, and model. Multiple experiments were conducted to investigate the stability of the dementia prediction models. For each machine learning algorithm, both IG and PCA were applied to the dataset while changing the threshold. For the IG method, thresholds ranging from 20% to 100% were applied to select the top-ranked features; for example, at a threshold of 20%, the top 20% of all features in the dataset were selected. For PCA, the dimension of the feature space was varied from 2 to 8. For each threshold and dimension value, each machine learning algorithm was applied to the reduced dataset and the evaluation measures were recorded. Each model was validated using 10-fold cross-validation, which separates the reduced dataset into 10 subsets; in each run, one subset is used for testing and the remaining subsets are used for training, and the evaluation measures are recorded for each model. A sketch of this experimental loop is given below.
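The following sketch, which builds on the earlier loading and preprocessing sketches, illustrates one possible implementation of this experimental loop with scikit-learn. Mutual information stands in for Information Gain, default hyperparameters are assumed for all seven models, and none of this is the authors' original code.

```python
# Sketch of the experiment loop: 7 models x (9 IG thresholds + 7 PCA dimensions),
# each evaluated with 10-fold cross-validation (63 + 49 = 112 runs in total).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

models = {
    "Ada": AdaBoostClassifier(),
    "RF": RandomForestClassifier(),
    "kNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "DT": DecisionTreeClassifier(),
    "NB": GaussianNB(),
    "LR": LogisticRegression(max_iter=1000),
}
scoring = ["accuracy", "precision", "recall", "f1", "roc_auc"]

X = X_scaled.to_numpy()                          # preprocessed features (earlier sketch)
y = (df["Group"] == "Demented").astype(int)      # binary target, assumed column name

ig_scores = mutual_info_classif(X, y, random_state=0)   # stand-in for Information Gain
ranking = np.argsort(ig_scores)[::-1]                    # feature indices, best first

results = []
for name, model in models.items():
    # IG-based selection: keep the top 20%, 30%, ..., 100% of ranked features.
    for pct in range(20, 101, 10):
        k = max(1, int(round(len(ranking) * pct / 100)))
        cv = cross_validate(model, X[:, ranking[:k]], y, cv=10, scoring=scoring)
        results.append({"model": name, "reduction": "IG", "setting": pct,
                        **{m: cv[f"test_{m}"].mean() for m in scoring}})
    # PCA-based reduction: project onto 2..8 principal components.
    for dim in range(2, 9):
        X_pca = PCA(n_components=dim).fit_transform(X)
        cv = cross_validate(model, X_pca, y, cv=10, scoring=scoring)
        results.append({"model": name, "reduction": "PCA", "setting": dim,
                        **{m: cv[f"test_{m}"].mean() for m in scoring}})
```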
Results and discussion This section presents the results obtained after conducting multiple experiments on the dementia dataset using different machine learning algorithms. The main objectives of these experiments are twofold: 1) to investigate the performance of the employed machine learning methods and 2) to investigate their stability when the feature space changes. In the first part of the empirical evaluation, Information Gain was used to reduce the feature space by changing the selection threshold from 20% to 100% in increments of 10%. Initially, all features are ranked by their Information Gain score from high to low. Optimal features are then selected by applying a threshold, so that any feature falling within the top percentage is retained; in other words, at 20%, the top 20% of ranked features were selected. Each time a threshold was applied, the top predictive features ranked by the Information Gain algorithm were selected; these features are expected to have a strong influence on detecting the dementia group. Table 2 shows all features ranked according to the Information Gain algorithm, as explained in the methodology section. The threshold columns labeled 20%, 30%, and so forth indicate the features selected at each threshold, with selected features marked by a check mark. For example, in the 20% column only the CDR and MMSE features are selected and used to construct the machine learning models; Table 2 also lists low-ranked features such as Age, with an Information Gain score of only 0.002. One might ask why features that are not relevant to the output variable were used at all; the answer is that the stability of the machine learning models needs to be examined both with and without irrelevant features. Tables 3 to 7 show the aggregate evaluation results for all labels using different selection thresholds of top predictive features, with values in bold indicating the most predictive model for each threshold-selected feature set. Table 3 shows the results of all machine learning models in terms of the AUC measure. No single model outperforms all others; remarkably, both DT and NB give somewhat more accurate results than the remaining models. This is also confirmed by Figure 5, which shows the AUC of each machine learning model across the different threshold values: DT and NB show stable accuracy, with only small changes in AUC across thresholds. Although SVM can use different kernel functions to map data from a lower to a higher dimension, it does not work efficiently for this dataset because the data contain both numerical and categorical features. All of the models used are supposed to support nonlinear relationships between variables, but since the SVM implementation cannot treat categorical input variables, only numeric input variables were used for it, which may explain why SVM could not surpass the other prediction models. Surprisingly, RF obtained less accurate results, even though RF can handle both numerical and categorical variables and is built on the idea of ensemble learning. Fig. 5. AUC results for the comparison between different machine learning models using IG. According to Fig. 5, LR, RF, and Ada are the most unstable models, whereas NB and DT are the most stable, with only slight changes in AUC as the selection threshold changes.
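A Fig. 5-style comparison of AUC against the IG selection threshold can be reproduced from the results recorded in the previous sketch; this is only an illustrative plotting step, with matplotlib assumed as the plotting library.

```python
# Plot mean 10-fold AUC per model across the IG selection thresholds (cf. Fig. 5).
import matplotlib.pyplot as plt
import pandas as pd

res = pd.DataFrame(results)
ig_res = res[res["reduction"] == "IG"]

for name in ig_res["model"].unique():
    sub = ig_res[ig_res["model"] == name].sort_values("setting")
    plt.plot(sub["setting"], sub["roc_auc"], marker="o", label=name)

plt.xlabel("IG selection threshold (% of top-ranked features)")
plt.ylabel("Mean 10-fold AUC")
plt.legend()
plt.show()
```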
There is also an improvement in AUC when more features are selected, despite some of the selected features being irrelevant according to Information Gain. Finally, the kNN model shows less accurate AUC results at small threshold values, but its AUC improves as the threshold increases. To investigate why kNN behaves poorly at small thresholds, the number of nearest neighbors k was varied from 1 to 10, but the best kNN result still lagged far behind the other models at small thresholds. Another factor that may increase the accuracy of kNN is the choice of feature weights during the voting process. Two approaches were tried: the first uses equal weights for all retrieved nearest neighbors, and the second uses their distances so that closer neighbors carry more weight; the latter gave better results. Searching for optimal configurations within such a large space of possibilities is time-consuming and ultimately calls for optimization algorithms; the kNN results reported here are those obtained after manually searching for the best configuration of k and weights. Surprisingly, the RF and AdaBoost models, which are ensemble algorithms, did not beat DT across the different feature selection thresholds. Tables 4 to 7 show results for the other evaluation measures: Accuracy, F1, Precision, and Recall. The results in these tables are relatively similar and suggest that kNN, DT, and NB are the most stable models with high values, findings that are also confirmed by Figures 6 to 9. In conclusion, ensemble methods such as RF and AdaBoost cannot beat solo models such as NB, DT, and kNN, and the LR model obtained the worst results among the different models. The solo models improve as the selection threshold increases, which confirms that using as many features as possible increases accuracy even though some of those features are irrelevant. Fig. 6. Accuracy results for the comparison between different machine learning models using IG. Fig. 8. Precision results for the comparison between different machine learning models using IG. The second part of the empirical evaluation further examines the stability of the constructed prediction models in predicting the dementia group by using the PCA algorithm. The purpose of this validation is to determine whether the accuracy of the prediction models improves and remains stable, or whether there are fluctuations due to changes in the dimension of the generated feature space. Specifically, the PCA algorithm is used to generate a new feature space whose dimension varies from 2 to 8. For each dimension, the experiments were re-run with the new dataset and the same machine learning algorithms, and the results are recorded in Tables 8 to 12. The first notable observation is that all evaluation results are poor when the dimension of the feature space is small and improve as the dimension increases. Furthermore, the PCA results are, in general, worse than the results of the first experiments that used Information Gain for feature selection, because PCA generates a new feature space that does not exist in the original dataset. For AUC in Table 8, NB obtained accurate and stable results across the different feature space dimensions.
For the Accuracy metric in Table 9, NB and kNN are the more predictive models, and Tables 10, 11, and 12 show the same trend as Table 9. Figures 10 to 14 show the relationship between each evaluation measure and the feature space dimension for every machine learning model. The findings in these figures suggest that all models behave similarly, performing poorly at small feature space dimensions and improving as the dimension increases. Figure: Recall results for the comparison between different machine learning models using PCA. Conclusion This paper investigates the stability of machine learning methods for predicting dementia in older people. Seven machine learning algorithms, together with two feature reduction methods at different selection thresholds, were applied to the dementia dataset, and in each experiment a new set of features was used, as explained in the methodology section. The results drawn from this comprehensive experimentation are: 1) Using Information Gain to rank and select features appears to be more efficient and stable than using PCA. 2) In general, no machine learning method is clearly stable across all accuracy measures. 3) Support Vector Machine and Naïve Bayes are the most stable of the machine learning methods used. 4) Using high feature dimensions in PCA is better than using a small number of dimensions. 5) Surprisingly, ensemble learning methods such as Ada and Random Forest did not show stable performance across the different accuracy measures. 6) Finally, adding more features by incrementing the Information Gain threshold improved the solo models, even though some of the added features may be irrelevant; the algorithms that showed consistent improvements are Support Vector Machine, Naïve Bayes, Decision Tree, and k-Nearest Neighbors, while the ensemble methods Ada and Random Forest did not improve. From these results, it can be concluded that Support Vector Machine and Naïve Bayes are the most stable methods. However, even these methods did not work well at low feature dimensionality, although their accuracy values were otherwise superior to those of the other methods. We also discourage the use of ensemble learning methods for predicting dementia under any of the settings examined here.
8,096.2
2022-09-10T00:00:00.000
[ "Computer Science" ]
A Survey of Internet of Medical Things (IoMT) Applications, Architectures and Challenges in Smart Healthcare Systems. The Internet of Medical Things (IoMT), or Healthcare IoT, is a branch of IoT technology catering to the healthcare sector. It refers to the interconnection of medical devices, sensors, applications, and systems through the Internet. IoMT enables the collection, transmission, and analysis of patient data in real time, allowing remote monitoring and early detection of health issues. IoMT systems present a promising opportunity for the prevention, prediction, and monitoring of emerging infectious diseases such as COVID-19. This paper provides a survey of IoMT devices, applications, benefits, challenges, and the impact of IoMT on the healthcare industry. Introduction The Internet of Medical Things (IoMT) is a branch of IoT technology dedicated specifically to the healthcare sector. IoMT provides healthcare providers with continuous access to patients' health data, enabling them to monitor and manage chronic conditions, track medication adherence, and detect health problems before they become serious. IoMT has the potential to transform the healthcare industry by reducing healthcare costs and enhancing the patient experience [1]. By embracing IoMT systems, particularly for managing chronic illnesses and through telehealth, the healthcare sector could generate cost savings of up to $300 billion. IoMT constituted 40% of the IoT market by the end of 2020, and the IoMT market is expected to grow to $254.2 billion in 2026 [2]. Unlike other IoT systems, IoMT systems have a direct impact on the lives of patients, and they present significant challenges such as data security, privacy, interoperability, and regulatory compliance. The security of IoMT devices is a major challenge; for example, attacks on implantable devices such as pacemakers can be life threatening. It is crucial to safeguard healthcare data at every stage within IoMT systems, including data acquisition, transmission, retention, and storage. The healthcare industry experiences 340% more security incidents than any other industry and is 200% more likely to encounter data theft [3]. While the aim of IoMT systems is to reduce overall healthcare costs, the costs associated with building the healthcare IT infrastructure are substantial: the hardware, dedicated IoMT IT infrastructure, cloud computing, and consumer-facing applications result in a high initial investment. Even though the eventual return on investment is clear, the high initial infrastructure costs act as a barrier to IoMT adoption. IoMT Benefits There are numerous benefits of IoMT in the healthcare industry, including improved patient outcomes, reduced healthcare costs, and increased access to care. IoMT Architecture A great number of IoMT architectures for healthcare systems, with different layer formats, have been proposed by various researchers; among them, a predominant style is the four-layer model. Y. Sun et al. [8] describe a 3-tier architecture for IoMT-based healthcare systems: tier 1, the sensor level, comprises sensors and medical devices; tier 2 deals with communication between sensor devices and the coordinator, as well as data exchange between the coordinator and the medical server; and tier 3 deals with data analysis. A.H.M. Aman et al. [9] propose a pandemic-specific IoMT architecture consisting of a 4-layer model: an IoMT device layer, an IoMT communication network layer, an IoMT platform layer, and an IoMT application layer.
Several architecture models have been analysed and summarized in this paper. C. Dilibal [10] proposes a 3-layer architectural model for an IoMT edge-computing-based health monitoring platform, consisting of a wearable device layer based on IoMT user interaction, an edge computing layer, and a cloud layer. A.E. Khaled et al. [15] provide an overview of a 3-layer IoMT architecture comprising the physical layer, which deals with sensors and communication methodologies; the edge layer, which performs data processing and real-time decision-making services; and the cloud layer, which performs data processing and data storage. IoMT Devices Sensors play an important role in the IoMT ecosystem for data collection. Different types of sensors capture biological and physiological parameters of the human body, and these vital parameters are analysed by IoMT applications to provide healthcare services. In general, based on usage, sensors are classified as wearable sensor devices, implantable sensor devices, and ambient sensor devices. Wearable or implanted biomedical sensors are used to measure physiological parameters such as body temperature, breathing rate, blood pressure, blood oxygen saturation level, electrocardiography (ECG), electroencephalography (EEG), and electromyography (EMG). A. Ashfaq et al. explain the various sensors used in the healthcare ecosystem [16]. Fall detection and tracking applications are used to detect potential falls in the elderly, track them, and notify a caregiver if something is wrong [18]. Authentication, watermarking, encryption, copyright protection, secure data transfer, and similar applications fall into the category of data hiding [19]. In recent years, smart watches have become very popular among people of all ages, and since the COVID-19 pandemic people have become more focused on their health. Biomedical sensors attached to smart watches measure various health parameters, depending on the sensors present in the device, and these watches are also used as fitness trackers, for example counting steps walked or kilometers run, tracking sleep, setting fitness goals, and more. A recent study by Fortune Business Insights™, titled "Internet of Medical Things Market Size, Share & Industry Analysis, By Product Type (Stationary Medical Devices, Implanted Medical Devices, Wearable External Medical Devices), By Application (Telemedicine, Medication Management, Patient Monitoring, Others), By End User (Healthcare Providers, Patients, Government Authorities, Others) and Regional Forecast, 2019-2026," estimates that the worldwide IoMT market will experience significant growth, expanding from $30.79 billion in 2021 to $187.60 billion in 2028, with a compound annual growth rate (CAGR) of 29.5% between 2021 and 2028. Diabetes is a serious chronic disease that affects a patient's overall health; over a longer period of time it affects the heart, kidneys, nervous system, and eyes. According to the World Health Organization, 422 million people worldwide are affected by diabetes, and each year 1.5 million deaths are directly related to it. Diabetes-related IoMT devices and sensors, such as insulin pumps and blood glucose monitoring devices like glucometers, continuously monitor a patient's glucose level. A.E. Khaled et al. [15] classify medical and non-medical devices and sensors along 6 different dimensions: domains, type, data, device users, deployment scales, and the hardware and software capabilities of the device.
Classification of various IoMT-enabled healthcare devices is shown in the following figure. Consumer Wearable Medical Devices: These are consumer devices, such as smartwatches and fitness trackers, designed to monitor physical activity and provide insights that help users achieve their fitness goals. These devices include sensors that can track metrics such as steps taken, distance traveled, calories burned, heart rate, and sleep patterns. Examples are smartwatches, wristbands, and clip-on devices; they can be useful motivators for maintaining an active lifestyle. Clinical Wearable Medical Devices: Clinical-grade wearables are medical devices designed to be used in clinical settings such as hospitals. These devices have more accurate and reliable sensors that measure and monitor physiological parameters in a way that meets the standards of the healthcare industry. They can be used to monitor vital signs such as heart rate, blood pressure, temperature, and oxygen levels. Implantable Medical Devices: These devices are surgically implanted inside the human body. They are made of biocompatible materials, such as titanium or ceramic, and are designed to integrate with the body's tissues without causing an immune response. In-hospital Monitoring Devices: Smart hospitals use in-house monitoring devices that are interconnected to monitor patients' health conditions continuously. These devices are important for providing high-quality care to patients in critical care units and surgical wards. Remote Patient Monitoring (RPM) Devices: RPM devices allow healthcare providers to remotely monitor a patient's vital signs, such as blood pressure, heart rate, and blood glucose levels, along with physical activity and sleep patterns, continuously and in real time. This enables healthcare providers to more easily analyze changes in a patient's health condition, detect anomalies, and quickly provide the needed remedy. RPM is an attractive option for healthcare providers and patients as it provides timely service, improves quality of care, and reduces healthcare costs. Popular examples of RPM devices are thermometers, blood glucose monitors, ECG monitors, pulse oximeters, weighing scales, and spirometers. Point-of-care Devices and Kiosks: Point-of-care devices are used at the point of care, such as clinics, urgent care centers, and mobile health clinics, to diagnose and monitor patients' health conditions. Examples of such devices are blood glucose meters, blood pressure monitors, pulse oximeters, and portable ultrasound machines. Kiosks are interactive self-service devices used in hospitals to streamline patient workflows such as registration, appointment scheduling, and preliminary health assessments; they are also used in public places to provide convenient access to healthcare services. Thus, medical devices and sensors can be categorized based on their type, application, and range of capabilities, enabling the tracking of patients' physical well-being and remote monitoring of their activities and vital signs in real time. IoMT Applications The following are typical IoMT applications: Smart Hospitals: Smart hospitals are healthcare facilities that incorporate IoMT devices and technologies to improve patient care and optimize healthcare operations. These can include automated medication dispensers, remote monitoring devices, and other systems that help healthcare providers deliver more efficient and effective care.
Remote Patient Monitoring: Remote patient monitoring (RPM) involves the use of medical devices to monitor patients outside of traditional healthcare settings, such as at home. RPM devices can include blood pressure monitors, blood glucose meters, and other vital sign monitors that can send data to healthcare providers for analysis. Telemedicine: Telemedicine refers to the use of technology to provide healthcare services remotely. With telemedicine, patients can connect with healthcare providers using video conferencing and other digital communication tools, allowing them to receive medical advice and treatment without leaving their homes. Medical Imaging: IoMT can also be used to improve medical imaging, such as X-rays and MRIs. Medical imaging devices can be connected to the internet to transmit images to healthcare providers for analysis and diagnosis. Health and Wellness Apps: Health and wellness apps are mobile applications that can be used to track and manage various aspects of a user's health, including diet, exercise, and sleep. These apps can connect to wearable devices and other IoMT technologies to provide users with real-time data on their health and wellness goals. Overall, IoMT applications have the potential to improve healthcare outcomes, increase patient engagement, and optimize healthcare operations. Security in IoMT IoMT devices face a range of security challenges, including: Patient data privacy: IoMT devices often collect sensitive patient data, such as health records and biometric data. This data needs to be protected from unauthorized access or disclosure, as it can be used for identity theft, financial fraud, or other malicious purposes. Device authentication: IoMT devices need to be able to authenticate with the network and other devices to ensure that they are legitimate and not being controlled by malicious actors. Device integrity: IoMT devices need to be protected against tampering or unauthorized access to ensure that they are functioning as intended and not being used to collect or transmit false data. Network security: IoMT devices are often connected to a network, which can be vulnerable to attacks such as denial of service (DoS) attacks or ransomware. Regulatory compliance: Many IoMT devices are subject to regulations and standards, such as HIPAA (Health Insurance Portability and Accountability Act), which require organizations to implement appropriate security measures to protect patient data. To address these challenges, organizations need to implement a range of security measures. These include: Encryption: Data should be encrypted both in transit and at rest to protect it from unauthorized access. Access controls: Access to IoMT devices and data should be restricted to authorized personnel only. Authentication and authorization: Devices should be authenticated and authorized before being allowed to connect to the network or other devices. Regular updates: IoMT devices should be regularly updated with security patches and software updates to ensure that they are protected against known vulnerabilities. Monitoring: Organizations should monitor their networks and devices for suspicious activity and respond to security incidents in a timely manner. Challenges in IoMT IoMT enabled healthcare systems have transformed traditional health care systems. The advantages of IoMT systems are discussed in the introduction part. The IoMT eco system consist of data collection, data pre-processing, data forwarding, and data analysis. 
There are many challenges to address when developing and deploying IoMT systems; for example, [17] lists challenges related to data privacy and sensor energy consumption in healthcare IoT. Conclusion IoMT is a rapidly growing field that has the potential to revolutionize healthcare. IoMT devices and technologies allow healthcare providers to remotely monitor patients, analyse patient data, and provide personalized care. This has become particularly important during the COVID-19 pandemic, as healthcare providers seek to minimize in-person visits and reduce the risk of infection. IoMT has several benefits, including improved patient outcomes, reduced healthcare costs, and increased access to care. It has many use cases, including remote patient monitoring, telemedicine, chronic disease management, and predictive analytics. However, there are also several challenges associated with IoMT, including data security and privacy concerns, interoperability issues, and the need for standardization. Despite these challenges, the IoMT market is expected to continue to grow rapidly in the coming years. Major players in the market include tech giants such as IBM, Apple, Google, and Microsoft, as well as medical device manufacturers such as Medtronic, Philips, and Abbott Laboratories. The increasing adoption of wearable devices and other IoMT technologies is expected to drive this growth, along with advances in data analytics and machine learning. Overall, IoMT has the potential to transform healthcare by improving patient outcomes, reducing costs, and increasing access to care.
3,178
2023-01-01T00:00:00.000
[ "Medicine", "Engineering", "Computer Science" ]
A New Pathway for Glucose-dependent Insulinotropic Polypeptide (GIP) Receptor Signaling The hormone glucose-dependent insulinotropic polypeptide (GIP) is an important regulator of insulin secretion. GIP has been shown to increase adenylyl cyclase activity, elevate intracellular Ca2+ levels, and stimulate a mitogen-activated protein kinase pathway in the pancreatic β-cell. In the current study we demonstrate a role for arachidonic acid in GIP-mediated signal transduction. Static incubations revealed that both GIP (100 nm) and ATP (5 μm) significantly increased [3H]arachidonic acid ([3H]AA) efflux from transfected Chinese hamster ovary K1 cells expressing the GIP receptor (basal, 128 ± 11 cpm/well; GIP, 212 ± 32 cpm/well; ATP, 263 ± 35 cpm/well;n = 4; p < 0.05). In addition, GIP receptors were shown for the first time to be capable of functionally coupling to AA production through Gβγ dimers in Chinese hamster ovary K1 cells. In a β-cell model (βTC-3), GIP was found to elicit [3H]AA release, independent of glucose, in a concentration-dependent manner (EC50 value of 1.4 ± 0.62 nm; n = 3). Although GIP did not potentiate insulin release under extracellular Ca2+-free conditions, it was still capable of elevating intracellular cAMP and stimulating [3H]AA release. Our data suggest that cAMP is the proximal signaling intermediate responsible for GIP-stimulated AA release. Finally, stimulation of GIP-mediated AA production was shown to be mediated via a Ca2+-independent phospholipase A2. Arachidonic acid is therefore a new component of GIP-mediated signal transduction in the β-cell. Glucose-dependent insulinotropic polypeptide (GIP, or gastric inhibitory polypeptide) 1 is a 42-amino acid polypeptide hormone synthesized by mucosal K cells of the duodenum and jejunum and released into the circulation in response to nutrient ingestion (1)(2)(3)(4). GIP and glucagon-like peptide-1 (GLP-1) are thought to be the major hormones (incretins) that constitute the endocrine component of the enteroinsular axis in humans and are responsible for at least 50% of postprandial insulin secretion (5). In non-insulin-dependent diabetes mellitus (type 2 diabetes mellitus), the incretin effect following oral glucose administration is reduced or absent (6,7), and the ability of intravenous GIP, but not GLP-1, to stimulate insulin secretion is severely blunted (7,8). This implies that a defective GIP signal transduction system and/or a reduced number of functional GIP receptors may contribute to the pathophysiology of type 2 diabetes. A greater understanding of the signal transduction systems activated by GIP should assist in determining whether reduced responsiveness involves changes at this level. The receptor for GIP (9 -11) is a member of the class II G protein-coupled receptor superfamily, which includes receptors for glucagon, GLP-1, secretin, and vasoactive intestinal polypeptide (12). Stimulation of the GIP receptor has been shown to stimulate adenylyl cyclase and elevate intracellular cAMP levels in pancreatic islets (13), islet tumor cell lines (14), and various cell lines transfected with the GIP receptor (10,15,16). In addition, GIP has been shown to increase uptake of Ca 2ϩ into isolated islets (17) and increase intracellular Ca 2ϩ levels in HIT-T15 (18), RINm5F (9), and COS cells (10). We have shown that the GIP receptor probably couples to various Ca 2ϩ channels (10), but there is no evidence for GIP-stimulated IP 3 production (18). 
There is, however, evidence that GIP stimulates insulin secretion (19) and activation of mitogen-activated protein kinase (20) via a wortmannin-sensitive pathway, implying a role for phosphatidylinositol 3-kinase. It is therefore clear that GIP action on the pancreatic ␤-cell involves several interacting signal transduction pathways. Heterotrimeric G proteins are activated by G protein-coupled receptors and undergo GDP/GTP exchange at the level of the G␣ subunit, leading to dissociation of the trimer into G␣ and G␤␥ subunits (33). The G␤␥ subunits have recently been shown to act on a number of effector targets including ion channels, enzymes, and kinases (33). Recent data suggested that inactivation of free G␤␥ completely abolished KCl, Ca 2ϩ , and GTP␥Sevoked insulin release from HIT-T15 cells (34), establishing a role for these subunits in insulin secretion. A role for G␤␥ has also been demonstrated in the coupling of PLA 2 and arachidonic acid production in rod outer segments (35) and to the activation of cardiac potassium channels (36). These observations provided the rationale for determining whether arachidonic acid and PLA 2 are involved in the glucose potentiating effects of GIP in the ␤-cell, with a focus on G␤␥ subunits as a coupling mechanism. We show for the first time that GIP stimulates AA release from CHO-K1 cells and clonal ␤-cells (␤TC-3). Coupling of the GIP receptor to AA production in CHO-K1 cells was via G␤␥ subunits, whereas cAMP was shown to be the mediator in ␤TC-3 cells. The PLA 2 isoform activated by GIP in ␤TC-3 cells was Ca 2ϩ -independent and hypothesized to be the same as that activated by glucose when stimulating insulin secretion. EXPERIMENTAL PROCEDURES Cell Transfection and Tissue Culture-CHO-K1 cells cultured in Dulbecco's modified Eagle's medium/Ham's F-12 medium (Life Technologies, Inc.) and supplemented with 10% newborn calf serum (Cansera, Rexdale, Canada) were stably transfected with the wild type rat GIP receptor as described previously (10,15). The CHO-K1 cell line obtained by pooling clones was termed rGIP-15 and has previously been shown to express receptors at levels similar to high level expressing clones (15). In experiments targeted at investigating a role for G␤␥ signaling, rGIP-15 clones were transiently transfected with plasmid DNA encoding the C terminus of ␤-adrenergic receptor kinase (␤ARKct) (37) or the empty vector (pRK5). Briefly, 40 -60% confluent monolayers in 10-cm culture plates (Becton Dickenson, Lincoln Park, NJ) were transfected using Superfect TM (Qiagen, Valencia, CA) transfection reagent according to the manufacturers' protocol. Cells were harvested 18 -24 h posttransfection and passaged into 24-well plates for subsequent arachidonic acid release experiments. The empty plasmid pRK5 and the plasmid pRK-␤ARKct (495-689) were kindly provided by Dr. R. J. Lefkowitz (37). Passages 20 -30 of rGIP-15 cells were used in these experiments. ␤TC-3 cells were obtained from a frozen stock that was originally a gift from Dr. S. Efrat (Diabetes Center, Albert Einstein College of Medicine, New York) (38). Cells were cultured in low glucose (5.5 mM) Dulbecco's modified Eagle's medium (Life Technologies, Inc.) supplemented with 12.5% horse serum (Cansera) and 2.5% fetal bovine serum (Cansera). Passages 20 -30 were used in these experiments. 
Iodination of GIP and Binding Analysis-Synthetic porcine GIP (5 g) was iodinated by the chloramine-T method, and the 125 I-GIP was further purified by reverse phase high performance liquid chromatography to a specific activity of 250 -300 Ci/g, (10). The aliquots were subsequently lyophilized and stored at Ϫ20°C until use. Competitive binding analyses were performed as described previously with minor modifications (10). Briefly, CHO-K1 cells plated 2 days prior in 24-well plates were washed twice with 4°C Krebs-Ringer (115 mM NaCl, 4.7 mM KCl, 1.2 mM KH 2 PO 4 , 10 mM NaHCO 3 , 1.28 mM CaCl 2 , 1.2 mM MgSO 4 ) containing 10 mM HEPES and 0.1% bovine serum albumin, pH 7.4 (KRBH), and incubated in triplicate for 14 -18 h at 4°C with 125 I-GIP (50 000 cpm/well) in the presence or the absence of unlabeled GIP (synthetic human GIP 1-42 ; Bachem, Torrence, CA). After two consecutive washes in ice-cold buffer, cells were solubilized with 0.1 M NaOH and transferred to test tubes for counting. Nonspecific binding was defined as that measured in the presence of an excess of human GIP (1 M), and specific binding was expressed as a percentage of maximum binding (%B/B o ). cAMP and Insulin Determination-The cells were passaged into 24-well culture plates at 5 ϫ 10 4 cells/well for CHO-K1 clones and 5 ϫ 10 5 cells/well for ␤TC-3 cells. For cAMP studies, cells were washed twice with KRBH and then stimulated for 30 min with GIP in the presence of the phosphodiesterase inhibitor 3-isobutyl-1-methylxanthine at 0.5 mM concentration (RBI/Sigma). Following stimulation, reactions were stopped, and cells were lysed in 70% ice-cold ethanol, cellular debris was removed by centrifugation, and cAMP was subsequently quantified by radioimmunoassay (Biomedical Technologies Inc., Stoughton, MA). All insulin release experiments were performed over 60 min in KRBH in the absence of 3-isobutyl-1-methylxanthine, and insulin secreted into the medium was quantified by radioimmunoassay as previously reported (39). Arachidonic Acid Release-Arachidonic acid release was determined by methods adapted from Shuttleworth and Thompson (40). Cells were harvested and passaged into 24-well culture plates at 4 ϫ 10 4 cells/well for CHO-K1 clones and 2 ϫ of experimental agents, the wells were washed twice with 0.5 ml of KRBH and allowed to equilibrate for 1 h. Ca 2ϩ -free experiments were conducted in KRBH containing equimolar Mg 2ϩ and supplemented with 10 mM EGTA. The agonists were dissolved in Krebs-Ringer buffer, added in triplicate (0.5 ml total volume/well), and incubated for the length of time shown in the figure legends. As a positive control, ATP was added at a final concentration of 5 M. When used, the inhibitor haloenol lactone suicide substrate (HELSS; Calbiochem, La Jolla, CA) was added for 30 min prior to washing and addition of agonists. After incubation, 0.4-ml aliquots were placed into scintillation vials followed by the addition of 10 ml of Econo 2 scintillation fluid (Fisher), and the radioactivity was determined by liquid scintillation spectrometry. AA released from cells was generally between 2-6% of total [ 3 H]AA incorporated into cells. Data Analysis-The data are expressed as the means Ϯ S.E. with the number of individual experiments presented in the figure legend. 
All of the data were analyzed using the nonlinear regression analysis program PRISM (Graphpad, San Diego, CA), and the significance was tested using the Student's t test and analysis of variance (ANOVA) with the Tukey post-test (p Ͻ 0.05) as indicated in figure legends. RESULTS Initial studies were targeted at investigating GIP receptor signaling in an expression system, the rGIP-15 clone of CHO-K1 cells. Static incubations (45 min) revealed a concentration dependence to GIP-stimulated arachidonic acid production (Fig. 1a). In agreement with previous, non-incretin, studies on CHO-K1 cells (41,42), ATP (5 M) increased AA release from rGIP-15 cells by greater than 200% (p Ͻ 0.01, n ϭ 4). Parallel studies were performed in ␤TC-3 cells, a model of the pancreatic ␤-cell. These cells respond to arachidonic acid in a glucose-dependent manner (Fig. 2). In the presence of glucose, AA potentiated insulin secretion at concentrations as low as 10 M (Fig. 2a), whereas 20-fold greater concentrations were required before a response was observed under glucose-free conditions (Fig. 2c). The potentiation of insulin secretion elicited by 100 M AA is comparable with that elicited by 100 nM GIP under 11 mM glucose conditions (Fig. 2a versus Fig. 8). GIP was found to stimulate AA release in a concentration-dependent manner (Fig. 1, b and c). Interestingly, the EC 50 value for GIP-stimulated AA release (1.4 nM Ϯ 0.62 nM; n ϭ 3) was similar to that for insulin release in these cells (data not shown), in contrast to the 5-fold higher EC 50 value for cAMP production (39). It is well established that the insulinotropic action of GIP is dependent on elevated glucose levels and that glucose induces activation of PLA 2 in pancreatic ␤-cells. However, we have recently shown that GIP receptor coupling to adenylyl cyclase in ␤TC-3 cells is independent of extracellular glucose concentrations (39). In the current study, increases in [ 3 H]AA efflux stimulated by GIP were also found to be independent of extra- -3 cells (b). The medium was removed at indicated time points, and AA efflux was measured as described under "Experimental Procedures." Note that GIP-stimulated AA release was evident by 30 min; however, no effect of glucose was observed by this point. For a, n ϭ 4, and for b, n ϭ 3-4. *, p Ͻ 0.05. cellular glucose (Fig. 1c), indicating that GIP-induced and glucose-induced increases in AA release were mediated via separate pathways. Analysis of the time dependence of AA release in rGIP-15 cells demonstrated maximal release at 10 min (Fig. 3a), which correlates well with that for GIP-stimulated cAMP production (maximal plateau reached at 10 -15 min in rGIP-15 and ␤TC-3 cells; n ϭ 3). In contrast, GIP-induced AA release was not detected before 30 min of incubation in the ␤TC-3 cells (Fig. 3b), and glucose-induced release was not observed until after 60 min of incubation (Fig. 3b). It was considered possible that these differences in onset of response may reflect alternative GIP receptor-effector coupling systems in the two cell types. Because G␤␥ has been previously implicated in the activation of phospholipase A 2 (35), an inhibitor peptide of G␤␥, ␤ARKct (␤-adrenergic receptor kinase C-terminal tail), was transiently expressed in rGIP-15 cells. To confirm that cells had been transfected, GIP receptor internalization was monitored because G␤␥ subunits have been shown to be required for G protein receptor kinase-mediated G protein-coupled receptor internalization (43). 
Expression of ␤ARKct was associated with an inhibition of receptor internalization in these cells (versus pRK5 vector control; n ϭ 3). Initial experiments were conducted to examine GIP receptor binding and cAMP production in this expression model. ␤ARKct expression was not found to have any significant effect on either receptor affinity for GIP or on activation of adenylyl cyclase (IC 50 values for binding: 3.95 nM Ϯ 0.91 (n ϭ 3) and 4.07 nM Ϯ 0.97 (n ϭ 3); EC 50 values for cAMP production: 0.73 nM Ϯ 0.12 (n ϭ 3) and 0.49 nM Ϯ 0.09 (n ϭ 3) for vector and ␤ARKct, respectively). GIP receptors were shown, for the first time, to be capable of functionally coupling to AA production through G␤␥ dimers, because the expression of ␤ARKct significantly suppressed the GIP-mediated response by almost 70% (Fig. 4, p Ͻ 0.05). Purinergic receptors were also found to be coupled to AA production via G␤␥ dimers, because ␤ARKct expression reduced ATP-stimulated AA production by greater than 40%. To characterize further the pathway by which AA is produced in the ␤TC3-cell by GIP, the effect of ␤ARKct expression was investigated. To ensure that transfection had occurred, cells were typically cotransfected with green fluorescent protein as a marker of transfection efficiency. Inhibition of G␤␥ action had no effect on glucose-or GIP-stimulated AA release (␤ARKct expression versus pRK5 control; n ϭ 3) or insulin secretion in ␤TC-3 cells (Fig. 5). In addition, pertussis toxin (100 and 500 ng/ml) had no effect on AA release, indicating that toxin sensitive G␣-proteins (G␣ i , G␣ o , and G␣ q ) do not play a role in glucose-or GIP-stimulated AA release in ␤TC-3 cells (data not shown). This agrees with our previous studies showing that pertussis toxin at 500 ng/ml had no effect on cAMP levels in ␤TC-3 cells (39). However, both the diterpene forskolin and the incretin GLP-1, agents that specifically elevate intracellular cAMP levels, were able to stimulate AA release (Fig. 6), indicating that GIP may be acting on AA release via stimula-tion of adenylyl cyclase in the ␤TC3-cell. Interestingly, the specific protein kinase A inhibitor, H89 (5 M), showed no effect on GIP-or forskolin-stimulated AA release (Fig. 7). The possibility that AA-stimulated adenylyl cyclase activity can also be refuted because exogenous AA had either no effect or slightly inhibited basal cAMP production in ␤TC-3 cells (Fig. 2, b and d). The reduction of extracellular Ca 2ϩ was found to have no effect on GIP-stimulated AA release, implying that a Ca 2ϩindependent mechanism was involved in the production of AA (Fig. 8a). As predicted, neither glucose nor GIP was able to stimulate insulin secretion from ␤TC3-cells under stringent Ca 2ϩ -free conditions (Fig. 8b). However, GIP was clearly still capable of elevating cAMP levels despite a reduction in basal cAMP production (Fig. 8c). The cAMP levels resulting from GIP stimulation under Ca 2ϩ -free conditions were, however, significantly suppressed compared with control conditions (p Ͻ 0.05). The ability of GIP to release AA under Ca 2ϩ -free conditions suggested that a Ca 2ϩ -independent PLA 2 was involved. An inhibitor specific for iPLA 2 , HELSS, has previously been shown to inhibit glucose-stimulated AA production and insulin secretion in several ␤-cell models (30,44,45). In the present study, HELSS was found to inhibit GIP-stimulated AA production as well as glucose-and GIP-stimulated insulin secretion (Fig. 
9), supporting the aforementioned hypothesis that the enzyme coupled to GIP receptor signaling is a Ca 2ϩ -independent PLA 2 . DISCUSSION In human type 2 diabetes there is a decreased insulin response to GIP that is of unknown etiology. One possible underlying defect is in the normal signal transduction pathways by which GIP stimulates insulin secretion in ␤-cells. It has been established that GIP stimulates adenylyl cyclase (13), increases intracellular Ca 2ϩ (18), and activates mitogen-acti- FIG. 4. Effect of G protein ␤␥ inhibition on GIP-mediated arachidonic acid release in rGIP-15 cells. rGIP-15 cells expressing the GIP receptor were transiently transfected with 10 g pRK5 vector or ␤ARKct cDNA construct, and the medium was removed at 45 min. AA efflux was measured as described under "Experimental Procedures." The inset illustrates basal levels of AA release. *, p Ͻ 0.05. FIG. 5. Effect of G protein ␤␥ inhibition on glucose and GIPpotentiated insulin secretion in ␤TC-3 cells. ␤TC-3 cells express- ing the GIP receptor were transiently transfected with 10 g of pRK5 vector or ␤ARKct cDNA construct as described under "Experimental Procedures." Insulin secretion was assessed by radioimmunoassay and corrected for cell number by representation as the percentage of basal (n ϭ 3). *, p Ͻ 0.05 for basal versus all; #, p Ͻ 0.05 for 11 mM versus GIP; %, p Ͻ 0.05 for 10 nM GIP versus 100 nM GIP as tested by ANOVA. vated protein kinase (20), and the current study was undertaken to identify alternate mechanisms of regulating ␤-cell function. We have shown that GIP receptors in ␤TC-3 cells and transfected CHO-K1 cells are capable of coupling to transduction systems that release arachidonic acid from membrane lipids via activation of a Ca 2ϩ -independent phospholipase A 2 . Additionally, this signaling pathway was shown to involve G protein ␤␥ coupling in CHO-K1 cells, whereas a cyclic AMPmediated pathway is probably involved in ␤TC-3 cells. Initial studies of GIP-stimulated AA release revealed a marked difference in the time dependence of AA release between CHO-K1 and ␤TC-3 cells. The much more rapid release evident in rGIP-15 cells is in agreement with previously observed AA production rates observed with rhodopsin and muscarinic receptors expressed in CHO-K1 cells (41,42) and other cell types (40,46,47). However, coupling of GIP to AA release was much slower in ␤TC-3 cells, suggesting a unique GIP receptor-AA coupling mechanism. There was also a difference between GIP-and glucose-induced AA release in ␤TC-3 cells, with GIP initiating release by 30 min, whereas glucose had no effect by this time (Fig. 3). This suggests that separate mechanisms couple glucose and the GIP receptor to AA production. Extensive studies have established that the glucose-induced AA production coupled to ␤-cell insulin secretion (24,48) involved activation of an ATP sensitive, Ca 2ϩ -independent PLA 2 (27,48). This enzyme has been identified in a number of insulinoma cell lines, including ␤TC-3 cells (44), and further studies were therefore performed to determine whether GIP-induced AA release also resulted from its activation. The C-terminal fragment of the ␤-adrenergic receptor kinase protein (␤ARKct or G protein receptor kinase 2) was utilized to study the role of G␤␥ signaling. Jelsema and Axelrod (35) first suggested that activation PLA 2 can be performed by G␤␥ subunits. 
In the present study it was found that the GIP receptor can couple to PLA2 via G protein βγ subunits in CHO-K1 cells, whereas neither glucose- nor GIP-stimulated arachidonic acid release nor insulin secretion was dependent on Gβγ subunit signaling in βTC-3 cells (Fig. 4). This is in contrast to their involvement in K+- and bombesin-stimulated insulin secretion in HIT-T15 cells (34). Further studies are needed to determine whether Gβγ subunits are involved in GIP receptor-effector coupling in other targets such as the stomach, fat, or the adrenal gland (49-51). Glucose-, GIP-, and ATP-stimulated AA release were all shown to be independent of extracellular Ca2+, indicating that they are likely acting on a similar iPLA2 isoform. Recently cholecystokinin, another insulinotropic peptide, was also shown to activate islet PLA2 independently of extracellular Ca2+ (52). Despite a complete ablation of insulin release under Ca2+-free (extracellular) conditions, intracellular cAMP levels were still stimulated by GIP (Fig. 8c), implying that this may be the proximal messenger to AA release. A reduction in basal cAMP production is likely attributable to a decrease in basal Ca2+-activated adenylyl cyclase activity, therefore accounting for the reduction in GIP-stimulated cAMP. From these observations it therefore seems likely that both GIP-stimulated cAMP and AA production are proximal signaling events independent of glucose and extracellular Ca2+ but insufficient to elicit insulin exocytosis. However, these signaling intermediates may play more direct roles in the actions of GIP under euglycemic conditions, such as those in the adipocyte (50). In islets, glucose stimulation can elevate endogenously generated AA from the micromolar range to cellular concentrations of 50-200 μM, as measured by mass spectrometry (48). In agreement with work published by Metz (53), exogenous AA over this range was able to stimulate insulin release from βTC-3 cells in the presence of glucose. However, in its absence, responsiveness to AA was reduced at least 10-fold. Interestingly, application of exogenous AA has been shown to elevate intracellular Ca2+ concentrations in pancreatic islets (53), and there is considerable evidence suggesting a role for arachidonic acid itself or its metabolites in the regulation of capacitative and noncapacitative Ca2+ influx in a number of cellular systems (54, 55). Thus, it is tempting to speculate that fluxes in free endogenous AA, brought about by GIP, may play an integral role in regulating intracellular Ca2+ concentrations and thereby influence insulin secretion. Our studies indicate that the mediation of GIP-stimulated PLA2 activity probably occurs via cAMP actions in the β-cell. Because the specific iPLA2 inhibitor HELSS ablated insulin responses to glucose and thus the potentiating effect of GIP
(Fig. 9), the converging actions on insulin secretion of these two secretagogues may occur distal to the formation of cAMP (by GIP) and arachidonic acid (by glucose and/or GIP). Arachidonic acid and/or its metabolites may therefore be mainly involved in the fine tuning of the insulin response. The actions of cAMP could be direct, via activation of small G proteins (e.g. Rap), or through a guanine nucleotide exchange factor; however, the involvement of protein kinase A is unlikely (Fig. 7). These results are in contrast to a recent study demonstrating an inhibitory effect of cAMP and incretins (GIP and GLP-1) on CCK-8-stimulated arachidonic acid production and insulin release in the rodent islet (56). However, implicit in studies conducted with isolated islets is the existence of paracrine and endocrine interactions between α-, δ-, and PP cells that contribute to a functional response. This may account for the different responses observed in the clonal cell line used in the present study. Finally, in the current study, production of AA was assessed by measuring the total radioactivity secreted from βTC-3 cells. Although it has been shown in studies on tumor β-cell lines that a surprisingly small percentage of released radioactivity consists of metabolites (48), a recent study suggested a role for 12-lipoxygenase metabolites in β-cell function (57). Further studies need to be conducted to discriminate between AA and its metabolites produced by GIP stimulation of the β-cell.

FIG. 8. Effect of Ca2+-free extracellular media on GIP-mediated arachidonic acid release (a), insulin secretion (b), and cAMP production (c). Ca2+-free Krebs-Ringer buffer contained equimolar MgCl2 to replace CaCl2 and was supplemented with 10 mM EGTA. Arachidonic acid efflux (n = 3-4), insulin (n = 3), and cAMP levels (n = 5) were determined as described under "Experimental Procedures." *, p < 0.05; **, p < 0.001. Gluc, glucose.

FIG. 9. Effect of Ca2+-independent PLA2 inhibition on GIP-mediated arachidonic acid release (a) and insulin secretion (b) in βTC-3 cells. The cells were preincubated with the inhibitor, HELSS, for 30 min prior to stimulation and washed with KRBH before the addition of glucose and GIP. The inset in b represents basal insulin secretion levels under control and test conditions. *, p < 0.05.
5,650.4
2001-06-29T00:00:00.000
[ "Biology", "Medicine" ]
Analysis of Organic Photovoltaic Device at Different Series Resistances GPVDM is simulation software that is used to analyze the optical and electrical properties of organic solar cells based on P3HT:PCBM organic materials. The bulk heterojunction organic solar cell has been electrically simulated by the GPVDM software at different series resistances. The organic bulk heterojunction solar cell uses a mixture of P3HT and PCBM as the active layer material, ITO as a transparent electrode, PEDOT:PSS as an electron blocking layer, and Al as the back electrode. In this analysis, the electrical simulation has been done at different series resistances. It is observed that the current density-voltage (J-V) characteristics vary with the series resistance. The best J-V characteristic, as well as the maximum short circuit current, is obtained at 1 Ω series resistance.

Introduction

Organic photovoltaic devices have attracted much attention in the last decade for applications as flexible, renewable, non-conventional energy sources [1-3]. Organic solar cells are considered promising renewable energy sources and an alternative to inorganic photovoltaic cells [4-5]. The advantages of organic solar cells are mechanical flexibility, light weight, ease of fabrication at room temperature, and low fabrication cost. The performance of these devices is limited by several factors: the charge mobility of the organic material is very low, and due to the poor conductivity the efficiency is also very low. The highest power conversion efficiency of organic photovoltaic devices is based on the bulk heterojunction concept. Nowadays many solar cell technologies exist, among which the organic solar cell is one of the newer classes. Since the discovery of photoinduced charge transfer between organic donors and acceptors, a great effort has been devoted to exploring these materials for photovoltaic applications. Solar cells based on a bulk heterojunction (BHJ) of the conjugated polymers P3HT (poly(3-hexylthiophene)) and PCBM (phenyl-C70-butyric acid methyl ester) have been reported among the highest performing materials and have received the largest share of researchers' investigations and studies [6-9] for improving their power conversion efficiency [10-13]. The photovoltaic performance of the combination of P3HT and PCBM in organic blends has recently increased, approaching 6% energy-conversion efficiency [14], and 6.1% efficiency was achieved using PCDTBT and PC70BM blends. The main advantage of the BHJ solar cell is that most of the generated excitons reach a nearby donor-acceptor interface, where they dissociate into free charge carriers (electrons and holes). This efficient exciton harvesting leads to higher power conversion efficiencies for BHJ solar cells. In a BHJ photovoltaic device the series resistance Rs is a most important factor affecting the J-V characteristics, decreasing the solar efficiency and the fill factor [15]. For large series resistance values, the short circuit current might decrease. A transparent electrode (ITO) and carrier-transporting interlayers of different kinds can increase Rs significantly. Interfaces between the active layer material and the interlayers (metallic contacts) may well add more series resistance because of partial energy level alignment, which affects the optimal interface charge transfer.
The overall effect of the electronic transport mechanism is recognized to be significant when thick active-layer films are used to increase light harvesting [16]. Thinner films exhibit almost complete conversion of absorbed photons into collected carriers [17], indicating that transport mechanisms do not limit the realizable photocurrent [18-20]; devices with thicker active material layers, however, suffer from an incomplete collection of photo-generated charges. It is also known that in real devices the analysis of the J-V characteristics does not help in discriminating which mechanism effectively dominates the series resistance Rs. Therefore, the overall performance might be improved by elucidating the operating mechanisms involved in the series resistance. In this work, the electrical simulation of a BHJ photovoltaic cell using GPVDM (general purpose photovoltaic device model) software at different series resistances is analyzed.

Device Structure

A bulk heterojunction is a mixture of two conjugated polymers of electron donor (P3HT) and electron acceptor molecules (PCBM) that allows absorption of light, generation of excitons, splitting of excitons at the donor-acceptor interface, and systematic transport of positive and negative charges to opposite electrodes. The bulk heterojunction is usually created by casting two conjugated polymers and then allowing the two phases to separate. The two conjugated polymers self-assemble into an interpenetrating network connecting the two electrodes [21]. The advantage of the BHJ structure is that most of the generated excitons reach a nearby donor-acceptor interface, where they dissociate into free charge carriers. The structure of the bulk heterojunction solar cell is shown in Figures 1 and 2.

Figures 1 and 2. Bulk heterojunction solar cell.

In a bulk heterojunction solar cell, P3HT (poly(3-hexylthiophene)) is a good electron donor material that transports positive holes, and PCBM ([6,6]-phenyl-C71-butyric acid methyl ester) is a good electron acceptor material; it effectively transports electrons from molecule to molecule. The indium tin oxide (ITO) film is used as a transparent electrode, since it has high transmittance in the visible region and good conductivity. The material PEDOT:PSS, or poly(3,4-ethylenedioxythiophene) polystyrene sulfonate, is an electron blocking layer with a work function of 5.2 eV, used as a hole transporting layer that improves the work function of the ITO layer. Such layers may be used as buffers between the electrodes and the active layer to block electron and hole transfer in the wrong direction.

Simulation

The bulk heterojunction photovoltaic device is electrically simulated by the GPVDM software at different series resistances. This software is specially designed to simulate bulk heterojunction organic solar cells, such as those based on the P3HT:PCBM materials. The model contains both electrical and optical parts, enabling current density-voltage characteristics [22]. The simulation of an organic photovoltaic device can be separated into two parts: first electrical simulation and second optical simulation. To describe the carrier (electron and hole) transport, the bipolar drift-diffusion equations (1) and (2) are solved in position space for electrons and holes.
These are given (in the standard drift-diffusion form used by GPVDM) as

J_n = q μ_e n_f (∂E_LUMO/∂x) + q D_n (∂n_f/∂x)   (1)

J_p = q μ_h p_f (∂E_HOMO/∂x) − q D_p (∂p_f/∂x)   (2)

where J_n and J_p are the electron and hole current densities, μ_e and μ_h are the electron and hole mobilities, n_f and p_f are the concentrations of electrons and holes along the Fermi level, and E_LUMO and E_HOMO are the energies of the LUMO and HOMO levels. In this device model, there are two types of electrons (holes), i.e., free electrons (holes) and trapped electrons (holes). The free electrons (holes) have a finite mobility of μ_e0 (μ_h0), while trapped electrons (holes) cannot move at all and have a mobility of zero [23]. To calculate the average mobility, the ratio of free carriers to all carriers is multiplied by the free carrier mobility, which is expressed in equation (3):

μ_e(n) = μ_e0 · n_free / (n_free + n_trap)   (3)

Thus if all carriers were free the average mobility would be μ_e0, and if all carriers were trapped the average mobility would be zero. It should be noted that only μ_e0 and μ_h0 are used in the model for computation, and μ_e(n) is an output parameter. In an organic solar cell, photo-generated excitons are dissociated into electrons and holes at the donor-acceptor heterojunction. It is considered that the exciton dissociation probability at the heterojunction is high enough that the exciton concentration at the donor-acceptor interface is zero. Thus, only the excitons generated within a distance of the exciton diffusion length from the heterojunction can contribute to the photocurrent generation. The electrical simulation window is shown in Figure 3.

Characteristics of Organic Solar Cell: In order to determine the ability of an organic solar cell to convert incident solar energy into electricity, the current-voltage characteristics are measured both in the dark and under illumination. The current-voltage characteristic is shown in Figure 4. The power conversion efficiency is

η = P_out / P_in   (5)

where P_in is the power density of the light and P_out is the electric power generated by the bulk heterojunction solar device at the maximum power point.

Result and Discussion

In this research work, the bulk heterojunction solar cell is designed with the GPVDM software to study the J-V characteristics at different series resistances. The simulation parameters are shown in Table 1. The illuminated J-V characteristics are simulated at series resistances of 1 Ω, 3 Ω, 5 Ω, and 7 Ω. The J-V characteristic curves are shown in Figure 5. It is clear from the J-V characteristic curves that the short circuit current decreases continuously with increasing series resistance: at 1 Ω the short circuit current is maximum, and at 7 Ω it is minimum. The active area makes the photo-generated current travel a longer distance before it is collected at the electrodes. The position of the contacts in the device causes current to flow primarily in the x-direction, so resistance should depend only on the length of the device. In the study, a thick grid will be applied to maintain a low effective resistance of the ITO even as the area of the solar cell becomes large. As series resistance increases, the voltage drop between the junction voltage and the terminal voltage becomes greater for the same current. The result is that the current-controlled portion of the J-V curve begins to sag toward the origin, producing a significant decrease in the terminal voltage and a slight reduction in I_SC, the short circuit current. Very high values of R_S will also produce a significant reduction in I_SC; in these regimes, series resistance dominates and the behavior of the solar cell resembles that of a resistor.
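Independently of GPVDM's drift-diffusion treatment, the way R_s drags the J-V curve toward the origin can be reproduced with the textbook single-diode model. The Python sketch below solves the implicit diode equation for the four series resistances studied here; every parameter value is illustrative rather than taken from the paper's simulation:

```python
import numpy as np
from scipy.optimize import brentq

q, k, T = 1.602e-19, 1.381e-23, 300.0   # charge, Boltzmann constant, temperature
Jph, J0, n_id = 120.0, 1e-4, 1.8        # A/m^2 and ideality factor, illustrative
area = 1e-4                             # 1 cm^2 cell, expressed in m^2

def J_of_V(V, Rs):
    """Solve the implicit single-diode equation for the current density J:
    J = Jph - J0 * (exp(q*(V + J*Rs*area)/(n_id*k*T)) - 1)."""
    f = lambda J: Jph - J0 * (np.exp(q * (V + J * Rs * area) / (n_id * k * T)) - 1.0) - J
    return brentq(f, -10 * Jph, 10 * Jph)

for Rs in (1.0, 3.0, 5.0, 7.0):         # series resistances studied in the paper
    V = np.linspace(0.0, 0.75, 400)
    J = np.array([J_of_V(v, Rs) for v in V])
    Jsc, Voc = J[0], V[np.argmin(np.abs(J))]
    FF = np.max(J * V) / (Jsc * Voc)    # fill factor from the maximum power point
    print(f"Rs = {Rs:.0f} ohm: Jsc = {Jsc:6.2f} A/m^2, Voc = {Voc:.3f} V, FF = {FF:.3f}")
```

With these numbers the short circuit current barely moves, while the fill factor falls steadily with R_s, matching the qualitative picture above: a pronounced sag near the maximum power point and only a slight reduction in I_SC at moderate R_s.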
These effects are shown for crystalline silicon solar cells in the J-V curves [24].

Conclusions

In this research study, the electrical simulation of a P3HT:PCBM based bulk heterojunction solar cell has been carried out. It is found that the J-V characteristic of the organic solar cell varies with the series resistance. It is concluded that at 1 Ω series resistance a smooth curve is found, at which the maximum short circuit current as well as the maximum efficiency is obtained. It has been demonstrated by using different cathode contacts that the high frequency resistance depends only on the active layer blend composition, and not on the outer contact structure. Therefore, it can be concluded that the series resistance has a pronounced effect on the J-V characteristics of organic solar cells.
2,410.8
2017-01-01T00:00:00.000
[ "Physics", "Materials Science" ]
Cold forging tool for gear accuracy grade improvement by a different shrink fitting method The manufacturing of gear profiles by metal forming is widely used in industry due to its quality and production capability. Direct cold extrusion has this characteristic and, with the support of peripheral technologies, allows the development of asymmetric parts with complex geometry and near net shape. These resources, added to the great experience of a Brazilian forging company with a strong presence in the cold forging market, allowed the development of a cold extrusion process to produce spur gears using the low carbon steel alloy described as SAE 10B22. The goal of this study was to develop the whole process, the precision tooling project and manufacture, as well as the experimental evaluation of the process. The tools were manufactured with high speed steel AISI M2, with a hardness in the range from 61 up to 63 HRc. The shrink rings were manufactured using steels with more toughness, such as S1 and H13. The application of shrink rings for the prestressing of tooling was evaluated using two different methods. The first uses conventional shrink rings of tool steel, while the second is the stripwinding concept developed by the company STRECON.

Introduction

In recent years, the automotive sector has presented several proposals for quality improvement and efficiency increase applied to topics such as fuel consumption and transmission systems. The car manufacturers have the challenging mission to reduce weight and to increase component strength in order to meet the current strict requirements of the automotive market. In general, it is said that products may obtain greater resistance through forging technology. To obtain precision products, cold forging has become a commonly used technology. Technological development and quality control from the beginning of the process up to the final product must be observed to obtain such progress in precision cold forging, which is connected with high quality materials and cutting edge techniques. In this article, some results obtained through precision cold forging and assistive technologies are presented.

Material and method

This article illustrates the comparative results between the dimensions of gears produced by cold extrusion in a conventional double shrink ring tooling and in a high strength shrink ring system developed by the STRECON Company. The forged product is a pinion for the starter drive in SAE 10B22 grade steel, which is often used in the automotive market. This is a nine-tooth pinion with normal module 2.11 and a pressure angle equal to 12°. The lead length is 8 mm, as Figure 1 shows. The tools are shown in Figure 2. The yield strength (k_f) for SAE 10B22 is 302 MPa. The normal shrink ring tool has two stress rings made in steel grades S1 and H13, which are the inner and outer ring respectively. The interference fit is 0.3 mm for the inner ring and die. For the inner and outer ring, the interference fit is 0.25 mm. The average hardness was in the range of 56-58 HRc and 46-48 HRc, respectively. The yield strength for S1 is 2,150 MPa and for H13 is 1,380 MPa, for such hardness. The yield strength for the high speed steel M2 applied in the dies is 3,250 MPa. The gear error parameters were measured in a 3D machine by the software Quindos Gear. They are described below.
Results

The chosen methodology for data acquisition was the measurement of each tooth flank on both the right and left sides. The gear accuracy grade was evaluated according to the greatest numeric result in a group of four consecutive flanks. The gear flank deviations for the die cavity are shown in Tables 2 and 3. According to ISO 1328-1, the lower the gear accuracy grade, the better the accuracy of the involute profile. Thus, through the results in Tables 2 and 3, it is possible to verify the gain in tool quality with the usage of the STRECON system. Both tools were tested in an eccentric press with 300-ton force. Tables 4 and 5 show the results of one sample produced by each tool.

Discussion

The tendencies in precision forging, such as net shape forging of increasingly complex parts according to Osakada and Tekkaya (2007), ecological manufacturing, and cold forging of stainless steel and lightweight materials, lead to steadily increasing tool loads. Among the several measures to improve tool performance and service life (Lange et al., 1992), the prestressing of the forging dies is one key design parameter. The importance of prestressing increases with the tool load: the higher the forming load, the higher the level of tensile stresses in the forging die, according to Groenbaek and Hinsel (2000). A more developed approach to prestressing is to look at the stress/strain behavior of the forging die. This approach examines the full load cycle of the forging die, including the stress range and the physical movement of the forging die (i.e., strain behavior). As shown in Figure 3, the pole position of the forging die will be at a certain level of compressive stress, which is determined by the interference fit.

Figure 3. Principal stress-strain response in the critical point of a prestressed forging die.

In principle, prestressing can be performed by two generic and mutually different methods, as shown in Figure 4. One method is prestressing by heat shrinkage, in which the shrink ring is enlarged by preheating, for example at 400 °C (Brecher et al., 2008). Another method is prestressing by press fitting, where the forging die is pushed into the shrink ring by means of a tool assembly press. The choice depends on parameters such as the size of the forging die and the level of the interference fit (Engel and Geiger, 2008). An alternative approach to obtain high-stiffness forging tools would be to integrate tungsten carbide as part of the prestressing system, for example by having the inner ring of the double ring system made of tungsten carbide (Lund and Andresen, 2015), or to use the technology developed by the STRECON Company. Figure 5 shows a cut-view example of the concept of STRECON containers. A shrink ring made by the stripwinding technique offers a prestressing tool system stronger than standard shrink rings.
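The compressive prestress that an interference fit imposes at the die bore, which Figure 3 tracks over the load cycle, can be estimated with the classic Lamé thick-walled cylinder relations. The following Python sketch is a minimal illustration for a single die/ring pair: only the 0.3 mm interference is taken from the tooling described above, while the modulus, the radii, and the equal-material simplification are assumptions:

```python
# Contact pressure from a shrink fit between a die and one stress ring,
# using the classic Lame thick-walled cylinder relations. Only the 0.3 mm
# interference comes from the tooling above; E and all radii are assumed
# for illustration, and both parts are treated as the same steel.
from math import inf  # (no external dependencies needed)

E = 210e9                      # Young's modulus of tool steel, Pa (assumed)
delta = 0.30e-3                # diametral interference, m
a, b, c = 0.015, 0.030, 0.060  # die bore, contact, and ring outer radii, m (assumed)

# radial interference for equal E and Poisson ratio (nu cancels out):
# delta/2 = (p*b/E) * [(b^2 + a^2)/(b^2 - a^2) + (c^2 + b^2)/(c^2 - b^2)]
geom = (b**2 + a**2) / (b**2 - a**2) + (c**2 + b**2) / (c**2 - b**2)
p = (delta / 2.0) * E / (b * geom)          # contact pressure, Pa

# compressive hoop prestress induced at the die bore (r = a)
sigma_hoop_bore = -2.0 * p * b**2 / (b**2 - a**2)
print(f"contact pressure   : {p / 1e6:7.0f} MPa")
print(f"hoop stress at bore: {sigma_hoop_bore / 1e6:7.0f} MPa")
```

For a double ring system such as the one used here, the same relations are applied pairwise (die/inner ring and inner/outer ring) and the resulting stress fields are superposed; dedicated tooling software or FEM is normally used for the final design.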
Conclusions

The results in Tables 4 and 5 showed a significant difference between the quality of the die cavity and that of the cold forged product. The residual stresses and the elastic tool deformation, which impose a plastic deformation on the forged product, are some of the main points that affect the gear accuracy grade of the pinion. According to the results, it is possible to verify that all the deviations of the forged products showed dimensional scatter with respect to the results found in their respective dies. Individually, the die assembled in the STRECON container presented profile, helix, pitch and runout deviations lower than the die with normal shrink rings. Therefore, it yields a more precise gear accuracy grade. The cold forged pinions produced by both tooling systems showed similar profile deviation. However, the helix, pitch and runout deviations had better performance for pinions produced by the STRECON container die. The improvement in gear accuracy grade was most evident for these deviations. Electrical Discharge Machining technology will be the scope of new research on die cavity manufacturing. The expectation is that refining the geometry of the gear flank, and thus reducing its dimensional scatter, can provide a gear accuracy grade in cold forged products as high as that obtained by some specific machining processes.

Figure 4. Principle method of prestressing by heat shrinkage (left) and by press fitting (right).
Figure 5. Example of STRECON container.
Table 1. Chemical composition for SAE 10B22 steel.
Table 2. Gear flank deviations and accuracy grade of die cavity assembled in a tool with normal shrink ring.
Table 3. Gear flank deviations and accuracy grade of die cavity assembled in a tool with STRECON container.
Table 4. Gear flank deviations and accuracy grade of a forged pinion produced by the normal shrink rings tool.
Table 5. Gear flank deviations and accuracy grade of a forged pinion produced by the STRECON tool.
1,835.4
2018-12-01T00:00:00.000
[ "Engineering", "Materials Science" ]
FACTORS AFFECTING STUDENT'S QUALITY IN HIGHER EDUCATION This study aims to research the influence of student's behavior, lecturer's competency, and school's facilities on student's quality at Sekolah Tinggi Manajemen Informatika dan Komputer (STMIK) Mikroskil and Sekolah Tinggi Ilmu Ekonomi (STIE) Mikroskil. This research is a descriptive study with a quantitative method approach. The population in this study was undergraduate students totaling 3,493 students; the sample size used in this study is 360 students, obtained using the Slovin formula. The sampling technique is a stratified random sampling technique applied to each study program. The data analysis method is multiple linear regression analysis. The results showed that student's behavior, lecturer's competency, and school's facilities had a positive and significant effect on student's quality, explaining 43.5% of its variation. Future researchers are suggested to carry out research on the process and management of learning in improving student's quality. Cite this article as: William and Lubis, T. W. H. 2021. Factors Affecting Student's Quality in Higher Education. Jurnal Aplikasi Manajemen, Volume 19, Number 1, Pages 35-43. Malang: Universitas Brawijaya. http://dx.doi.org/10.21776/ub.jam.2021.019.01.04.

The 2010 population census shows that Indonesia's population dependency ratio is 51.31. This shows that for every 100 people of productive age (15-64 years) there are 51 people of unproductive age (0-14 and 65+) (BPS, 2011). This census data also shows that Indonesia will enter a demographic dividend period in the range of 2035-2040. The government's efforts have shown its seriousness in improving human resources (HR) in Indonesia. This is indicated by the increase in the literacy rate of Indonesia's population from 88.6% in 2000 to 95.4% in 2016 (Secretariat, 2018) and the increase in Indonesia's human development index from 66.53 in 2010 to 71.39 in 2018 (BPS, 2019). To improve the quality of human resources to support optimal economic growth, the role of tertiary institutions in the demographic transition period is needed to produce a quality workforce (Cuaresma et al., 2014).
The low enrollment rate of universities means that many countries are unable to fully benefit from the demographic dividend (Bloom and Rosovsky, 2001). This is also in line with the statement of the Minister of Finance of the Republic of Indonesia, which lists 4 steps that need to be taken to optimize the demographic dividend so that Indonesia can avoid the middle income trap, namely: 1) improving the quality of human resources, 2) developing infrastructure, 3) reforming the bureaucracy, and 4) transparency and collaboration (Jannah, 2018). In the efforts of universities to produce good quality human resources, attention to various aspects is required. Mediawati (2010) and Isnaini et al. (2015) found that a lecturer's competency affects learning achievement and student satisfaction. Meanwhile, Hendikawati (2011) and Riyani (2012) found that besides the competency of lecturers, the college environment also influences learning achievement. Besides lecturers and higher education facilities and infrastructure, student's behavior, as an important subject in the implementation of higher education, needs to be a priority, as Hanifah and Abdullah (2001) and Sari (2013) found in research showing that learning behavior affects academic achievement. In this study, the researchers aimed to research the influence of student's behavior, lecturer's competency, and school's facilities on student's quality. The purpose of this study was to determine whether there was an influence of 1) student's behavior, 2) lecturer's competency, and 3) school's facilities on the student's quality of 3,493 undergraduate students at Sekolah Tinggi Manajemen Informatika dan Komputer (STMIK) Mikroskil and Sekolah Tinggi Ilmu Ekonomi (STIE) Mikroskil, and then to formulate a regression model for student's quality. The results of this study are expected to be input in developing strategies to form good quality students and to produce quality graduates who are ultimately able to participate in preparing good quality human resources to face Indonesia's demographic dividend. The theoretical framework in this study can be seen in Figure 1 below.

HYPOTHESIS DEVELOPMENT

The Effect of Student's Behavior on Student's Quality

Student's behavior plays an important role in student's achievement; it includes the learning habits, intention, and skills possessed by the student. Good behavior will help in achieving good academic achievement. Tokan and Imakulata (2019) found that the more often a student shows positive learning behavior, the better their academic achievement. According to Hanifah and Abdullah (2001), student's behavior can be explained by a) the habit of attending lectures, b) the habit of reading textbooks, c) visits to the library, and d) examination habits. Thus, the following hypothesis is proposed:

H1: Student's behavior has a significant effect on student's quality.

The Effect of Lecturer's Competency on Student's Quality

The lecturer's competency is important in improving the quality of students. Mastery of the material by lecturers greatly affects the level of student achievement, and lecturers need to develop strong competencies. Based on this, the proposed hypothesis is:

H2: Lecturer's competency has a significant effect on student's quality.

The Effect of School's Facilities on Student's Quality

The school's facilities are one of the factors that can encourage higher achievement in higher education.
Furniture, educational equipment, educational media, books, electronic books, information and communication technology facilities, sports facilities, classrooms, libraries, laboratories, lecturers' rooms, and leadership rooms are some examples of school facilities required by Indonesian law (Permenristekdikti, 2015). In line with this law, Arshad et al. (2019) found that there is a significant influence of school facilities on student's academic performance. Research conducted by Ashrof and Subri (2017) showed that eight factors can be improved in higher education facilities to improve academic abilities, namely: building age, facility conditions, temperature, lighting, sound, interior color, class size, and school size. Hence, the proposed hypothesis is:

H3: School's facilities have a significant effect on student's quality.

The Effect of Student's Behavior, Lecturer's Competency, and School's Facilities on Student's Quality

Good quality students are expected by every university. Qualified students will be able to contribute to the world of work and to offer appropriate solutions for society's problems. Students' quality is affected by behavior (Tokan and Imakulata, 2019), lecturer's competency (Hanushek et al., 2019), and school facilities (Arshad et al., 2019). Meanwhile, Indonesian law (Permenristekdikti, 2015) states that student's quality is affected by the learning process, learning assessment, the specifications of lecturers, and higher education facilities and infrastructure. Therefore, the proposed hypothesis is:

H4: Student's behavior, lecturer's competency, and school's facilities have a significant effect on student's quality.

METHOD

This research is descriptive research with a quantitative approach. The population in this study is undergraduate students at STMIK Mikroskil and STIE Mikroskil, amounting to 3,493 students. The sample used in this study amounted to 360 students, obtained using the Slovin formula from the entire population. The sampling technique was a stratified random sampling technique in each study program, with a sampling fraction of 360/3,493 = 0.103 multiplied by the population of each study program, as shown in Table 1. The respondents responded to the questionnaire, which was designed using a Likert scale with five levels: Strongly Agree (5), Agree (4), Neither Agree Nor Disagree (3), Disagree (2), and Strongly Disagree (1). Descriptive statistical analysis is used to analyze the data by describing the collected data as is, without intending to draw generalized conclusions (Ghozali, 2013). Validity and reliability tests were applied to the questionnaire data. Then the data were tested against the multiple regression analysis assumptions, which are: 1) normality test, 2) multicollinearity test, and 3) heteroscedasticity test. After passing the assumptions, the data were analyzed by using multiple regression analysis to determine the effect of student's behavior (X1), lecturer's competency (X2), and school's facilities (X3) on student's quality (Y). The regression model can be formulated as follows:

Y = a + b1·X1 + b2·X2 + b3·X3 + e

where: Y = student's quality; a = constant; X1 = student's behavior; X2 = lecturer's competency; X3 = school's facilities; b1, b2, b3 = regression coefficients; e = standard error.
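As a minimal illustration of estimating this model, and of where the t, F, and R² statistics discussed next come from, the following Python sketch fits the same functional form with statsmodels on synthetic data (the generated arrays are stand-ins, not the study's survey responses):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 360                                   # sample size used in the study
X = rng.normal(3.8, 0.5, size=(n, 3))     # stand-ins for X1, X2, X3 scale scores
y = (1.0 + 0.25 * X[:, 0] + 0.30 * X[:, 1] + 0.20 * X[:, 2]
     + rng.normal(0.0, 0.4, size=n))      # synthetic Y with noise

model = sm.OLS(y, sm.add_constant(X)).fit()   # add_constant supplies the intercept a
print(model.summary())                    # b coefficients, t-tests, F-test, R^2
```

The summary table mirrors the reporting in this paper: per-coefficient t statistics for the partial tests, the overall F statistic for the simultaneous test, and R² for the coefficient of determination.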
The next stage is the hypothesis test, which consists of a partial test (t-test) and a simultaneous test (F-test) to determine the partial and simultaneous effects of the independent variables on the dependent variable. This is followed by the test of the coefficient of determination (R²) to measure how far the model is able to explain the variation in the dependent variable.

Descriptive Statistics

An overview of the research variables, i.e. student's quality, student's behavior, lecturer's competency, and school's facilities, is presented in the descriptive statistics as follows.

Student's Quality (Y)

Student's quality was measured by using 9 questions with a minimum value of 1 and a maximum value of 5. The lowest mean was 3.84, for the item on mastering the concepts, theories, and methods of the field of study through the learning process; meanwhile, the 2nd question got the highest mean score, 4.63, about appreciating diversity. Most of the students strongly agreed with the statements about morals, ethics, religion, law-abidingness, and responsibility, as shown by their modes.

Student's Behavior (X1)

Student's behavior was measured by using 15 questions with a minimum value of 1 and a maximum value of 5. The lowest mean was 2.44, for the 11th question, about visiting the library regularly; meanwhile, the 4th question got the highest mean score, 4.17, about discussing learning materials with other students. Only 2 questions got a mode of 5, namely those about discussing learning materials and preparing for class. The activities related to reading alternative sources of material and visiting the library regularly were the questions with low mode scores.

Lecturer's Competency (X2)

Lecturer's competency was measured by using 16 questions with a minimum value of 1 and a maximum value of 5. The lowest mean was 3.88, for the 13th question, about how well the lecturer knew their students; meanwhile, the 8th question got the highest mean score, 4.45, about how well the lecturer behaved. Students strongly agreed that the lecturers master the use of learning media and technology, appreciate students' answers and ideas, have a polite attitude, treat all students fairly, and have tolerance for students' diversity.

School's Facilities (X3)

School's facilities were measured by using 19 questions with a minimum value of 1 and a maximum value of 5. The lowest mean was 3.53, for the 17th question, about whether the school has a large area; meanwhile, the 2nd question got the highest mean score, 4.51, about how building renovation affects students' enthusiasm for studying. Overall, most of the students strongly agreed with most of the questionnaire's questions; only 3 questions got 4 as the mode value, concerning toilet availability and the size of the school.

Instrument Test

To conduct instrument testing, reliability and validity tests were used. The reliability test assesses the consistency and stability of all items in the questionnaire, as stated by Cronbach's alpha. The closer Cronbach's alpha is to 1, the higher the internal consistency reliability (Sekaran and Bougie, 2016). The Cronbach's alpha values for Y, X1, X2, and X3 were 0.892, 0.909, 0.951, and 0.927, respectively. These values mean all the questionnaires are consistent and stable.
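For reference, Cronbach's alpha for a k-item scale is α = k/(k − 1) · (1 − Σσᵢ²/σ²_total), where σᵢ² are the item variances and σ²_total is the variance of the summed score. A minimal Python sketch on randomly generated responses (illustrative only, not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (respondents x items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# hypothetical responses: 360 respondents, 9 items driven by one latent score
rng = np.random.default_rng(0)
latent = rng.normal(3.5, 0.8, size=(360, 1))
scores = np.clip(np.rint(latent + rng.normal(0, 0.6, size=(360, 9))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.3f}")
```

Because the simulated items share a common latent score, the computed alpha comes out high, just as the scales above do.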
The validity test measures whether the questionnaire describes well what it is intended to measure. The Pearson correlation is used to determine the validity of the questionnaire items: the questionnaire r value must be bigger than the r value in Pearson's correlation table. Since the r value in Pearson's correlation table with degrees of freedom (df) = 360 − 2 = 358 and a 95% confidence interval is 0.113, all items in the questionnaire are considered valid (Ghozali, 2013).

Multiple Regression Analysis Assumptions

Three assumptions need to be met before conclusions about a population based on the sample used for the regression can be drawn. A multiple regression analysis should satisfy the following assumptions: a) normality, b) no multicollinearity, c) no heteroscedasticity (Ghozali, 2013). Table 2 shows the result of the Kolmogorov-Smirnov test with a test statistic of 0.46 and a significance of 0.065; since the significance is larger than 0.05, the residuals are normally distributed. Secondly, multicollinearity was checked by using tolerance and the variance inflation factor (VIF). Table 2 shows all tolerance values are above 0.1 and all VIF values are below 10, which means the data are free of multicollinearity. Lastly, heteroscedasticity was checked by using Spearman's rank correlation. Table 2 shows the Spearman correlation values for all the variables with the residuals; all the significance values are larger than 0.05, which means the data are free from heteroscedasticity.

Hypothesis Test

Hypothesis testing determines whether the null hypothesis can be rejected in favor of the alternate hypothesis. The null hypothesis can be rejected (and the alternate hypothesis accepted) with a certain degree of confidence based on the sample data (Sekaran and Bougie, 2016). Figure 5 shows all t values > 1.984 and all significance values < 0.05, which means we reject H0: student's behavior, lecturer's competency, and school's facilities each partially affect student's quality positively and significantly. The F-test shows the F value is 91.329 > F table (3, 356, 0.05) = 2.605 with a significance value < 0.05, which means all independent variables simultaneously affect student's quality positively and significantly. The coefficient of determination of the model (R²) is 0.435, which means the independent variables of this research explain 43.5% of the variation in student's quality. Therefore, based on Figure 5, the regression model for this research takes the form Y = a + b1·X1 + b2·X2 + b3·X3 + e, where: Y = student's quality; X1 = student's behavior; X2 = lecturer's competency; X3 = school's facilities; e = standard error.

DISCUSSION

The Effect of Student's Behavior on Student's Quality

The results showed that partially the student's behavior variable had a significant effect on student's quality; this is in line with the research conducted by Tokan and Imakulata (2019) and Hanifah and Abdullah (2001). Based on the questionnaires obtained, students were able to focus their attention on learning materials, ask lecturers for explanations regarding material they did not understand, and catch up on material they had missed. Students also always discuss learning materials together with other students, and they prepare materials before lectures by downloading e-books from e-learning platforms or other platforms used by the lecturers who teach the courses. Students are also able to focus on the material being read, mark important parts, and make notes on learning material regularly.
Students always study regularly, well, and with discipline, practice doing exercise questions, and are confident in facing exams. However, students only depend on the material provided by the lecturer, so they rarely read other textbooks that support the learning material. Students have not taken advantage of their spare time to regularly visit the library, and students also feel uncomfortable reading in the library. Because this study shows the influence of student's behavior on student's quality, it is necessary to motivate students to visit the library so that they can take advantage of the learning resources there to improve student's quality.

The Effect of Lecturer's Competency on Student's Quality

The results show that partially the lecturer's competency variable has a significant effect on the quality of students; this is in line with the research of Isnaini et al. (2015) and Hanushek et al. (2019). Students thought that lecturers have a polite attitude in words and actions and have tolerance for diversity. Lecturers master the use of media and learning technology, listen to and appreciate student answers and ideas, have authority as lecturers, and treat students fairly. The lecturers are also able to maintain order in the administration of lectures, can guide students, do not let students give up when facing difficult material, can explain the relevance of the areas of expertise being taught in the context of life, master the latest issues in the field being taught, and want to know the reason for choosing the answer to each question asked. Lecturers are expected to increase the use of creative ways of improving student understanding of the topic, to increase student involvement in research/study/development/engineering activities carried out by lecturers, and to get to know the students taking their classes, so that students can feel cared for, ultimately encouraging the improvement of student's quality.

The Effect of School's Facilities on Student's Quality

The results showed that partially the school's facilities variable had a significant effect on the quality of students; this is in line with the research of Arshad et al. (2019), Ashrof and Subri (2017), and Riyani (2012). Based on the questionnaire, students thought that the campus building was still suitable for use and that renovation of the campus building had increased students' enthusiasm for studying. The facilities provided are complete and adequate to support the learning process. The temperature, light, noise, and interior color in the rooms have been able to provide comfort and increase the enthusiasm for learning. The large class size provides flexibility for students to interact with fellow students and also with lecturers. Although the toilets are always clean and tidy, the number of toilets needs to be increased so that students can use them more comfortably. The size of the campus also needs to be increased so that it can add spaces to better support lecture activities and improve the quality of students.

The Effect of Student's Behavior, Lecturer's Competency, and School's Facilities on Student's Quality

The results showed that simultaneously the student's behavior, lecturer's competency, and school's facilities variables had a significant effect on student's quality. This is in line with research by Tokan and Imakulata (2019), which stated that learning achievement is influenced by learning behavior.
This study also confirmed that the competency of lecturers, which includes pedagogical, personal, professional, and social competencies, influences the quality of students (Undang-Undang No 14 Tahun 2005 tentang Guru dan Dosen, 2005), in line with the research conducted by Hanushek et al. (2019). This study also supports the research by Arshad et al. (2019) and Ashrof and Subri (2017), whose results showed that the school's facilities are able to affect the quality of students. This study also strengthens the enforcement of learning facilities and infrastructure standards within the national higher education standards in improving the quality of students (Permenristekdikti, 2015). Because the three independent variables in this study, namely student's behavior, lecturer's competency, and higher education facilities, have a significant effect on the quality of students, they should be taken into consideration in student's quality improvement programs, and they need further attention to achieve quality graduates.

CONCLUSIONS

Based on the results of the research and the discussion of the influence of student's behavior, lecturer's competency, and school's facilities on the quality of students, it was concluded that student's behavior, lecturer's competency, and school's facilities have a positive and significant effect on the quality of students at STMIK Mikroskil and STIE Mikroskil, both partially and simultaneously. Therefore, students need to be encouraged to enrich themselves by accessing other sources of knowledge to support the learning materials. Lecturers also need to actively develop creativity to encourage student understanding. Finally, the campus capacity needs to be increased so that it can provide more learning support space.

RECOMMENDATIONS

The quality of students is an important thing to pay attention to in the management of higher education. All higher education management activities are aimed at improving the quality of students. Having good knowledge of the things that affect the quality of students will be an important key in formulating higher education management strategies. This study is limited to analyzing student's behavior, lecturer's competency, and school facilities in relation to student's quality. In further research, research can be carried out on the process and management of learning in improving the quality of students, as required by the national higher education standards (Permenristekdikti, 2015).
4,951.8
2021-03-01T00:00:00.000
[ "Education", "Economics" ]
Control Access Point of Devices for Delay Reduction in WBAN Systems with CSMA/CA Due to the gathering of sickrooms and consultation rooms in almost all hospitals, the performance of wireless device systems is degraded by the increase of collision probability and waiting time. In order to improve the performance of the wireless device system, a relay is added to control the access point, and the access of devices is then distributed. The concentration at one access point is avoided, and the performance of the system is then expected to improve. A discrete time Markov chain (DTMC) is proposed to calculate the access probability of devices in a given time slot. The collision probability, throughput, delay, bandwidth and so on are theoretically calculated based on the standard IEEE802.15.6, and the performance of the system with and without the relay is compared. The numerical results indicate that the performance of the system with a controlled access point is higher than that of the system without a controlled access point when the number of devices and/or the packet arrival rate are high. However, the system with a controlled access point is more complicated; there is a trade-off between performance and complication.

Introduction

1. The Problem of WLAN in Hospitals

In almost all hospitals, sickrooms and consultation rooms are each gathered at one place for the convenience of patients. It may be good for patients and hospital sites; however, from the viewpoint of the wireless system, there is a problem. Since medical devices access the wireless local area network (WLAN) base station via a wireless channel, collisions occur when more than one device accesses the channel at the same time, depending on the number of devices and the number of data packets generated by each device per second. Moreover, many devices access the WLAN base station that is close to the consultation rooms (Wireless LAN 2 in Figure 1), whereas few devices access the WLAN base station that is far from the consultation rooms (Wireless LAN 1). The access of devices concentrates at Wireless LAN 2; consequently, the probability of collision increases, the throughput decreases, and the delay increases. As a result, the bandwidth efficiency decreases.

Aims and Motivations

Since many body functions are traditionally monitored separately, at intervals of a considerable period of time, it is hard for doctors to know what is really happening. This is the reason why the monitoring of movement and all body functions in daily life is essential. The delay of patients' data, as well as the collision of data packets, may lead doctors to misunderstand, and information may be lost by timeout. In order to decrease the delay and increase the throughput, a relay can be set to avoid the concentration at one WLAN base station. As shown in Figure 1, some devices access Wireless LAN 1 via the relay; therefore, the number of devices that access Wireless LAN 2 is reduced, and the bandwidth efficiency is expected to be higher. However, the delay due to signal processing at the relay should be considered. In scheme 1, all devices access Wireless LAN 2, whereas in scheme 2 the relay is set and devices access the channel via either Wireless LAN 1 or 2. The performance of both schemes 1 and 2 is mathematically analyzed based on the standard IEEE802.15.6. The throughput, delay and bandwidth efficiency of both schemes are numerically compared.

Related Works

In response to the emergence of wireless body area networks (WBAN), the standard IEEE802.15.6 was established in Feb.
2012 [1]. An overview of the standard and performance analyses of WBAN based on bandwidth efficiency and delay were presented in [2]-[4]. In these papers, however, the WBAN is assumed to consist of only one device that keeps transmitting a data packet; packet arrival rates and collisions due to transmissions by multiple devices at the same time were not considered. On the other hand, the physical layer (PHY), media access control (MAC) layer and network layer of WBAN were researched in [5] [6]. Furthermore, control at the MAC layer was analyzed to improve the performance of WBANs [7] [8]. The transmission of implanted devices was considered under conditions of low transmit power and low harmful influence on the human body [9] [10]. The performance of WBANs that have multiple devices and multiple user priorities was analyzed in both saturation [11] [13] [14] and non-saturation [12] conditions. Additionally, WBANs were analyzed in further detail when a superframe with beacon mode and the access phase lengths were taken into consideration in [13] [14], respectively. However, the effects of the number of devices, packet arrival rates, packet sizes, etc. on the throughput of each device and on the total throughput, delay and bandwidth efficiency of the system have not been discussed.

Organization of the Paper

The rest of the paper is organized as follows. We introduce a brief overview of the PHY and MAC layers of the standard IEEE802.15.6 in Section 2. The discrete time Markov chain is proposed and the performance of both schemes 1 and 2 with CSMA/CA is analyzed in Section 3. The numerical evaluation of both schemes is described and compared in Section 4. Finally, Section 5 concludes the paper.

Brief of Standard IEEE802.15.6

A brief overview of the parts of the standard related to our research is given in this section. Further details of the standard can be found in [1] [2].

PHY Layer

IEEE802.15.6 defines three different PHYs, i.e., human body communication (HBC), narrowband (NB) and ultra wideband (UWB). Furthermore, the NB PHY is divided into several frequency bands, and the data rate, symbol rate, etc. of every frequency band are different. We analyze the system in the 2400 MHz-2483.5 MHz band as an example; the analysis in a different frequency band is similar. The physical protocol data unit (PPDU) of the NB PHY is described in Figure 2. The components of the PPDU are fixed, except the payload. The parameters of the PHY layer are summarized in Table 1.

MAC Layer

The algorithm of CSMA/CA based on IEEE802.15.6 is described as follows. Each device sets its backoff counter to a random integer uniformly distributed over [1, W], where the contention window W lies between W_min and W_max. The values of W_min and W_max vary depending on the user priorities (UPs). In this paper, however, the UP of all devices is assumed to be the same, the zero-th UP; the extension to multiple UPs is straightforward. As shown in Figure 3, a device starts decrementing its backoff counter by one for each idle CSMA slot. When the backoff counter reaches zero, the device transmits its packet. Once the channel is busy because of the transmission of another device, the device locks its backoff counter until the channel is idle. The transmission fails if the device fails to receive an acknowledgement (ACK) due to a collision or an inability to decode. W is doubled for even numbers of failures until it reaches W_max. The maximum number of backoff stages is bound by a retry limit m: once the number of retries exceeds the predefined retry limit m, the packet is discarded. When the transmission is successful, W is reset to W_min. The W values of the zero-th UP are represented in Table 2.
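The backoff rules just described can be restated as a short, executable sketch. The Python below treats each attempt's failure as an independent coin flip with probability p_fail, which is a simplification of the real channel; W_MIN = 16 and W_MAX = 64 follow the zero-th UP window of Table 2, while the retry limit value is illustrative:

```python
import random

W_MIN, W_MAX, RETRY_LIMIT = 16, 64, 7     # zero-th UP windows; retry limit assumed

def packet_attempt(p_fail: float):
    """Run one packet through IEEE802.15.6-style CSMA/CA backoff.

    Returns (delivered, idle_slots_waited). p_fail is the per-attempt
    failure probability, assumed constant and independent (a simplification).
    """
    W, failures, waited = W_MIN, 0, 0
    while failures <= RETRY_LIMIT:
        waited += random.randint(1, W)    # backoff counter, one tick per idle slot
        if random.random() >= p_fail:
            return True, waited           # ACK received; W resets to W_MIN next packet
        failures += 1
        if failures % 2 == 0:             # W doubles on even-numbered failures...
            W = min(2 * W, W_MAX)         # ...but never beyond W_MAX
    return False, waited                  # retry limit exceeded: packet dropped

results = [packet_attempt(0.3) for _ in range(10_000)]
drop_rate = sum(not ok for ok, _ in results) / len(results)
print(f"drop rate ~ {drop_rate:.5f} (expected ~0.3**8 = {0.3**8:.1e})")
```

Running it also makes the cost of contention tangible: the average of the waited values grows with p_fail, anticipating the delay analysis below.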
Discrete Time Markov Chain

At first, the performance of scheme 1 is analyzed. Scheme 1 consists of a single base station, Wireless LAN 2, and n devices in a star topology, D1, D2, ..., Dn (Figure 1). All devices can access Wireless LAN 2 directly; Wireless LAN 1, however, is out of their range. The discrete time Markov chain (DTMC) is proposed to calculate the access probability of each device in every time slot. The proposed DTMC of device i with an empty state is described in Figure 4, and the notation used in this section is listed in Table 3. The packet arrival rate is assumed to be the same for all devices and is denoted by λ. The one-step transition probabilities among the backoff states and the empty state follow directly from the CSMA/CA operation described above, and the stationary distribution can be calculated by using these state transition probabilities. From the resulting equations, b_{i,0,0} can be described as a function of P_{idle,i}, P_{fail,i}, ρ, W_k and T_i. Furthermore, the access probabilities of all devices can be calculated by solving the n equations.

System Throughput

Denote by τ_i the access probability of device i obtained from the DTMC. The probability that at least one device is sending a packet is called the transmission probability, P_tran:

P_tran = 1 − Π_{i=1..n} (1 − τ_i).

The success probability of device i, P_{suc,i}, is the probability that only device i is transmitting on the medium, conditioned on the fact that at least one device is transmitting (in addition, the coordinator must decode the packet correctly):

P_{suc,i} = τ_i Π_{j≠i} (1 − τ_j) / P_tran.

Finally, the throughput of device i, Thro_i, follows from P_{suc,i}, P_tran and the payload transmission time, and the system throughput becomes

Thro = Σ_{i=1..n} Thro_i.   (12)

The throughput of scheme 2 is also represented by (12). However, several devices access the channel via the relay and Wireless LAN 1; therefore, the concentration at Wireless LAN 2 is avoided and the success probability of all devices increases. As a result, the throughput of the system is expected to increase.
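The probability expressions above can be checked numerically. A minimal Python sketch, in which the access probabilities tau are hypothetical stand-ins for the values obtained by solving the DTMC:

```python
import numpy as np

def system_probabilities(tau: np.ndarray):
    """tau[i] is the access probability of device i (a DTMC output).

    Returns P_tran (at least one device transmits in a slot) and the
    per-device conditional success probabilities P_suc[i].
    """
    idle = np.prod(1.0 - tau)          # probability that nobody transmits
    p_tran = 1.0 - idle
    # only device i transmits: tau_i * prod_{j != i} (1 - tau_j)
    p_only = tau * idle / (1.0 - tau)
    return p_tran, p_only / p_tran

tau = np.full(10, 0.02)                # hypothetical: 10 devices, tau_i = 0.02
p_tran, p_suc = system_probabilities(tau)
print(f"P_tran = {p_tran:.4f}, total success prob. = {p_suc.sum():.4f}")
```

The shortfall of the summed success probabilities from 1 is the conditional collision probability, which grows with the number of devices and their access probabilities, consistent with the numerical evaluation below.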
For scheme 2, the delay due to multiple access at the wireless LAN 1 and the wireless LAN 2 is obtained in the same way as (14). However, the delay introduced by the relay must also be considered. The relay delay is calculated from the total throughput of the devices in the set Q that access the relay and from the relay capability C. The average delay of information data transmitted via the relay is therefore the access delay of (14) plus this relay delay. The delay of scheme 2 is the maximum of the delays of the information data transmitted to the wireless LAN 1 and to the wireless LAN 2.

Bandwidth Efficiency
In order to compare the system with and without the relay, the bandwidth efficiency is adopted. The bandwidth efficiency of schemes 1 and 2 is calculated as the ratio of the total system throughput to the total data generated by the n devices. Notice that the total throughputs of schemes 1 and 2 are different.

Numerical Evaluation
The system model is the same as described above and the parameters in Table 1 are used. The average distances between the devices and the wireless LAN 1 and the wireless LAN 2 are 500 m and 250 m, respectively. The relay is placed halfway between the devices and the wireless LAN 1. Propagation delay is taken into account. The capability of the relay is assumed to be 300 Mbps, and a noise-free channel is assumed. First, the performance of scheme 1 is illustrated. The throughput of scheme 1 as a function of λ and of the number of devices is shown in Figure 5 and Figure 6, respectively. The generated data is the total data generated at all devices; it is not always successfully transmitted, because of collisions and timeouts. Therefore, the system throughput is considerably smaller than the generated data, especially when the number of devices and/or λ are high. Moreover, the delay of scheme 1 also increases as the number of devices and/or λ increase (Figure 7). These observations are the reason scheme 2 is taken into consideration, as described in Section 1.2. The delay and the bandwidth efficiency of schemes 1 and 2 are compared in Figure 8 and Figure 9, respectively, with the number of devices fixed at 10 and 40. For scheme 2, since the concentration of traffic at the wireless LAN 2 is avoided, the collision probability decreases. Therefore, the system throughput and the bandwidth efficiency of scheme 2 increase and become higher than those of scheme 1, while its delay becomes lower, especially when the number of devices and/or λ are large. When the number of devices and λ are low, the difference between schemes 1 and 2 is small. Notice, however, that scheme 2 is more complicated because of the added relay and the need to control which network each device transmits to; there is thus a trade-off between performance and complexity in scheme 2. A simplified numerical sketch of this comparison is given below, after the figure and table captions.

Conclusions
A wireless system in a hospital has been considered and its performance has been analyzed based on the IEEE 802.15.6 standard. A DTMC method was proposed to calculate the access probability, from which the collision probability, the success probability, the throughput, the delay and the bandwidth efficiency of the system were theoretically derived. The performance of the system with and without the relay was also compared numerically: the bandwidth efficiency of scheme 2 is higher and its delay is smaller than those of scheme 1 when the number of devices and/or the packet arrival rate are large. However, scheme 2 is more complicated because of the control of device transmissions and the addition of the relay.
The devices were assumed to transmit their information data to either the wireless LAN 1 or the wireless LAN 2; however, the control method for this choice has not been explained in detail. Moreover, only CSMA/CA was adopted, and other access protocols [15] were not taken into account. We leave these issues to our future work.

Once a transmission is successful, the device receives an ACK packet with no payload from the coordinator, whereas it receives a NACK packet, or nothing at all, after the ACK timing if the transmitted packet collided or could not be decoded. Consequently, the duration of a successful packet transmission, T, is assumed to equal the duration of a failed transmission; hereafter T is called the successful transmission time. The successful transmission time is the total time needed to transmit a packet, including the time to transmit the data packet itself.

Figure 1. The wireless LAN system in a hospital.
Figure 3. An example of the operation of CSMA/CA and the relationships among the time durations.
Figure 5. Throughput of scheme 1 versus λ.
Figure 6. Throughput of scheme 1 versus the number of devices.
Table 1. Parameters of the PHY layer.
Table 2. Contention window for every UP.
Table 3 (notation): λ — packet arrival rate during a unit time; ρ — packet arrival rate during a slot time; m — packet retry limit; n — total number of devices; W_k — contention window of the k-th backoff stage.
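As a rough numerical companion to the scheme 1 vs. scheme 2 comparison discussed above, the self-contained Python sketch below reuses the same simplified throughput model as the earlier sketch (repeated here so it runs on its own) and computes the bandwidth efficiency as the ratio of delivered throughput to generated data. Splitting the devices evenly between the two access networks is an assumption standing in for the control method that the paper leaves open, and all parameter values are illustrative rather than those behind Figures 5-9.

```python
# Illustrative comparison of scheme 1 (all devices contend on wireless LAN 2)
# and scheme 2 (devices split evenly between the relay/LAN 1 and LAN 2).

CW_MIN, CW_MAX, RETRY_LIMIT_M = 16, 64, 7
PAYLOAD_BITS, SLOT_S, T_SUCCESS_S = 2040, 145e-6, 1.2e-3


def throughput(n):
    """Total saturation throughput (bits/s) of n contending devices."""
    tau = 0.1
    for _ in range(500):
        p_fail = 1.0 - (1.0 - tau) ** (n - 1)
        w, mean_bo, reach = CW_MIN, 0.0, 1.0
        for k in range(RETRY_LIMIT_M + 1):
            mean_bo += reach * (w + 1) / 2.0
            reach *= p_fail
            if (k + 1) % 2 == 0:
                w = min(2 * w, CW_MAX)
        tau = 0.5 * tau + 0.5 / mean_bo          # damped fixed-point update
    p_tran = 1.0 - (1.0 - tau) ** n
    p_suc = n * tau * (1.0 - tau) ** (n - 1) / p_tran
    slot_avg = (1.0 - p_tran) * SLOT_S + p_tran * T_SUCCESS_S
    return p_tran * p_suc * PAYLOAD_BITS / slot_avg


def bandwidth_efficiency(thro_total, n, lam):
    """Ratio of delivered throughput to the total data generated by n devices."""
    return thro_total / (n * lam * PAYLOAD_BITS)


if __name__ == "__main__":
    n, lam = 40, 50.0                            # 40 devices, 50 packets/s each (assumed)
    thro1 = throughput(n)                        # scheme 1: everyone on LAN 2
    thro2 = 2 * throughput(n // 2)               # scheme 2: load split across two networks
    print(f"scheme 1: {thro1/1e3:7.0f} kb/s  eff={bandwidth_efficiency(thro1, n, lam):.2f}")
    print(f"scheme 2: {thro2/1e3:7.0f} kb/s  eff={bandwidth_efficiency(thro2, n, lam):.2f}")
```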
3,540.6
2015-01-29T00:00:00.000
[ "Computer Science", "Engineering" ]
The Big Bang theory: two fatal flaws The cosmic microwave background radiation is routinely cited as evidence for a hot Big Bang. Its isotropy harmonizes with the cosmological principle. However, in prototypical Big Bang models, all matter originates from a primeval fireball that also emits the light that is redshifted into these microwaves. Since light escapes from its source faster than matter can move, it would need to return for it to still be visible to material observers, but the universe is considered ‘flat’ and non -reflective. This prevents us from observing the redshifted glow of the primeval fireball. Like its observability, its homogeneity would also be transient. This is concealed by considering the light to expand with the ‘Hubble flow’ while disregarding that it escapes at c . This blunder reflects the practice of treating model universes in General Relativity as filled with a spatially homogeneous fluid. For radiation, this becomes inappropriate when it is no longer scattered. What we actually observe remains unexplained. Moreover, the calculation of line-of-sight distances allows an expanding view into a large pre-existing universe. For other aspects, the universe is assumed to have been smaller before. This creates contradictions such as between the observed source of the cosmic microwaves and their much smaller and closer assumed emitting source. The criticism expressed here goes against the ‘ hard core ’ of an established research program. Those cores are treated as inviolable, which blocks fundamental progress. Such blockage can persist for generations even if the theory that is promulgated as the best we have is actually irrational. Introduction In the physical cosmology that established itself in the 20 th century and that presupposes Einstein's general theory of relativity, the universe originated and expanded in a 'big bang' from a very dense, hot, and opaque initial state [1][2][3]. The universe became transparent after it had expanded for 380 000 years and thereby cooled to about 3000 K. The light waves that were emitted from the "primeval" or "primordial" fireball at this stage of decoupling and 'recombination', when electrons and protons formed electrically neutral atoms, mostly hydrogen, were then further stretched by the continued expansion. They are now, 13.8 billion years later, about 1100 times longer. In a confined space that slowly expands by this factor in each dimension, blackbody radiation will cool by the inverse factor, from 3000 to 2.7 K. This is thought to have happened because the cosmic microwave radiation that was accidentally discovered by Penzias and Wilson [4] is blackbody radiation with this temperature. It is commonly referred to as the "cosmic microwave background" (CMB). The cosmological principle, which implies that the universe at large scales should be homogeneous and, to stationary observers, isotropic, is compatible with these observations. The practice of modelling the universe in General Relativity by a spatially homogeneous fluid that expands with the "Hubble flow" and represents radiation as well as matter is also in line with this, but we shall see that homogeneity actually cannot be maintained under Big Bang conditions, which imply that the universe was substantially smaller in its distant past. My label "Big Bang" refers to an assumed occurrence that is still going on (and accelerating). It neither refers just to its onset nor just to the plasma state of the universe before recombination and last scattering. 
It has long been known that standard cosmology suffers from several serious problems [5]. It has in its development become dependent on an increasing number of free parameters [6], each of which is symptomatic of a lack of understanding. Some of them involve hypothetical constituents and processes such as cold dark matter (CDM), dark energy (Λ), and cosmic inflation. These have often been criticized [7,8], also by this author [9], for their fictitiousness or bare conventionality [10]. The standard (concordance) ΛCDM model, nominally a Big Bang cosmology, remains dominant nevertheless. It is promulgated as the best theory we have. In the following, it will be shown that standard cosmology, as traditionally taught, involves contradictory basic assumptions in different models that are used to handle different aspects. This results in faulty reasoning, which can be obscured by the superficial generality of the invoked principles and by committing yet another fallacy. The present article is only concerned with such faulty reasoning -neither with the more often disputed dark sector of standard cosmology nor with free parameters or any independent disagreements between predictions and observations [8]. 2 The first fatal flaw 2.1 Symptom: the primeval fireball delusion In prototypical Big Bang models, the radiation we observe as the CMB is thought to be emitted from the primeval fireball and its abstract "surface of last scattering". However, it requires particular conditions for this to become observable in an expanding universe in which all matter shares its region of origin with this radiation. Since electromagnetic radiation propagates faster than matter can move, it should have caught up and passed every matter by now. If observers (constituted by matter) still see it now, it must have been reflected back or returned on a curved path. A curved return path is under certain conditions possible in positively curved universes, which can be pictured by the surface of an inflating balloon if one dimension is abstracted away. However, in standard cosmology, as conceived in the early 21 st century, the universe at large is not curved like this. It is rather close to 'flat' (Euclidean) [11], and it lacks a reflective boundary surface. In a flat universe, the radiation from the primeval fireball escapes altogether from its region of origin when enough time has passed for the light to cross this region. This should have happened long ago and would have been followed by a 'dark age', which persisted as long as stars had not yet formed. In Fig. 1, model A, the radiation originated within the small red disk and fills now the golden ring. In the spacetime diagram, Fig. 2, it is last scattered at the central red dash and propagates within the golden V-shaped band, whose off-vertical slope represents the light speed c [one lightyear (x-axis) per year (y-axis)]. This precludes that we could still observe the cooled glow of a primeval fireball. Its observability is a mere delusion, in the sense of 'a belief or impression maintained despite being contradicted by rational argument'. It is not contradicted by reality. We actually see a CMB, but it must have a different origin. A prototypical Big Bang model offers no explanation for it. Since we are not located within the golden V-shaped area in Fig. 2, but at the peak of the blue Λ-shaped line, there is no way for radiation that leaves the last scattering surface at c to still appear to us directly. 
However, as soon as the CMB had been detected [4], Dicke, Peebles, Roll, and Wilkinson [12] were quick to suggest its origin in the glow of the primeval fireball. Like Alpher and Herman [13,14], who previously had predicted a background radiation with a temperature of about 5 K, they had a prior belief in the spatial homogeneity of the whole universe. In keeping with this, the CMB should look the same and be observable everywhere. The fact that the observations appear to corroborate this reasonable belief may have prevented researchers from inquiring under which conditions the glow of a primeval fireball is actually predicted to be observable in various Big Bang models. This inquiry may be less likely to be made if the radiation source is still referred to as a "fireball" when its temperature is said to have fallen from 3000 K all the way down to the 2.7 K of the CMB, as in [15] by Wilkinson and Peebles, but this has rarely been imitated. with successive modifications. Section through a spherical universe shown to scale in comoving coordinates, in which the Hubble flow expansion of the universe is factored out [16]. A. Prototypical model Center: original singularity and our approximate spatial location. Most matter is still nearby. Surface of small red ball, radius 1 Gly: last scattering surface (LSS). Blue ball (with small red ball inside), radius 23.3 Gly: region where now received radiation could have its origin. Golden balloon, radius 46 Gly, thickness 2 Gly: region where radiation from the LSS is now directly observable. This model provides no explanation for the actually observed CMB. It allows expanded radiation from a past epoch to fill the universe only during a limited epoch. An increasing share of the universe will be free from it. For matter, this holds with a different distribution. The standard model is a combination of the incompatible models A, B and C. B. Relic radiation model In this model, matter and radiation are considered to comove with the Hubble flow, but the evolution of the CMB is calculated (section 2.5 in [3]) as if the radiation did not propagate any further. Small red ball: expanding region in which radiation from the LSS remains observable ever since release. In comoving coordinates, this region has its expanded size to begin with. C. Expanding view model This describes an expanding view into regions that transcend those within which any now observable radiation must have originated in models A and B (the red-blue ball). Center: our approximate spatial location, still as in models A and B. There is, then, a drastic discrepancy between the locations of the observed and the emitting source. Consistency requires the radiation source to be one and the same. Golden V: rays from the last scattering surface (LSS, the red horizontal dash close to zero distance). The LSS is directly observable from positions within the golden band, which represents all future light cones of the LSS. We are not within this band but at the peak of the blue Λ. Silver I: The Hubble flow through a region with the comoving diameter of the LSS. Bulk matter with negligible peculiar motion remains within this region. The traditional calculation of the CMB properties erroneously presupposes this also for radiation. This is the relic radiation blunder. Blue Λ: This represents our past light cone and connects us with everything we can now see straight on. In a cosmogonic Big Bang universe, the region below the golden V has not come into existence. 
The dotting of the lines in this region is meant to remind of this. In standard cosmology, the galaxy GN-z11 is placed in this region nevertheless, actually in an expanding view model. The radiation from the LSS is supposed to be observable where the dotted Λ crosses the dotted red horizontal that indicates the time of last scattering. This is at present at a comoving distance of about 46 Gly in any direction. Red dotted horizontal: The time of last scattering, in the expanding view model also its place. The existence of the CMB is routinely cited as evidence for a hot Big Bang, even as the strongest piece of evidence for it. This contrasts sharply with the preceding considerations, which clearly show the opposite to be the case: The observability of the CMB constitutes evidence against its supposed emission from a surface of last scattering in a formerly less extended universe. As I tried to communicate previously [17], it would not be observable if it had been emitted there. However, it is believed to have been emitted there if the "relic radiation blunder" is committed, which is described in subsection 2.3. The primeval fireball delusion can be considered a clear symptom, manifestation, or consequence of this blunder. Homogeneity loss in a Big Bang universe Within physical cosmology and CMB research, it has long been taken for granted that the universe at large remains homogeneously filled with matter and radiation. This assumption is a simplistic idealization of the cosmological principle. It is convenient because it makes it practicable to apply General Relativity to the universe as a whole. For matter in hypothetical universes, it can be traced back to Einstein (1918) [18]. However, it is well known that the cosmological principle, i.e., the 'perfect cosmological principle' cannot hold over time in a Big Bang cosmology, which is more recent than Einstein's universe [18]. From astronomy, it is further known that the distribution of matter in space is far from homogeneous. It is rather fractal in a sense [19], although the cosmological principle may still remain tenable at the very largest scales. The homogeneity assumption was also applied to radiation when the conditions in the early stages of an expanding universe were considered [13]. When, more recently, the actual presence of highly isotropic background radiation was noticed [4], this was taken to mean that radiation that fills the universe homogeneously remains present over time. Subsequently, one may be tempted to believe that the observed background radiation has its origin in the glow of the primeval fireball. However, this cannot be so in a flat expanding universe, but this went unnoticed or at least untold. A flat Big Bang universe is incompatible with the cosmological principle even if variation over time is allowed. In such a universe, radiation that is no longer scattered cannot fail to separate ever more (as in the golden band of Fig. 2) from its material content (primarily in the silver band). Even matter with a higher speed of peculiar motion will increasingly distance itself from matter with a lower speed. Neither matter nor radiation would thus remain homogeneously distributed. Large-scale homogeneities would be transient and shell-bound at best. Hence, one has to reject the idea of a Big Bang if the cosmological principle is to be kept. In all models that use the Friedmann-Lemaître-Robertson-Walker metric or the ΛCDM model, large-scale homogeneity of the universe is postulated to begin with. 
The impressive observable near-isotropy of the CMB (attributed to its homogeneity) was puzzling nevertheless, because there are limits to communication between different regions in an expanding universe and communication appears necessary for homogeneity to be maintained. There is a theory, the cosmological inflation theory (with several variants), which, among other things, is supposed to handle this. This theory postulates an otherwise unphysical process of expansion at a superluminal speed. This process is said to end within 10 -32 seconds of absolute cosmic time. Even if this had kept the universe homogeneous until it became transparent, 380 000 years later, the homogeneity of the CMB would anyway have been lost thereafter, and all consistency checks considered in this article are concerned with the circumstances that prevail in standard cosmology then, i.e., after recombination and last scattering. In traditional reasoning, neither the observability of the last scattering surface nor the homogeneity of the radiation from it is thought to be lost, but this is due to the fatal blunder described next. The relic radiation blunder The vertical silver band in Fig. 2 shows a region with the comoving diameter of the last scattering surface. This region contains matter that is now largely gathered in galaxies, but an increasing number of these are now outside the band due to their peculiar motion. In ordinary coordinates, the width of the band grows in proportion to a scale factor a(t) that is set equal to 1 at present and was 1/1100 at the time of last scattering. The diameter of the region expanded from 1.8 Mly to 2 Gly by now; but in comoving coordinates, which are used in both figures, it is already 2 Gly from the beginning and remains constant because a(t) is factored out in these coordinates. The traditional explanation of the CMB and its temperature, section 2.5 in [3], assumes black-body radiation from the last scattering surface to remain within the vertical silver band in Fig. 2. It considers that the radiation expands in proportion to a(t), by the factor of 1100 in all three spatial dimensions. In order for c to remain constant as it must, given the present definition of the meter and the second, time must scale in proportion to length. The density of the radiation scales as 1/a(t) 4 , whereby its blackbody nature is retained. Its temperature T scales as T ∝ 1/a(t), so that Tem = Tobs (1+z) = Tobs/a(tem), as in equations 6.3 and 6.4 in [1]. In its model of 'relic radiation' (also 'relict radiation', 'fossil radiation' or 'comoving radiation'), standard cosmology simply disregards the propagation of light, i.e., the fact that electromagnetic waves move away from their source at c as long as they meet no hindrance. This 'relic radiation blunder' [17] may be obscured to traditionally educated cosmologists by its origin in the practice of considering model universes based on General Relativity to be filled with a homogeneous fluid, in the case of radiation with a diffuse 'photon gas', with photons in random motion and, in [20] and 50 years later in [3] thought of as contained within an imaginary box that expands with the Hubble flow, i.e., with a(t). The disregard of the free streaming propagation of light might be justifiable if the outflow of radiation from a region was always balanced by a compensatory inflow from outside. 
However, while certain model universes may satisfy this condition, a flat Big Bang universe as a whole cannot do this, because it would require contributions from outside itself. Even if the radiation released from the last scattering surface of the primeval fireball can be described as a photon gas with 3000 K, this description becomes invalid after release from the primeval plasma, when the photons and the corresponding electromagnetic waves are no longer scattered but free to escape at c. It has been noted before that free photons do not constitute a thermodynamic system and cannot leave a relict behind [21]. Fortunately, the radiation that reaches us from our local fireball, the Sun, cannot either be correctly thought of as a photon gas. A solar photon gas might keep us comfortable at 300 K, but it would not give rise to any visible light. This would be bad for life. The CMB may still be a residue of some radiation, but it certainly cannot come from a stage at which the universe was much less extended. During the history of Big Bang cosmology, the relic radiation blunder was copied carelessly. It was treated as part of the irrefutable 'hard core' [22] of the cosmological research program, because it seems to follow from prior assumptions whose physical incompatibility failed to be noticed. We have already seen in subsections 2.1 and 2.2 that radiation that is no longer scattered cannot maintain homogeneity and not even observability throughout a universe that grows in size even in comoving coordinates. Radiation from the last scattering surface would only fill the V-shaped golden band in Fig. 2 and remain outside the view of observers located anywhere above it. In Fig. 1, the CMB would now only be observable in the region represented by the golden ring. Since we are not there and still can see a CMB, its presence requires a different explanation, but this is outside the scope of the present study. An equivalent to the relic radiation blunder arises also when neutrinos are considered, but here we do not need to do so.

The second fatal flaw
There is also a geometric contradiction that shows itself most markedly in calculated distances that do not fit into a Big Bang universe. We can call this the transcendent distance blunder if we take a Big Bang model as given. Otherwise, the idea of a Big Bang itself constitutes the blunder. The blue Λ in Fig. 2 represents our past light cone, which connects us, located at its peak, to everything that we can see directly. The line-of-sight comoving distance D_C between us and a radiation source on this light cone is computed [23] by integrating the infinitesimal contributions d_comov(x, y) = d_proper(x, y)/a(t) between nearby events over time from t_em, when the radiation from the source was emitted, to t_obs, when it is observed:

D_C = ∫_{t_em}^{t_obs} c dt / a(t),   (1)

where a(t) = 1/(1+z), and z is the observed redshift. However, this light cone transcends the existence region of the Big Bang universe. Everything below the golden V in Fig. 2 is outside the space within which the Big Bang might have dispersed anything at all. It would require a superluminal speed to bring anything there. This is why the blue Λ and the red horizontal that indicates the time of recombination and last scattering in Fig. 2 have been dotted in this area. Before one can reasonably claim to see anything there, be it a galaxy or the source of the CMB, one has to reject the idea of a formerly smaller universe. There are several galaxies whose observed redshifts z place them in the transcendent region.
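For numerical work, Eq. (1) is usually recast as an integral over redshift, D_C = c ∫_0^z dz′/H(z′). The short Python sketch below evaluates this for a flat ΛCDM background at a few representative redshifts, including z = 11.09 (the galaxy GN-z11 discussed next) and z ≈ 1100 (the last scattering epoch). The cosmological parameters are assumed round values, not numbers taken from this article; with them the integral reproduces the roughly 46 Gly comoving distance quoted in the text.

```python
# D_C = c * integral_0^z dz' / H(z') for a flat LambdaCDM background.
# H0, Omega_m, Omega_r are assumed round values, not figures from the article.

import math

C_KM_S = 299792.458                  # speed of light, km/s
H0 = 67.7                            # km/s/Mpc (assumed)
OMEGA_M, OMEGA_R = 0.31, 9.0e-5
OMEGA_L = 1.0 - OMEGA_M - OMEGA_R    # flatness assumption
GLY_PER_MPC = 3.2616e6 / 1.0e9       # 1 Mpc = 3.2616e6 light-years


def hubble(z):
    """H(z) in km/s/Mpc for a flat LambdaCDM universe."""
    return H0 * math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_R * (1 + z) ** 4 + OMEGA_L)


def comoving_distance_gly(z, steps=200_000):
    """Line-of-sight comoving distance D_C in Gly (midpoint rule)."""
    dz = z / steps
    total = 0.0
    for i in range(steps):
        total += dz / hubble((i + 0.5) * dz)
    return C_KM_S * total * GLY_PER_MPC


if __name__ == "__main__":
    for z in (0.1, 1.0, 11.09, 1100.0):
        print(f"z = {z:8.2f}   D_C ≈ {comoving_distance_gly(z):6.1f} Gly")
```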
For one of these, GN-z11 [24], z has been reported to be 11.09. The authors wisely did not publish an explicit distance measure for it, but if one assumes that this z is the cumulative effect over time of an can be seen in Fig. 2. [16] In the standard approach to cosmology, the idea of a Big Bang is, nevertheless, retained in a model that is already marred by the relic radiation blunder (model B to Fig. 1). When it comes to considering line-ofsight distances, which can be based on redshift or luminosity, the Big Bang model is silently replaced by a model that presupposes an expanding view -a transcendentally expanding view (model 5 in [17]). In this model, time appears now to have arisen 13.8 Gy ago, while the universe immediately after inflation already had at least the comoving spatial extension that it has at present in the prototypical Big Bang model. The first radiation sources that became visible in this universe were all cosmically nearby. As time passed on, the span of distances at which sources could be seen became successively wider. This span increased at c, so that radiation emitted during the last scattering epoch was observable ever since. It is now observable where the dotted blue and red lines intersect in Fig. 2, at DC ≈ 46 Gly -not far from the present comoving radius of the Big Bang universe (in the golden ring of Fig. 1, model C). There is no deliberate reflection behind the expanding view model. Therefore, it is not surprising that no name had been attached to it before. In a Big Bang model, Eq. (1) holds approximately for small values of z. If extrapolated without an upper limit for z, the model turns without further action into a radically different expanding view model. The spatial location of GN-z11, shown in Fig. 2, is compatible with an expanding view model, which allows the galaxy to have been close to its calculated spatial distance already at the apparent onset of time. However, the highly problematic nature of a time onset or its equivalent in a process of cosmological inflation is rarely ever discussed in more than a narrow selection of its aspects. In Fig. 1, the golden shell in which the observable source of the CMB appears to be located is very remote from the fireball represented by the small red disk, i.e., from the region from which the radiation is said to have been emitted. The observable source is also much larger than the emitting one. In ordinary units, the surface area of the former is more than a million (1100 2 ) times larger than that of the latter. However, consistency requires these sources to be identically the same. In investigations of the CMB, its apparent source is routinely treated as if it represented its emitting source in an expanding universe, but this actually involves committing a transcendent distance blunder -not only a relic radiation blunder. By putting an expanding view model over an expanding universe model, it is, in fact, taught that the universe was at least as large as it is now, or even infinite, when it was much younger and smaller than now, or even arose out of a point-like singularity. Although it is extremely conspicuous, this contradiction is rarely paid attention to. Liddle [2], p. 82, appears to have expressed it unintentionally -its contrariety remained in any case uncommented: "Since decoupling happened when the Universe was only about one thousandth of its present size, and the photons have been travelling uninterrupted since then, they come from a considerable distance away. 
Indeed, a distance close to the size of the observable Universe." The first part of this quotation, "Since decoupling happened when the Universe was only about one thousandth of its present size" presupposes a formerly smaller expanding universe, while the remainder "and the photons … come from … a distance close to the size of the observable Universe" presupposes a transcendentally expanding view into a universe that had already its present size when the radiation was emitted. (In comoving coordinates, as in Fig. 1 and 2, the discrepancy is less extreme. In these, one could equivalently say that the universe was about one-fiftieth of its present size when decoupling happened.) In other cases, there is a size discrepancy by a factor of two. A cosmogonic expanding universe model, in which the extension of space is limited to and above the golden V in Fig. 2, allows at present for rays with a maximum comoving length of about 23 Gly, i.e., from no farther than the blue sphere in Fig. 1. At any given time, the expanding view model allows for rays that are twice as long. The size of the observable universe is commonly defined on this basis and so given a radius of 46 Gly. Thereby, the spatial limitation of the model is removed altogether -only the temporal one remains. Instead of a cosmogonic model, we then get a merely "chronogonic" model in which there is no primeval fireball and no surface of last scattering -only a time of last scattering that is valid everywhere in a much larger preexisting universe and whose absoluteness defies relativity. The first one is the disregard of the radiant nature of light (section 2, esp. 2.3) , which can make cosmologists believe that we still can see the light from the primeval fireball, although outsiders understand that the light from this source must have passed our place and become invisible long ago if we consist of matter from the same fireball. The second one (section 3) arises from failing to notice that a line-of-sight distance between us and a radiation source is a distance in a universe whose observable spatial extension increases the further back we look in time. This transcends the space of a formerly less extended Big Bang universe. Each of these flaws requires a rejection of the Big Bang idea. Although they appear conspicuous to attentive unindoctrinated outsiders, most experts in the field, even critical ones, failed to take notice of these flaws. Some who noticed that the idea of a Big Bang is not always convenient use to say that it should not be taken literally. In their view, the universe was always rather large, perhaps infinite, has no unique center [2] are not sufficient to elicit the insight that there is a conflict if spatial extensions are also considered. While, in the absence of independent confirmation, fudge factors such as dark energy and exotic dark matter remain hypothetical excuses for observations that do not fit, they still give the reasoning the status of rational speculation. This quality level is not reached if blunders and contradictions like those revealed here occur. These make the reasoning irrational and thereby entirely untenable, even as a speculation. The criticism expressed here goes sharply against what Kuhn [26] called "normal science" and Lakatos [22] the "hard core" of a research program. This core consists of those tenets of established theories that are taken for granted by the members of the respective research community (the insiders). 
Kuhn and Lakatos [26,22] noticed half a century ago that these cores are treated as inviolable. It is permissible to question the completeness of an established theory, but any really fundamental progress is blocked in fields in which a single paradigm dominates. Physical cosmology demonstrates that such blockage can persist for generations even if the theory that is praised as the best we have is actually irrational. Although scientific journals often publish articles on speculative modifications not only of mainstream doctrines, articles that discredit the hard core in the respective research program run a very high risk of being rejected right away by the editors of reputable and trusted journals and, if not, then by referees established in the field. These can easily notice deviations from orthodoxy and insider practice. The willpower required for evaluating an outsider's reasoning is rarely present. It is kept low by the experience that unconventional approaches are more often substandard than excellent and by conformity bias. Together with the similarly biased attitude by most teachers and grant providers, this leads to the tenacious perseverance of traditional deficiencies in science. Contrary to expectations expressed in a recent analysis of peer review behavior [27], the open review procedure adopted by Qeios seems to have scared off the most wanted referees: all 10 invited reviewers of version 1 abstained while 4 spontaneous ones rejected my reasoning without appraising any of its points and without pointing out any other fault in it than the openly stated fact that it contradicts a firmly established research program, which the establishment at worst allows to be labeled as incomplete, no matter how absurd it actually is. It is contradictorily overcomplete.
6,869.8
2023-05-11T00:00:00.000
[ "Physics" ]
Detection of Benign and Malignant Tumors in Skin Empowered with Transfer Learning Skin cancer is a major type of cancer with rapidly increasing victims all over the world. It is very much important to detect skin cancer in the early stages. Computer-developed diagnosis systems helped the physicians to diagnose disease, which allows appropriate treatment and increases the survival ratio of patients. In the proposed system, the classification problem of skin disease is tackled. An automated and reliable system for the classification of malignant and benign tumors is developed. In this system, a customized pretrained Deep Convolutional Neural Network (DCNN) is implemented. The pretrained AlexNet model is customized by replacing the last layers according to the proposed system problem. The softmax layer is modified according to binary classification detection. The proposed system model is well trained on malignant and benign tumors skin cancer dataset of 1920 images, where each class contains 960 images. After good training, the proposed system model is validated on 480 images, where the size of images of each class is 240. The proposed system model is analyzed using the following parameters: accuracy, sensitivity, specificity, Positive Predicted Values (PPV), Negative Predicted Value (NPV), False Positive Ratio (FPR), False Negative Ratio (FNR), Likelihood Ratio Positive (LRP), and Likelihood Ratio Negative (LRN). The accuracy achieved through the proposed system model is 87.1%, which is higher than traditional methods of classification. Introduction Cancer is the most commonly known, developing, and most dangerous disease all over the world [1]. Skin cancer is one of the types of cancer. According to the present measures of the World Health Organization (WHO), 2 to 3 million nonmelanoma and 132000 melanoma skin cancer cases turn out globally each year. Out of three diagnosed, one is skin cancer according to the reports of Skin Cancer Foundation Statistics (SCFS) [2]. Skin cancer emerges from the skin. It is an unusual growth of skin cells. ese skin cells can invade the other body parts' cells. Maximum cases come out due to the Ultraviolet (UV) rays of the sun. But cases also come out on areas of body parts which are not ordinarily exposed to sunlight. Skin cancer is further divided into three main types: basal cells carcinoma (BCC), squamous cell carcinoma (SCC), and melanoma [3]. BCC is caused by damaged cells and causes changes in DNA, the basal cells of the outer layer of skin [4]. SCC of skin is caused by exposing skin to UV radiation for a longer duration. is radiation may be from sunlight or from lamps or any other such type of source [5]. Melanoma can spread from any part of our body. is disease attacks our normal skin and makes it cancerous. Melanoma mostly appears on the face or the trunk of affected men. In women, melanoma mostly appears on lower body parts like legs. Mostly, it spreads out on those parts of the body that are not directly exposed to sunlight [6]. In the detection of malignant tumors, many challenges have been faced. e factors such as images of different shapes and sizes, presence of noise in images, tumors irregular boundaries, and similarities with neighbor's tumors confused right identifications. For this purpose, image analysis methods were followed [7]. Computer-Aided Systems (CADs) identified border detection and extract required features. In CADs, different classification algorithms that automatically classified lesions into their relevant class are used [8]. 
One of them is AlexNet. It is a modest and strong CNN model consisting of convolutional and pooling layers and some fully connected layers [7]. In the proposed system, the problem is to classify benign and malignant tumors of the skin. Malignant tumors are severe types of tumors that grow and spread uncontrollably. ese are called cancerous. While benign tumors remain there where they appear, they do not spread across other areas of the body. ey are not much more problematic. e detection of skin cancer in starting days can be treated well and chances of recovery are high. rough novel techniques, this is possible [9]. e novel approaches of deep learning methods are mostly applied not only for skin cancer but also for other varieties of cancers like breast cancer, brain cancer, lungs cancer, and prostate cancer. In the literature, more algorithms for skin cancer images classification are proposed. But this is still a very challenging task. e problem must be addressed in ways as follows: skin cancer images boundaries might have full contour with maximum curves and small angles. It is still a question that how many images of skin cancer are needed to be trained and analyzed, partially answered [10], and still, it is an issue. In the proposed system model, a large dataset is used to handle this problem. rough this, we achieved maximum accuracy. It is relatively previous work better to deal with this problem. In this article, image base dataset of benign tumors and malignant tumors is used. A customized AlexNet model is applied. e AlexNet model is customized according to the existing problem for achieving high results. e customized existing AlexNet model consists of a total of 25 layers. ere are 5 convolutional layers in this model. e initial layers are fixed and already trained. e last three layers are customized according to the proposed system output classes. It is a binary classification problem. So, the softmax layer is changed and set labeled according to output classes. e customized AlexNet layers are trained according to the dataset which is used in this proposed system. e input parameters for fully connected layers are the size of output classes which is of binary classification. Softmax layers applied softmax functions on providing input data. Fully connected layers are modified according to our classes' particular features. After that, these fully connected layers are able to train the model according to the classes' specific features. e rest of the paper is set as follows. In Section 2, related work is described, Section 3 describes the material and methods which are used for this prediction, Section 4 comprises simulation and discussion of the result of the paper, and Section 5 is about the conclusion. Literature Review In the literature review, most of the representative approaches were used for skin cancer images classification. Diagnosis of any disease is very important for further proceeding and treatment. e same is in the case for skin disease; it is a challenging task for researchers to diagnose disease in its early stages. Different researchers applied different approaches to diagnose skin disease. ese approaches include the following. Garg et al. proposed a system using Convolutional Neural Networks (CNNs) [11]. Masood and Al-Jumaily proposed a methodology that is tested on skin cancer datasets but it can be used and tested in any area where scarcity of labeled data is an issue. eir research demonstrates using data that is not a label for training the algorithm. 
For this purpose, they use a partially supervised method. e proposed research achieves 86.5% accuracy [12]. Mhaske and APhalke presented that Support Vector Machine (SVM) performs best rather than K-Mean's clustering and Neural Network (NN) for melanoma skin cancer and achieves 80% accuracy [13]. Fidan et al. did work on the ph2 dataset and presented that abnormal and melanoma skin cancer with NN and decision support system would help dermatologists in diagnosing skin lesions [14]. Amirreza and his fellow researchers developed a hybrid Deep Neural Network (DNN) for the classification of skin lesions. ey used pretrained AlexNet, VGG16, and ResNet-18 for feature generators. After that, the SVM classifier was applied on 150 validation images, and for melanoma, 83.83% accuracy was achieved [15]. Rehman et al. presented work that consists of CNN for feature extraction, and after that, they used an ANN classifier for the detection of malignant lesions [16]. Monisha et al. presented ABCD dermoscopy for malignant recognition using backpropagation NN to rearrange harmful stage [17]. Albahar proposed a system using deep CNN applying a novel regularizer technique [18]. Jain presented work using Probabilistic Neural Network (PNN) classification for malignant lesions detection [19]. Maurya et al. presented that they used Grey Level Cooccurrence Matrices (GLCMs) for feature extraction, and after that, multiclass SVM applies and achieves 81.43% accuracy [20]. Pomponiu et al. presented DNN for feature extraction automatically and perform classification of skin lesions for malignancy on clinical dataset [21]. Esteva et al. presented using single CNNs train on 129,450 clinical images for versus benign and malignant melanomas versus [22]. In [23], a dataset with 19398 images was used for skin diseases classification, and the authors used an eight-layer CNN model on a dataset that has 900 images. Jianfeng He et al. constructed an 8-layer CNN model using a dataset containing 600 images for testing the model. Seifedine Kadry 2 Computational Intelligence and Neuroscience et al. proposed a system for the assessment of Skin Melanoma (SM) using a CNN-based approach. Using the VGG-SegNet scheme firstly, they extract the SM part From the Dermoscopy image. After that, the proposed technique was validated using the ISIC2016 database [24]. Attique et al. proposed a system in which they used a segmented RGB images dataset, which they later passed through the Den-seNet model, extracting features. For this purpose, average pool and fully connected layers are applied. Later on, the combined result is forwarded to the feature selection block for downsampling using the proposed entropy controlled least square SVM. For next, they used three different datasets for validation and then measure the performance of RCNN [25]. Oluwakemi et al. proposed using the SqueezeNet deep learning model to improve the data augmentation model for effective detection of melanoma skin cancer [26]. Pham et al. [27] expose a comparative study technique through which they reveal that the color and shape of melanoma lesions are useful for classification with benign lesions. In their technique, they apply six classifiers along with seven feature extraction methods, performing data preprocessing by taking two datasets, and revealed that Random Forest is the best classifier with an accuracy of 81.46%. Pai and Giridaran [28] build a system using a VGG-16 customized CNN model, classifying various seven types of skin lesions. 
is system predicts the most probable types of skin lesions from given with 78% accuracy. Emrah and Zengin presented research [29] on the "HAM10000" dataset in which they use the K-Fold Cross Validation technique to distinguish seven different classes for training and testing purposes. After that, they applied VGGNET-16 architecture and obtain 85.62% accuracy. Limitation of Related Work and Contribution In [12], semilabelled data is used, the size of the dataset was less, only 1050 images were used, and an SVM classifier was applied which was unable to gain a maximum score. Handcrafted features were used in their proposed system too. In [15], a less number of images were used in this system to validate the system, only 150 images. So, the malignant class achieved 83.83% accuracy, which is less as compared to the proposed system. A fixed number of weights were used in [15], so maximum accuracy was not achieved. In [30], a less number of images were used to train and after that to validate the system. So, this system without augmentation gained only 80% accuracy which is not up to the mark. In the approach proposed in [20], a less images dataset was used, only 359 images are used in their proposed work, a multiclass SVM algorithm was applied, and the system gained only 81% accuracy which is minimum to the proposed approach. Contrary to work done before, the proposed approach in this paper does not rely on handcrafted features. In the proposed approach, transfer learning intends to apply with Deep Convolutional Neural Network (DCNN) AlexNet pretrained network. Moreover, the proposed system network model is trained on 1920 images; each class contains 960 images. e size of the images was the same in both classes. Firstly, features are extracted through DCNN, and after that, a customized pretrained network is trained. After that, the proposed system network model is validated on 480 images. ese are the specialties of the proposed system model. So, it performs better than earlier approaches. Materials and Methods In this section, materials for the paper and work done on the proposed system are briefly described. 4.1. Dataset. In the proposed system, a publicly available dataset is used taken from the Kaggle repository [29]. It is an images base dataset that consists of two different classes: class 1 is named malignantly segmented and the second one is named benignly segmented. Sample image of class 1 is given in Figure 1 and sample image of class 2 is given in Figure 2. e original dataset consists of a total of 2637 images, the size of each class is imbalanced, and so 2400 images have been selected for the proposed system model where each class comprises 1200. AlexNet has been trained on over a million images and classifies images into 1000 objects categories. In the proposed system, the pretrained AlexNet model is modified according to the problem taken in the proposed system. In the proposed model, the AlexNet model is trained on 1920 images that belong to two classes. Model for Proposed System. e graphical diagram of the proposed system model is given in Figure 3. e proposed model consists of a total of 25 layers. e first layer of the proposed model is the image input layer, and the dimension of the input image is 227 × 227 which is specified for the AlexNet model [15]. e RGB coloring scheme is used for input images. ere are 5 convolutional layers in the proposed model. RelU activation function is used for activation. Normalization and pooling are also done within different convolutions. 
e last three layers are modified according to the proposed system problem, respectively, with 2 fully connected layers against the weights given 2 × 4096 bias added 2 × 1 and tuned. e next is the softmax classification layer that classified input images according to option sets in the trained model. e last layer in the proposed system model is class output cross-entropy with classes benign and malignant. Transfer Learning (Modified AlexNet). One of the most famous techniques in the current era is deep learning, which is used in different fields of life such as the prediction of diseases, transportation, aeronautics, and agriculture. AlexNet is a pretrained convolutional network model. Transfer learning, a process of using a pretrained model, is commonly applied in deep learning applications [30]. Different deep learning pretrained models are used to tackle different types of real-world problems. In the proposed system model, the pretrained AlexNet model of deep learning is used for transfer learning intended for the detection and classification of malignant and benign tumors. Table 1 represents the architecture of pretrained AlexNet which is composed of convolutional layers, pooling layers, and fully connected layers. AlexNet network model is a pretrained CNN network model and has a huge impact on the recently used application of deep learning. is CNN network is modified according to our problem requirement, and then, images were passed to our proposed modified AlexNet transfer learning network model. e last three layers are modified and customized according to our proposed system problem, and these layers are the output classification layer, fully connected layer, and softmax layer. e modified and customized network model is used for transfer learning. Training and Validation Phase. In this phase, initially, pretrained AlexNet model is modified and then trained according to the proposed system problem. Firstly, we divide the dataset with a ratio of 80% and 20%. ere are a total of 2400 images in the dataset which are used in the proposed model. After division, 1920 images are separated for training purposes, and the rest of 480 images would be used in the validation phase. Modified pretrained AlexNet model is trained on different epoch values of 10, 20, and 30, respectively. e learning rate of 0.001 is fixed in all epochs. After training, the phase model is validated on images that are separated already for validation purposes in the proposed system. Data Acquisition Layer. In this layer, the data is acquired on which model has to be trained. ere are a total of 2400 images in this dataset. It is a classification type dataset; classes' names are malignant and benign tumors. e model has to classify the input images into one of the classes: malignant or benign. Data Preprocessing Layer. For further processing, first, we need to process data in such a form that will be more effective for the model. e original dataset is not in such a form that the model will be trained on it directly. Images are not according to the AlexNet requirements. AlexNet model only can be trained on data having a size of 227 × 227 and a coloring scheme of RGB. It is all done using Image Batch Processor. Application Layer. Till now, the data acquisition process is completed and also data is set according to the requirements of the AlexNet model. e AlexNet model is trained on the training dataset which is used in this proposed system. And the results are computed according to the required parameters. 
e model is trained on different epoch values of 10, 20, and 30, respectively, in the proposed system. Performance Evaluation Layer. e pretrained AlexNet model is trained in the training phase, and performance is evaluated in the validation phase. e performance of the model is checked through performance metrics. e results produced by the proposed pretrained AlexNet model are examined by using different evaluation metrics. e 1920 images with a ratio of 80% are taken for training purposes, and the rest of 480 images with a ratio of 20% are taken for system validation. Computational Intelligence and Neuroscience metrics act as a tool to evaluate classification models and are used in measuring the performance of the predictive model. Results and Discussion e developed proposed system model uses a pretrained AlexNet model for the detection and classification of malignant and benign tumors. Some changes are made in this pretrained AlexNet model according to the proposed system problem. And further, this proposed system model is divided into two layers: training and testing. e proposed system model is trained before and after that validate on separated testing data. As mentioned earlier, 80% of the dataset is used for training purposes and the rest of the 20% is used to validate the proposed model. e produced results of the proposed model are evaluated using performance evaluation metrics. For performance measurement, performance parameters are used to measure the performance of the proposed system model. e following are performance measuring metrics through which performance is measured: Accuracy, Miss Rate (MR), sensitivity, specificity, Output as Classified Image positive predictive values � TP TP + FN × 100%, false positive rate (FPR) � 1 − specificity, false negative rate (FNR) � 1 − sensitivity, likelihood ratio positive � sensitivity false positive rate × 100%, (9) likelihood ratio negative � false negative rate specificity × 100%. (10) e proposed system model is applied in the validation phase and it classifies malignant and benign tumors into one of the classes. Table 2 represents simulation parameter values. e data is trained on multiple epochs like 10, 20, and 30, and maximum accuracy is gained through the proposed network model on 20 epochs. e training graph on epochs 20 where the system gains high accuracy is shown in Figure 2. Maximum accuracy is verified in the validation phase of all epochs, where the proposed model is validated on testing data. In Table 2 Figure 4. Table 2 represents the comparison of training and after that validation score on different epoch values, respectively, of 10, 20, and 30. Table 2 represents the score of different epoch values where the proposed network model achieved the highest score of 87.1% on an epoch value of 20. Figure 5 shows the labeled images of malignant and benign classes through the proposed system model. Total 8 images are given as input to model 4 from each class: proposed system model 5 of it in benign class while 3 in malignant class. Table 3 expresses the proposed system confusion matrix during the validation phase. Before this, the proposed system model is trained on 1920 images, where each class consists of 960 images. After that, the proposed system model is validated; during validation, 480 images were used, where each class consists of 240 images. For this purpose, a value of 20 epochs was set. From benign class, the proposed system model classifies 192 images as correct while 48 images are classified as incorrect. 
In the malignant class, the proposed system model classifies 226 images correctly and 14 images incorrectly. The performance metrics are then computed using equations (1)-(10). Multiple approaches have been used in the past for the detection of skin cancer, but transfer learning is the novel approach adopted here to detect and differentiate malignant and benign tumors. The proposed methodology achieved a high accuracy score in the detection of skin cancer disease. Thus, the proposed methodology helps medical consultants to identify the disease and its treatment and to restrain its spread. Table 5 compares the proposed system model with previously published approaches. It is observed that the proposed model gives 87.1% accuracy, which is higher than the approaches published earlier. The proposed system model, a DCNN based on transfer learning with a customized pretrained AlexNet model, achieved higher accuracy than the existing published approaches.

Conclusions
The earlier work done for skin cancer disease detection was not accurate enough to classify whether tumors belong to the benign or malignant family. An automated framework is required that can classify tumors as benign or malignant. The proposed system, based on a transfer learning classification model, is able to handle this problem: it detects and classifies the family of tumors accurately. The proposed system model used a pretrained AlexNet, a retrained CNN. The customized pretrained AlexNet model was validated on the validation dataset and achieved 87.1% accuracy at 20 epochs. The proposed system model does not require handcrafted features. It is fast and easily manageable for large datasets too. In future work, well-known skin lesion datasets such as Ph2, MED-NODE, DermIS & DermQuest, ISIC 2017, ISIC 2018, ISIC 2019, and ISIC 2020 will be used with different architectures. The AlexNet model can be made more efficient and accurate by fine-tuning all the convolutional layers and improving the malignant-class accuracy, so that the overall accuracy can be increased. Other pretrained networks can also be explored.

Data Availability
The data used in this paper can be requested from the corresponding author.

Conflicts of Interest
The authors declare that they have no conflicts of interest regarding the publication of this work.
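The confusion matrix reported above (benign: 192 correct / 48 incorrect; malignant: 226 correct / 14 incorrect, out of 240 validation images per class) is enough to reproduce most of the metrics defined in equations (1)-(10). The short Python sketch below does this; treating the malignant class as the positive class is an assumption, since the text does not state which class it uses as positive.

```python
# Reproducing the headline metrics from the reported 480-image validation
# confusion matrix.  "Malignant" is assumed to be the positive class.

tp = 226   # malignant images classified as malignant
fn = 14    # malignant images classified as benign
tn = 192   # benign images classified as benign
fp = 48    # benign images classified as malignant

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)            # recall for the malignant class
specificity = tn / (tn + fp)
ppv         = tp / (tp + fp)            # positive predictive value
npv         = tn / (tn + fn)            # negative predictive value
fpr         = 1 - specificity
fnr         = 1 - sensitivity
lr_positive = sensitivity / fpr
lr_negative = fnr / specificity

print(f"accuracy    = {accuracy:.3f}")  # ~0.871, matching the reported 87.1%
print(f"sensitivity = {sensitivity:.3f}, specificity = {specificity:.3f}")
print(f"PPV = {ppv:.3f}, NPV = {npv:.3f}")
print(f"FPR = {fpr:.3f}, FNR = {fnr:.3f}")
print(f"LR+ = {lr_positive:.3f}, LR- = {lr_negative:.3f}")
```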
Best approximation of functions in generalized Hölder class Here, for the first time, the error estimation of functions g ∈ H_z^(w) and g̃ ∈ H_z^(w) using the TC^1 method of F. S. (Fourier Series) and C. F. S. (Conjugate Fourier Series), respectively, is determined. The results of (Dhakal in Int. Math. Forum 5(35):1729–1735, 2010; Dhakal in Int. J. Eng. Technol. 2(3):1–15, 2013; Kushwaha and Dhakal in Nepal J. Sci. Technol. 14(2):117–122, 2013) become particular cases of our Theorem 2.1. Some important corollaries are also deduced from our main theorems. Our motivation for this work is to consider a more advanced class of functions that admits best approximation by a trigonometric polynomial of degree not more than r. Therefore, in this work, we generalize the results of Kushwaha and Dhakal [3] and Dhakal [1,2]. In fact, we obtain results on the error estimation of a function f ∈ H_z^(w) (z ≥ 1) by the T.C^1 method of F. S. Thus, the results of Kushwaha and Dhakal [3] and Dhakal [1,2] become particular cases of our Theorem 2.1. We also obtain results on the error estimation of the function g̃ ∈ H_z^(w) (z ≥ 1) by the T.C^1 method of C. F. S. Let "T = (a_{r,m}) be an infinite triangular matrix satisfying the conditions of regularity (1) [13]." The sequence-to-sequence transformation

t_r^T := ∑_{m=0}^{r} a_{r,m} s_m = ∑_{m=0}^{r} a_{r,r−m} s_{r−m}   (2)

defines the sequence {t_r^T} of triangular matrix means of the sequence {s_r} generated by the sequence of coefficients (a_{r,m}). If t_r^T → s as r → ∞, then the infinite series ∑_{r=0}^{∞} h_r, or the sequence {s_r}, is summable to s by the triangular matrix (T-) method [14]. "Let

C_r^1 = (s_0 + s_1 + ⋯ + s_r)/(r + 1).   (3)

If C_r^1 → s as r → ∞, then the infinite series ∑_{r=0}^{∞} h_r is summable to s by C^1 means [14]." The T.C^1 means (T-means of the C^1 means) are given by

t_r^{T.C^1} := ∑_{m=0}^{r} a_{r,m} C_m^1.   (4)

If t_r^{T.C^1} → s as r → ∞, then the series ∑_{r=0}^{∞} h_r, or the sequence {s_r}, is summable to s by T.C^1 means. The regularity of the T and C^1 methods implies the regularity of the T.C^1 method. Remark 1 (Example) Consider an infinite series, labelled (5). From its nth partial sums it follows that series (5) is not summable by (C, 1) means. If we take a_{n,k} = 1/(n + 1), then series (5) is also not summable by T means. But series (5) is summable by T.C^1 means. So, the product means are more powerful than the individual means. Note 1 w(l) and v(l) denote "Zygmund moduli of continuity [14]," and the ratio w(l)/v(l) is considered positive and non-decreasing. Remark 4 We do not reproduce here the F. S. and C. F. S., as these trigonometric series are well known and detailed work on them can be found in [14]. We denote the rth partial sum of the F. S. by s_r(g; x). The rth partial sum of the C. F. S.
is defined as "The error estimation of function g is given by where t r is a trigonometric polynomial of degree r [14]. " We write . Main theorems where T = (a r,m ) is an infinite triangular matrix satisfying (1) and w, v are defined as in Note 1 provided class; z ≥ 1 and w(l) v(l) are positive and non-decreasing, then the error estimation ofg by TC 1 means of C. F. S. is where T = (a r,m ) is an infinite triangular matrix satisfying (1), (6) and w, v are defined as in Note 1. Proof This lemma can be proved along the same lines as the proof of Lemma 3.5(iii). Proof of the main theorems 4.1 Proof of Theorem 2.1 Proof Following Titchmarsh [17], s r (g; x) of F. S. is given by dl. dl, Let Then "Using generalized Minkowski's inequality Chui [18], " we get Using Lemmas 3.1 and 3.5(iii), we have Also, using Lemmas 3.2 and 3.5(iii), we get By (9), (10), and (11), we have Again applying Minkowski's inequality, Lemma 3.1, Lemma 3.2, and φ(·, l) z = O(w(l)), we obtain Now, we have Using (12) and (13), we get By the monotonicity of v(l), Since w and v are moduli of continuity such that w(l) v(l) is positive and non-decreasing, therefore . Then From (16) and (17), we get Proof of Theorem 2.2 Proof The integral representation of s r (g; x) is given by dl. Conclusion Approximation by trigonometric polynomials is at the heart of approximation theory. Much of the advances in the theory of trigonometric approximation are due to the periodicity of the functions. The study of error approximation of periodic functions in Lipschitz and Hölder classes has been of great interest among the researchers [1][2][3][4][5][6][7][8][9][10][11], and [12] in recent past. The trigonometric Fourier approximation (TFA) is of great importance due to its wide applications in different branches of engineering such as electronics and communication engineering, electrical and electronics engineering, computer science engineering, etc. Several elegant results on TFA can be found in a monograph [14]. In this paper, we, for the first time, obtain the best approximation of the functions g and g in a generalized Hölder class H (w) r (r ≥ 1) using Matrix-C 1 (T.C 1 ) method of F. S. and C. F. S. respectively. Since, in view of Remark 2, the product summability means H.C 1 , N p C 1 , N p,q C 1 , andN p C 1 are the particular cases of Matrix-C 1 method, so our results also hold for these methods, which are represented in a form of corollaries. In view of Remark 1, it has been shown that (TC 1 ) method is more powerful than the individual T method and C 1 method. Moreover, in view of Remark 5, some previous results (see Sect. 6) become the particular cases of our Theorem 2.1. We also deduce a corollary for the H α,r class (r ≥ 1). Some other studies regarding the modulus of continuity (smoothness) of functions using more generalized functional spaces may be addressed as a future work.
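As a numerical companion to the summability transformations (2)-(4) defined earlier, the following minimal Python sketch (not part of the paper) applies the C^1, T, and T.C^1 means to a sequence of partial sums. The row weights a_{n,k} = 1/(n + 1) are those mentioned in Remark 1; the test sequence of partial sums is an arbitrary illustrative choice.

```python
# Illustrative sketch of the C^1 (Cesàro), T (triangular matrix), and T.C^1
# (product) means from equations (2)-(4). The weights a_{n,k} = 1/(n+1) follow
# Remark 1; the partial-sum sequence below is an arbitrary example.
from fractions import Fraction

def cesaro_means(s):
    """C^1 means: C_r = (s_0 + ... + s_r) / (r + 1)."""
    out, running = [], Fraction(0)
    for r, s_r in enumerate(s):
        running += s_r
        out.append(running / (r + 1))
    return out

def matrix_means(s, a):
    """T means: t_r = sum_{m=0}^{r} a(r, m) * s_m for a triangular matrix a(r, m)."""
    return [sum(a(r, m) * s[m] for m in range(r + 1)) for r in range(len(s))]

def product_means(s, a):
    """T.C^1 means: apply the T transform to the C^1 means of {s_r}."""
    return matrix_means(cesaro_means(s), a)

a = lambda n, k: Fraction(1, n + 1)                      # a_{n,k} = 1/(n+1), as in Remark 1
s = [Fraction((-1) ** r * (r + 1)) for r in range(12)]   # example partial sums

print(cesaro_means(s)[-1], matrix_means(s, a)[-1], product_means(s, a)[-1])
```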
Knowledge management toolkit enhancement for a professional services firm Background: Professional services firms utilise knowledge management tools, for example, IBM and Oracle solutions and toolkits, in their day-to-day client-facing operations. The effectiveness of toolkits must be evaluated to establish their actual value. Objectives: This article evaluates the current toolkit used by the South African client-facing professionals of a global multinational corporation. Method: Pragmatism philosophy was used because of the various perspectives needed to interpret the data. Data were collected from 30 participants who adhered to sample eligibility criteria. An interview was used to collect data to help determine which tools worked well and what had to be improved on. Results: The most value-adding tool was the Experience Tool, whereas the Collaboration Tool ranked the least valuable. The Collaboration Tool showed the most potential to increase its value. The results gave a clear indication of areas of improvement that will enable a professional services firm to strategically position its knowledge management toolkit towards adding value for client engagements. Conclusion: The study contributes towards evaluating the knowledge management toolkit, analysing areas of improvement, and recommending components such as machine learning, online collaboration and other activities that would enhance the knowledge management toolkit. Introduction Service excellence pivots on knowledge work and client engagement (Birkinshaw, Cohen & Stach 2020). Various knowledge management (KM) systems and toolkits are used to improve client engagement and achieve greater competitive advantage (Cerchione et al. 2020; Fransson, Hakanson & Liesch 2011). In this article, conceptualisation of a KM toolkit refers to the essential elements necessary for the successful implementation and management of a KM programme in an organisation. Organisations choose to invest in KM for different reasons, such as timely netting the knowledge of retiring people and sharing knowledge more efficiently for current and future operation (Hetey et al. 2020). Knowledge transfer tools are essential in knowledge-intensive firms (Mazorodze & Buckley 2020). Whatever the reasons for investing in a KM initiative, organisations increasingly recognise that KM is a robust discipline that connects professionals to relevant information, knowledge and the expertise of other professionals. Professional services firms are knowledge-intensive firms that need an efficient KM structure and professionals who know how to manage knowledge (Nordenflycht 2010; Wang & Wang 2012).
Knowledge, goodwill and brand are three of the most important factors contributing to an organisation's value in the marketplace (Andriani et al. 2019;Muras & Hovell 2014). Organisations need to manage employees' knowledge and exploit their expertise and vast collections of explicit knowledge within firms. Jenab, Khoury and Sarfarz (2013:248) state that for an organisation to be competitive and innovative it is necessary to effectively utilise its KM tools. This article evaluates the KM toolkit used by South African client-facing professionals of a global professional services firm, in this article referred to as 'the PSF'. The research problem identifies the gap in the subject field that focuses on evaluating the effectiveness of KM tools and resources in professional services firms. Specifically, there was no evidence in place that proved the effectiveness of the KM toolkit of the PSF. A method of deciphering in which areas the KM toolkit required improvement did not exist and therefore the research aim of this study was to evaluate the effectiveness of KM tools in a professional services firm. In order to evaluate the tools, the objectives were: • to rank the KM tools of a professional services firm • to determine the current and potential value of KM tools of a professional services firm • to identify the value of KM tools from the perspective of client-facing users of a professional services firm. Firstly, the article begins by reviewing the literature of KM toolkits for professional services firms. Secondly, it describes the research design, followed thirdly by analyses of research findings of how the KM toolkit provided solutions for clientfacing professionals and what the gaps were in the KM toolkit. The article concludes with a recommendation of what needs to be implemented to improve the efficiency of the KM toolkit. Overall, the study contributes towards evaluating the PSF's KM toolkit. The analysis of areas of improvement and recommendation of how to enhance the KM toolkit may also benefit other professional services firms' KM initiatives. Knowledge management initiatives of professional services firms Organisations that have implemented KM initiatives and toolkits are more likely to achieve competitive advantage (Sook-Ling, Choo-Kim & Razak 2013). A KM toolkit is a set of activities that an organisation implements for knowledge creation, storage, sharing and utilisation (Sook-Ling et al. 2013). Knowledge management initiatives and toolkits contribute towards achieving competitive advantage in multinational corporations (Fransson et al. 2011). The implementation of KM toolkits within a professional services firm is therefore fundamental towards gaining organisational competitive advantage. Multinational corporations such as professional services firms that operate at global scale often execute complex projects and processes. It is therefore not strange for even minor projects and repetitive business activities to include numerous stakeholders and technologies covering different functions, business areas and even geographies (Muras & Hovell 2014). These firms implement and execute KM initiatives that enable them to leverage expertise, lessons learnt and experience, and ultimately create the impetus towards innovative competitive advantage (Sankowska 2013). Executing a KM toolkit relies on technological refinement. 
According to Muras and Hovell (2014), rapid technological advances and the rise of the millennial workforce have a dispersing effect on an organisation's structure, making it increasingly difficult to identify and access the experts best suited for a specific project. Professional services firms must enable their client-facing professionals to keep up with the pace of innovation and change by seamlessly connecting them to experts and collaboration platforms within and beyond the traditional walls of the organisation. As technological tracking abilities become more refined, and machine learning improves the relevance of retrieved information, organisations are now collecting and measuring more data, and these trends are bound to increase as firms encounter even more advanced technologies (Dallemule & Davenport 2017;Ihrig & Macmillan 2015). Firms are also generating large quantities of information in the form of spreadsheets, SharePoint sites, email and instant messaging. Knowledge management initiatives are therefore necessary to determine where this information is stored and how to turn it into valuable intellectual capital and innovation. Leading organisations acknowledge that innovation does not happen in seclusion; instead, using a variety of platforms for collaboration and knowledge transfer prompt innovation (Andriani et al. 2019;De Smet, Lund & Schaninger 2016). Especially the utilisation of digital workforce platforms may require guidance on how to choose the most suitable tool for a specific requirement and when and how to collaborate and share knowledge (Hetey et al. 2020). Shared knowledge and access to information and experience allow individuals and groups to dedicate their time to shape good ideas and integrate them into innovative products and processes. It has therefore become paramount for organisations to build on their skills to obtain, create and utilise knowledge sustainably and effectively (Marjonovic & Freeze 2012;Muthusamy 2008). Organisations implement KM toolkits to achieve seamless dialogue between knowledge creation, dissemination and innovation (Jenab et al. 2013). Knowledge management tools focus on advocating innovation processes in an organisation, emphasising performance, competitive advantage, sharing lessons learnt, integration and continuous improvement of business processes (Jenab et al. 2013). Knowledge management initiatives support professionals to develop processes that are difficult for competitors to acquire and hard to replicate. Client-facing professionals perform knowledge-intensive work as part of business processes that involve human judgement and experience, complex decision-making and creativity. Knowledge management in client-facing firms Information-based organisations are made up of experts and specialists who direct and guide their own performance via feedback from clients and colleagues (Drucker 1988). Clientfacing firms must have access to whatever knowledge they require, wherever and whenever they need it in order to execute business strategy. A competitive advantage develops when organisations apply strategic information management principles in combination with KM principles to leverage its tacit knowledge (Boljanovic & Stankovic 2012;eds. Galliers & Leidner 2003;Lee et al. 2013). Ideally, organisations should aim at having work processes aligned with knowledge processes as knowledge and continuous learning are critical elements for organisational success (Kianto et al. 2019). 
According to Genderen (2014), sources of knowledge are: 1. Knowledge that one receives from outside the organisation. 2. Dedicated resources who generate knowledge for a specific reason within the organisation. 3. Fusion of knowledge, when people of different expertise are assigned to work together on a specific project. 4. Adaptation of knowledge occurs when there is a need to respond to new technologies or products in the market. 5. Knowledge networking is knowledge generated when people share knowledge in a formal or informal environment. Nonaka and Takeuchi (1995) emphasise the importance of creating new knowledge and they produced a SECI model consisting of socialisation, externalisation, combination and internalisation, which demonstrates how knowledge is essentially formed through interaction. The SECI model has relevance to this study as interaction processes are linked to the effectiveness of a KM toolkit. It is also critical to note the three components of a KM framework, namely, people, process and technology. People, process and technology framework in knowledge management Knowledge management is shaped by interactions between people who create and share knowledge; processes that are ways in which the knowledge is shared, generated, organised and disseminated; and technologies that are devices used to store, generate and disseminate knowledge (Hosseini et al. 2014). The knowledge embedded in the interactions of people, process and technology provides the foundation for competitive advantage in firms (Magnier-Watanabe & Senoo 2009). An organisation's processes must be efficient and flexible enough to overcome present-day difficulties (Andriani et al. 2019;Kir & Erdogan 2021). An efficient organisation is an organisation that has consistent procedures -in other words, it is able to deliver high-quality services at a low cost. However, mere efficiency is not enough; organisations also need to be adaptive. This means that certain external factors may require drastic changes to organisational routine. Organisations must be agile to respond swiftly to changes while continuing with its routine (Kir & Erdogan 2021). Although process and technology are vital aspects of an organisation, it is the capability of people to think that is an even more critical component of organisational efficiency. People identify with specific knowledge assets, and people are the ones responsible for knowledge creation processes in organisations (Magnier-Watanabe & Senoo 2009). For example, people identify with experiential knowledge assets, conceptual knowledge assets, systematic knowledge assets and cultural knowledge assets (Magnier-Watanabe & Senoo 2009). Experiential knowledge assets relate to experience and skills; it is about sharing tacit knowledge and know-how. Conceptual knowledge assets relate to explicit knowledge which is documented. Systematic knowledge assets consist of systematic and packaged explicit knowledge. Cultural knowledge assets relate to organisational routines and ways of doing day-to-day tasks. Because people are integral in KM, there is a certain amount of trust required to achieve the people component of KM (Hosseini et al. 2014). It is with this understanding of the importance of trust as part of the organisational culture that the PSF utilises technology as a platform to facilitate knowledge sharing. The PSF has built its KM toolkit with the intent to connect people and processes, therefore effecting knowledge sharing and usage within a conducive organisational culture. 
Research methodology Since the focus of this study was on evaluating the effectiveness of KM tools, rich text had to be collected to evaluate what has been working well and what has been lacking from the perspective of the client-facing users of the PSF's KM tools. A qualitative research paradigm was best suited to evaluate the effectiveness of the KM toolkit from a pragmatism philosophy because of the various perspectives needed to interpret descriptive data (Saunders, Lewis & Thornhill 2009). Pragmatism is a functionalist frame of reference suitable for evaluation research (Mouton 2009). The researchers followed an inductive approach, which meant that the researchers collected data to conceptualise the value creation framework of the KM toolkit for the PSF. Because a value creation framework was non-existent prior to the study, this study was designed to collect data on the perceptions, attitudes and beliefs of the research participants. http://www.sajim.co.za Open Access As perceptions, attitudes and beliefs are difficult to quantify (Saunders et al. 2009), therefore a qualitative research paradigm was preferred for this study. To collect rich text, the interview schedule was aligned to the study's four research questions and the data collection instrument consisted of semi-structured questions for each KM tool. The four research questions were as follows: 1. What aspects of the PSF KM toolkit are successful and why? 2. What aspects of the PSF KM toolkit are unsuccessful and why? 3. Which aspects of the PSF KM toolkit have been omitted? 4. What aspects of the PSF KM toolkit need to be amended to facilitate efficiency in client engagements? The population of relevance was selected from a single multinational professional services firm based in South Africa. A non-probability, purposive sampling was used to answer the research questions from a list of relevant employees that were best able to answer the research questions and meet the criteria (Saunders & Lewis 2012). Consequently, the research participants represented clientfacing professionals working in client accounts, strategic and target clients, and client sustainability for existing clients. In total, there were 30 participants. The research interviewed all participants with respect to all tools within the KM toolkit. For this research, 'KM toolkit' referred to a set of separate platforms, each having the purpose of meeting client-facing professionals' requirements to add value to client relationships. This article uses tool pseudonyms to assist in the discussion and anonymise the PSF's proprietary tools in accordance with the conditions of obtaining ethics clearance for the research project. The findings from the interviews for each tool were analysed by using an integrated interpretation and discussion method, which means qualitative data analysis steps included looking for patterns or themes in the data, followed by understanding and interpreting each theme. The current value of each tool was determined based on the feedback received from the participants. The feedback was categorised according to the themes that emerged from the data. The current value was given an overall rating, which was dependant on how good or bad each tool was perceived for each theme by each participant. 
Each tool's potential value was based on participant feedback, for example, a declaration of being unaware of the tool and therefore unaware of its value; or the poor quality of data measured against the participant's value criteria expressed in terms of professional client-facing work. The current value and potential value of each tool were analysed using the following rating guide developed for this study: • 0 = none of the participants indicated criteria related to value. • 1 = fewer than 6 participants indicated criteria related to value. • 2 = between 6 and 10 participants indicated criteria related to value. • 3 = between 11 and 15 participants indicated criteria related to value. • 4 = between 16 and 20 participants indicated criteria related to value. • 5 = between 21 and 30 participants indicated criteria related to value. Follow-up face-to-face interviews were made up of five individuals from each of the roles as analysts, consultants, senior consultants, managers, senior managers and partners. The next section presents the research findings, analysis and discussion. Research findings, analysis and discussion Participants' perspectives, attitudes and beliefs regarding KM toolkit elements were analysed for themes and then interpreted. The approach was inductive because a value creation framework for the PSF was non-existent. This meant that the actual application of the KM toolkit in the PSF resulted in the conceptualisation of the KM toolkit value creation framework (Figure 1). Figure 1 illustrates the research findings with themes such as intellectual property (IP), expertise, insights, collaboration, and experience, grouped together with solutions embedded into KM programmes with the intention to connect employees to each other, to connect employees to knowledge assets and to connect experienced employees to inexperienced employees. The conceptualisation of the KM KM, knowledge management; SLATES, search, links, authoring, tags, and extensions and signals. Culture refers to the operating environment and unsaid ethos; it is a crucial determinant of how effectively an organisation adopts and uses its KM toolkit. A combination of the market culture and adhocracy culture best describes the PSF. It hosts an entrepreneurial and creative environment where individuals feel free to make decisions and take initiative and risks. The PSF's leadership are results-driven and hugely competitive with tough, bold leadership culture. The rating guide that was developed in the 'Research methodology' section was used to analyse the findings for the rating of the current value (cv), and potential value (pv), of the Project Tool with elements relating to IP and experience, Eminence Tool, Expert Locator Tool with elements relating to expertise and experience, Content Locator Tool, Research Tool, Compliance Tool, Instant Messaging Tool, and OC Tool, as illustrated in Figure 2. The findings in Figure 2 illustrate the expertise and experience pillars of the KM toolkit -the Expert Locator Tool. Participants perceived the Expert Locator Tool as one of the two most valuable tools in the KM toolkit, with a current value at a level 4 and potential value at a level 5. Verbatim responses are presented in Table 2. The PSF defines 'expert' as an individual who is highly skilled within a specific service and/or widely proficient on a specific industry. The Expert Locator Tool within the PSF equates to an explicit platform of expert locators. 
The responses regarding the value and usage of the Expert Locator Tool highlighted the following common themes: • Search: Tool enables the ability to search and locate expertise. With the increase of agility, quality, awareness and search, the usage and potential value of the Expert Locator Tool will also increase, thereby creating a fully optimal KM tool. These research findings suggest that an education and awareness drive is required to create greater levels of exposure for the value and usage of the tool. The campaign should be included in the PSF's induction and on-boarding process, firstly, to create awareness of the value of the tool and, secondly, to facilitate the immediate update of content. The updating of the Expert Locator Tool should also be included in individuals' performance rating systems to ensure quality of information. Quality of newly added information increases system agility and adds to leveraging of the existing, highquality information. For the expertise pillar of the KM toolkit -the Content Locator Tool -findings of the tool equate to a group of experts who one can access via an IM collaboration platform or email to access content across the firm's geographical reach. The 'I'm tool' did not feature high in Figure 2, with a current value of 0.5, indicating low awareness (28 of the 30 participants were not aware of the tool), and conflicting responses from the other two participants resulted in an inconclusive benefits analysis of this tool. Its purpose and function regarding client engagement are not clear from the participants' perspective. For the IP pillar of the KM toolkit -the Project Tool and the Eminence Tool -the value of the Project Tool and Eminence Tool focuses on retaining the firm's IP and expertise from an eminence and projects perspective. The Eminence Tool focuses on the storage and accessibility of thought leadership pieces generated by the PSF regarding industries, countries and trending topics. The accessibility and availability of these eminence pieces to clients and potential clients add credibility to the PSF. The Project Tool within the PSF is the process of updating past project experience and storing this IP in a tool. The storage and availability of this information also contributed towards adding value and credibility to the PSF specifically when dealing with client proposals, which require experience. An analysis of the responses regarding the value and usage of the Project Tool and Eminence Tool highlighted the following common themes: The current value of the Project Tool is viewed at level 2. To increase its current value to its optimal potential value, participants suggested that senior management together with KM team should champion a drive for the Project Tool to be updated with quality content. Also, keeping track of projects has to be improved; post-project documents have to be completed and stored prior to project close off, and processes should be included in staff key performance measurements. Project tool content should go through a quality check to ensure confidentiality and that other language and logistical aspects align with the PSF's policies. Similarly, the Eminence Tool is viewed at level 2; it is currently used via a different search platform and not to its fullest potential. It could reach its potential value by making the PSF aware of how and where to access content and highlighting the tool's marketing prospects. 
The content validity and relevance to African insights give the Eminence Tool a level 4 potential value ranking. To achieve this level will require a robust communication and education drive within the PSF by using email communications, roadshows, posters and information sessions. Communications can also address the issue of how to effectively access the tool and how it can be used to increase credibility of the PSF in the market. Furthermore, it may be beneficial for the PSF to proactively distribute new material to client-facing employees, which will mitigate the concern of accessing relevant information. Participants also indicated that the PSF could be better aligned with Africaspecific topics, suggesting that a community of practice should be initiated with all industry experts to develop new local material. For the insights pillar of the KM toolkit -the Research Tool and the Compliance Tool contain insights that are relevant only to the PSF. An analysis of participants' value perceptions revealed that only two of the 30 participants made use of the Compliance Tool. Both participants concurred that the tool is valuable to client engagements; it offered company information and organograms, insight of various aspects of the local and global footprint of the PSF, financials and a number of important company statistics. The Research Tool helps with generating new intelligence about industries and trends by utilising a variety of sub-tools. These tools give access to big data to develop new and robust insights, which is then exploited in thought leadership pieces or business decision-making. Its value is perceived based on the PSF's insights into client, company and industry acumen using external sources (various research tools), and internal sources (e.g. the Compliance Tool, discussed further below). Its greatest value is to generate IP and its current value was at a level 4. Two themes emerged from participants' responses: • Data: The quality of the data that resides in the tool. • Accessibility: The ease of accessing this data when needed. The value of the Research Tool can be optimised even further to achieve its potential value at level 5. A simpler navigation would assist users with searching and accessing data. A process of proactively extracting the required data and distributing the data to the PSF would add value. In addition, evaluating the industry and country gaps in data, outsourcing relevant tools to mitigate these gaps, assessing the credibility of existing data, and instituting a call for action to have the data updated would increase the value of the Research Tool. Faster response times for support on the tool, creating user support guidelines, information sessions, and allocating more resources to assist with research requests should be considered. For the collaboration pillar of the KM toolkit -the IM Tool and OC Tool serve as collaboration tools in the PSF; the IM tool is mostly internally whereas the OC Tool is also applicable for external collaboration. The OC Tool, based on a SharePoint platform, focuses on external and internal collaboration both globally and locally. Four key themes about the collaboration pillar were visible from participants' responses: • Collaboration: The degree to which one can collaborate effectively on the IM Tool. • Awareness: The level of awareness of the IM Tool and OC Tool. • Ease of collaboration: Ability to share and use content on the OC Tool platform. • Negative perception: Participants' adverse beliefs about the IM Tool. 
The current value of the IM Tool rated at a 0.5 level, though participants indicated that its potential value is at a 3.5 level. The value of the OC Tool rated at a 2 level, and its potential level at a 4.5 level. Twenty of the 30 participants were aware of the IM Tool, but they maintained that a sharpened awareness of the IM Tool would not result in an increased use of IM for collaboration. The negative perceptions of 16 of the 20 IM-aware participants may have affected the way the PSF shares knowledge and it could in future reduce their agility towards meeting client requirements. There was deep concern from participants that the IM Tool was not easy to use and there was an overload of information, which made it difficult and time-consuming to sift through information to find relevant information. The participants who effectively used the IM Tool were able to see its value by incorporating this integrated collaboration solution. This means that collaboration can occur more rapidly, outputs are more agile, and innovation is possible. As mentioned above, The PSF's organisational culture hosts an entrepreneurial and creative environment, and the potential value of IM could increase its ranked position in the collaboration pillar of the KM toolkit. The KM toolkit pillars, elements and tools, illustrated in Figure 1, integrate the concepts found in the literature to the manner in which the PSF KM toolkit is practically used. Three main components give a theoretical description of professional services firms (Nordenflycht 2010:156): 1. Knowledge intensity: Output is reliant on the knowledge that resides within the firm. This means that the firm has a dependency on an intellectually skilled workforce across all functions, which makes it critical to have effective KM toolkits in place to harness IP. 2. Low capital intensity: Refers to when a firm does not have high production costs. This means that for knowledge-intensive firms with low capital intensity employee bargaining power is greater. Intellectual capital becomes even more powerful. 3. Professionalised workforce: Refers to a firm that has a specific knowledge base, which is regulated and controlled autonomously within a professional code of ethics. These three components describe the PSF -an inherently client-facing organisation operating within a highly intense knowledge economy that seeks to utilise their intellectual capital to be able to generate competencies relevant to the client, stressing the need to innovate their advice to their clients. Intellectual capital and innovation are critical to the success of client-facing firms (Qureshi, Briggs & Hlupic 2006;Seleim & Khalil 2011). The above discussion was of the research findings relating to the tools in the KM toolkit and have been integrated with concepts from the literature as follows: 1. Intellectual Property: Organisations that view information and knowledge as their primary service place high value on their IP. The KM toolkit houses a tool that is able to store IP for current and future use; the central database can be accessed by client-facing professionals to influence and build on client engagements. The tool houses evidence of the PSF's ability to fulfil client deliverables. It is as important for the organisation as it is for potential clients. 2. Expertise: This pillar is a central platform which houses information tools on all professionals within the PSF. Information is updated by users and accessed by all professionals globally. 
The tool is critical to the success of ongoing performance and new business efforts. Distributing information is not enough to guarantee reuse. Access to people with knowledge is as imperative as access to information. Expertise locator tools and people in the network who can assist in finding potential experts for projects are crucial approaches to reusing valuable, relevant knowledge. 3. Insights: A KM toolkit necessitates the incorporation of databases that provides information on clients, competitors and industries. This information is pertinent to client engagements and building strong, credible relationships. 4. Collaboration: For the PSF to be truly innovative, robust and agile, the collaboration pillar must be embedded within an organisational culture that is open to knowledge sharing. 5. Experience: An individual's experience or skill is paramount to the success of a project. Tools in the experience pillar point client-facing professionals in the right direction to find individuals who have experience and skills to perform optimally on projects. Central to the above five pillars of the KM toolkit, the Search, Links, Authoring, Tags, and Extensions and Signals (SLATES) model (McAfee 2006) represents the role of people, process and technology: • Search: Process of finding applicable content, searching for expertise and searching for relevant research material. • Links: Process of linking relevant content to service line and industry pages, ensuring that updated information is easily accessible. • Authoring: Ensuring that knowledge assets are contributed to the knowledge asset management database which links to the author on the Expert Locator Tool. • Tags: When entering content and attachments into the knowledge asset management database, a functionality ensures that the user enters keywords. • Extensions and Signals: An employee advocacy tool makes specific users and clients aware when new content is uploaded according to areas of industry specialisation or interest. This section discussed the pillars of the KM toolkit and the role of interaction between people, processes and technology. Next, the gaps are identified for improvement of the KM toolkit. Each tool within the KM toolkit depicted some elements that did not work very well and needed improvement. Table 3 presents a list of these elements. Gaps identified for the project tool were that there was an overwhelming need for information related to Africa, which remained unmet. This means that the PSF professionals are unable to find information that relates to Africa-specific topics, required by their clients. The inability to provide this information could mean a loss of client business as project experience evidence is not available. A firm-wide Africa campaign should be launched to capture Africa information. The second weakness was that the quality of information provided on the Project Tool was not updated and of a poor standard because information was not being captured suitably. The poor quality of information provided is just as good as not having any information available because poorquality information cannot be used for client engagement. There is a perception that the process of updating information takes too much effort. Another reason for the lack of contribution to the Project Tool could be the PSF's culture of sharing, which is not a proactive culture. People do not openly share information unless requested to do so for a specific engagement. 
The Project Tool therefore has not reached a stage of maturity where it can be trusted for the most updated, reliable information. This tool is the PSF's competitive advantage over other similar firms that are bidding for similar work because it depicts the firm's expertise, and therefore ability, to deliver on the engagement. The lack of contribution to the tool means that the PSF is unable to demonstrate their competitive advantage to clients. Concerning the Eminence Tool, the lack of awareness of where to access the tool led to the perception that the tool is not easily available and accessible, and therefore the participants were unable to find relevant information. Content that resides within this database was searched via a tedious process, which did not always result in the most accurate results. It was also a concern that this tool did not house enough Africa content of value to clients. The objective of this tool however is to centrally store the PSF eminence and not to develop the content. The lack of Africa content therefore could be the reason why this tool is underutilised. The underutilisation of the Eminence Tool means that the PSF is unable to demonstrate its innovative insights by creating the necessary exposure that is required to build vital client relationships. Verbatim feedback is presented in Table 4. About the Expert Locator Tool, though most of the participants were familiar with the tool, many did not know exactly how to use the tool optimally to extract its intended value. There was a perception that it needed an update because it was mandatory but there was a lack of understanding the actual objective of the tool. Content is updated inadequately and consequentially the search results are not optimised. There is a perception that the process of updating information is time-consuming. Updating content means that one can find experts to work on specific projects for clients, being able to locate people who speak different languages for global mobility projects and being able to nimbly put together proposals by accessing relevant information. The value of the tool can therefore be seen to enhance competitive advantage by being agile and technologically innovative. Thus, data on the tool must be kept up to date and relevant. A gap exists with both the Content Locator Tool and the IM Tool in terms of real-time access to relevant information, which is critical in a business that operates within a rigorous competitive climate. Utilising collaboration tools optimally can provide the basis for innovation and crossfunctional thinking that will assist with encouraging a collaborative culture within the PSF. There is a need for the PSF to collaborate externally within a secure environment. Without a conducive environment to share knowledge proactively for the benefit of the entire organisation, the value of each of the tools to ultimately gain competitive advantage will not be realised. Next, the recommendation is based on the research findings, analysis and discussion of the PSF's KM toolkit. Figure 3 presents a summary of components that will most likely improve the efficiency of the PSF's KM toolkit. Recommendation The recommendation is based on the research findings and literature review, which adds machine learning and gamification in addition to the suggestions that emerged from the interview, such as awareness campaigns, onboarding presentations and e-learning courses on KM tools for all staff. 
The implementation of these recommendations will assist with enhancing the KM toolkit by ensuring that the current value of each tool reaches its potential value: • Awareness campaigns: To educate users on the objectives of the KM tools and how they can help with client engagements, and to show users how to access and use the tools optimally. • On-boarding presentations: High-level presentations targeting new staff will drive awareness of all tools. • E-Learning: Online courses on how to use the tools optimally and on the objectives of the tools will help increase understanding and usage among all staff. • Project tool: Emphasise the importance of contributing quality information to the tool as well as Africa content. The lack of quality information means there is a risk of losing valuable IP, which poses a challenge for future client engagements. • Eminence tool: The lack of Africa eminence should be addressed by identifying current Africa issues and themes and generating collateral that supports and adds valuable input to these subjects. • Expert locator tool: For existing staff, it should be integrated with project scheduling processes so that the tool maintains relevance and credibility. • Gamification: Competitive elements like prizes and scores can also be included, with the assumption that people will be inspired to advance or win in the context of the game. Gamification can be used to endorse a range of activities and behaviours, from lobbying innovative ideas to inspiring collaboration. • Content locator tool: Conduct an awareness drive around the time-saving value of the tool. • Research tool: A searchable dashboard is recommended. • IM tool: Needs to be incorporated into an integrated solution that makes it easier to filter information that is relevant and meets individual needs. • Online collaboration tool: It is important that the culture of sharing is further investigated. Discovering the reason for the lack of willing and enthusiastic sharing in the South African PSF will assist with creating remedies towards becoming a more collaborative organisation (verbatim participant feedback on accessibility, marketing and content appears in Table 4). Conclusion An evaluation of the current KM toolkit and a recommendation of how to enhance the KM tools of a professional services firm could assist with the effective application of KM toolkits. The outcome of this research is a conceptual framework for describing and analysing the KM toolkit of a multinational company. The framework indicates the value creation that is realised from the implementation of a KM toolkit. In essence, understanding the reasons for a successful knowledge-sharing culture will add value to the success of any KM programme. This study concludes that it is imperative to understand how KM can evolve with the changing work environment and integrate technological advancements for ongoing process improvements.
Limitation of the study This research was limited to the KM toolkit of the PSF in South Africa. Therefore, the study findings do not depict the perspectives of international PSF employees. However, the findings give an indication of the perspectives of users of the South African based KM toolkit. Funding information This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors. Data availability Recordings and transcripts of the interview data are available on request from the student author.
Nonlinear dynamics of miniature optoelectronic oscillators based on whispering-gallery mode electrooptical modulators We propose a time-domain model to analyze the dynamical behavior of miniature optoelectronic oscillators (OEOs) based on whispering-gallery mode resonators. In these systems, the whispering-gallery mode resonator features a quadratic nonlinearity and operates as an electrooptical modulator, thereby eliminating the need for an integrated Mach-Zehnder modulator. The narrow optical resonances also eliminate the need for both an optical fiber delay line and an electric bandpass filter in the optoelectronic feedback loop. The architecture of miniature OEOs therefore appears as significantly simpler than the one of their traditional counterparts and permits us to achieve competitive metrics in terms of size, weight, and power. Our theoretical approach is based on the closed-loop coupling between the optical intracavity modes and the microwave signal generated via the photodetection of the output electrooptical comb. The resulting nonlinear oscillator model involves the slowly-varying envelopes of the microwave and optical fields, and its stability analysis permits the analytical determination the critical value of the feedback gain needed to trigger self-sustained oscillations. This stability analysis also allows us to understand how key parameters of the system such as cavity detuning or coupling efficiency influence the onset of the radiofrequency oscillation. Our study is complemented by time-domain simulations for the microwave and optical signals, which are in excellent agreement with the analytical predictions. © 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement Introduction Optoelectronic oscillators (OEOs) are microwave photonic systems that concatenate a optical and electronic branch in a closed feedback loop. They have found numerous applications in lightwave and microwave technology, such as in communication engineering, sensing, analog computing, and most importantly, time-frequency metrology (see review article [1]). Indeed, one of the most noteworthy application of OEOs is ultra-low phase noise radiofrequency generation. This outcome can achieved via the combination of photon storage in a long optical delay line and narrowband electric filtering, as initially proposed by Yao and Maleki [2][3][4]. In its most conventional configuration, the OEO for ultrapure microwave generation incorporates a laser, an electrooptical (EO) Mach-Zehnder modulator, a few-km-long optical delay line, a photodiode, an electrical bandpass filter and an RF amplifier, as shown in Fig. 1(a). Using these commercial-off-the-shelf (COTS) components permits to achieve remarkable phase noise performances, down to a record −163 dBc/Hz at 6 kHz offset from a 10 GHz carrier [5]. However, the main drawback of these architectures is that they are bulky, heavy, and energy-greedy -thus not satisfying the fundamental constraints of size, weight and power (SWAP). Several alternatives approaches have been proposed to ensure SWAP convergence for OEOs. One of the most promising approach has been to replace both the delay-lines and electric bandpass filters with whispering-gallery mode (WGM) resonators, which are low-loss dielectric cavities capable of trapping photons for long durations via total internal reflection [6][7][8][9][10][11]. 
The lifetime τ ph of an intracavity photon of angular frequency ω 0 characterizes the optical storage capability of the resonator, and is linked to its quality factor following Q = ω 0 τ ph . At the telecom wavelength of 1550 nm, the photon lifetime in ultra-high-Q WGM resonators can typically vary from few tenths to few tens of µs; equivalently, the loaded linewidth 2κ = 1/τ ph of the corresponding resonances varies from few tens to few tenths of MHz (×2π). Therefore, because they could perform both photon storage and narrowband filtering in the linear regime, millimetric or sub-millimetric WGM resonators have been successfully inserted on OEO loops, and they have permitted a significant reduction of the oscillators in terms of size -see for example Refs. [12][13][14][15][16][17][18][19][20][21]. An additional step can be considered in order to accelerate the SWAP convergence: it is to couple a microwave strip cavity to a WGM resonator with χ (2) nonlinearity, which can then play the role of an electrooptical modulator and eliminate the need for its Mach-Zehnder equivalent [22]. In this case, the three tasks of photon storage, narrowband filtering and nonlinearity can be performed by the WGM resonator: the oscillator therefore becomes a miniature OEO, whose architecture is displayed in Fig. 1(b). As discussed by Maleki in Ref. [23], the main interest of this approach is that it effectively leads to the best SWAP performance for OEOs. At this date, the deterministic dynamics of narrowband OEOs with time-delayed feedback is quite well understood, and it is based on the approach of microwave envelope equations [1]. However, to the best of our knowledge, there is no theoretical model available to analyze the nonlinear dynamics and stability of miniature OEOs. Indeed, understanding the dynamical behavior of miniature OEOs requires an analysis of the electrooptical conversion phenomena that are taking place in a WGM cavity pumped by both a resonant laser and coupled to a RF strip cavity pumped by a microwave signal. These intracavity processes, which involve microwave and optical photons interacting quantum-mechanically, are the fundamental phenomena enabling the concepts of electrooptical WGM modulators [24][25][26][27][28] and ultra-sensitive microwave photonic receivers [29][30][31][32][33][34][35][36][37] . Most works related to electrooptical WGM resonators are restricted to the three-modes operation involving the pump, signal and idler modes. A noteworthy exception is for example the work of Ilchenko et al. in Ref. [28], where they analyzed the intracavity dynamics for an arbitrary number of modes. The multimode analysis is indispensable for the understanding and characterization of the miniature OEO, as these cascaded intracavity interactions contribute to the saturation nonlinearity in the feedback loop, thereby defining the amplitude of the stationary microwave and lightwave oscillations. The objective of this article is therefore to propose a full time-domain model accounting for all nonlinear interactions in miniature OEOs based on electrooptical WGM modulators. We also aim at performing an analytical stability study that will permit the determination of the threshold value of the feedback gain beyond which self-starting oscillations are triggered. This article is organized as follows. Section 2 is devoted to the description of the miniature OEO under study. 
The time-domain equations governing the dynamics of the microwave and optical intracavity fields in the open-loop configuration, corresponding to the multimode model for the electrooptical modulator, are presented in Sec. 3, where the semiclassical equations are subsequently deduced from their quantum counterparts. The closed-loop equations ruling the dynamics of the miniature OEO are derived in Sec. 4, where a stability analysis is performed to determine the threshold gain for the self-oscillations. The optimization analysis is carried out in Sec. 5, while Sec. 6 analyzes the important case of amplifierless miniature OEOs. The last section concludes the article. System The miniature OEO under study is displayed in Fig. 1(b). The WGM resonator is a lithium niobate (LN) disk of main radius a that is used as a resonant electrooptical modulator. This modulator has an optical input, an RF input, and an optical output. The optical input is a telecom laser signal of power P_L and wavelength λ_L ≈ 1550 nm, the corresponding angular frequency being ω_L = 2πc/λ_L, with c the velocity of light in vacuum. The WGM resonator has a free-spectral range Ω_R = c/(a n_g) = 2π/T_R, where n_g is the group-velocity index of the lithium niobate at the pump wavelength and T_R is the photon round-trip time in the optical cavity. The WGM cavity has a loaded quality factor Q = ω_L/2κ, where κ = κ_i + κ_e is the loaded half-linewidth of the resonances at telecom wavelength, while κ_i = ω_L/2Q_i and κ_e = ω_L/2Q_e correspond to the intrinsic and extrinsic (i.e., coupling) contributions, respectively [10]. The WGMs of the resonator that are involved in this process belong to the same mode family. Therefore, they can be unambiguously labelled by their azimuthal order ℓ. Since the pumped mode has azimuthal order ℓ_0, it is useful to introduce the reduced azimuthal order l = ℓ − ℓ_0, so that the WGMs involved in the system's dynamics can be symmetrically labeled as l = 0, ±1, ±2, . . ., with l = 0 being the pumped mode, which has a resonant frequency ω_0. The pump frequency ω_L is very close to the resonant frequency ω_0 of the pumped mode, the detuning being σ_A = ω_L − ω_0. It is convenient to introduce the normalized optical detuning α = −σ_A/κ, such that resonant pumping translates to |α| ≤ 1. The RF strip resonator coupled to the WGM disk has a resonance frequency that matches the FSR of the optical cavity. It has a loaded quality factor Q_M = Ω_R/2µ, where µ is the half-linewidth of the loaded RF cavity resonance. The microwave input of power P_M has a frequency Ω_M very close to Ω_R, with the RF detuning σ_C = Ω_M − Ω_R. Here also, we define the normalized RF detuning ξ = −σ_C/µ, which lies within the resonance when |ξ| ≤ 1. The second-order susceptibility χ^(2) of the lithium niobate crystal is a nonlinearity that mediates the coherent interaction between the microwave photons at Ω_M fed to the RF strip cavity and the optical photons at ω_l circulating inside the WGM cavity. At the photon level, the intensity of this nonlinear interaction is weighted by a normalized coupling parameter g ∝ χ^(2), which has the dimension of an angular frequency [28,33,37,38]. Interestingly, the ratio between the energy of the optical photons and that of their microwave counterparts, ω_l/Ω_R, is approximately equal to their azimuthal eigennumber, which would be here of the order of a few thousands.
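To give a feel for these relations, the short Python sketch below (not from the paper) evaluates the resonator quantities defined above; the numerical inputs (disk radius, group index, quality factors, detuning) are illustrative assumptions, not parameters quoted by the authors.

```python
# Illustrative evaluation of the resonator relations defined above.
# All numerical inputs below are assumed example values, not the paper's parameters.
import math

c = 299_792_458.0                      # speed of light in vacuum [m/s]
lam_L = 1550e-9                        # pump wavelength [m]
omega_L = 2 * math.pi * c / lam_L      # optical angular frequency [rad/s]

a = 2.5e-3                             # disk main radius [m] (assumed)
n_g = 2.2                              # group-velocity index of LiNbO3 near 1550 nm (assumed)
Omega_R = c / (a * n_g)                # free-spectral range [rad/s], Omega_R = c/(a*n_g)
T_R = 2 * math.pi / Omega_R            # photon round-trip time [s]

Q_i, Q_e = 1e9, 2e9                    # intrinsic and coupling quality factors (assumed)
kappa_i = omega_L / (2 * Q_i)          # intrinsic half-linewidth [rad/s]
kappa_e = omega_L / (2 * Q_e)          # extrinsic (coupling) half-linewidth [rad/s]
kappa = kappa_i + kappa_e              # loaded half-linewidth, so Q = omega_L / (2*kappa)
tau_ph = 1 / (2 * kappa)               # photon lifetime, since 2*kappa = 1/tau_ph

sigma_A = -0.3 * kappa                 # laser-cavity detuning omega_L - omega_0 (assumed)
alpha = -sigma_A / kappa               # normalized optical detuning; |alpha| <= 1 means resonant pumping

print(f"FSR Omega_R/2pi = {Omega_R / (2 * math.pi) / 1e9:.2f} GHz")
print(f"loaded Q        = {omega_L / (2 * kappa):.3e}")
print(f"photon lifetime = {tau_ph * 1e6:.2f} us")
print(f"normalized detuning alpha = {alpha:.2f}")
```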
The output optical signal of the WGM resonator is an electrooptical frequency comb whose intermodal frequency is an RF signal corresponding to the FSR of the cavity. This comb is sent to a photodetector (of sensitivity S), which retrieves this intermodal beat frequency and outputs a microwave signal; this signal is subsequently amplified and eventually phase-shifted before being fed back to the RF electrode of the WGM electrooptical modulator (input impedance R_out), thereby closing the optoelectronic feedback loop. The two main tasks to undertake are now (i) to build a time-domain model describing the dynamics of this oscillator, and (ii) to perform the stability analysis of this model in order to determine the threshold gain leading to the self-oscillatory behavior.

Open-loop configuration
The analysis of the open-loop system is an unavoidable preliminary to the study of its closed-loop counterpart. In particular, it is essential for the understanding of the nonlinear dynamics of the WGM electrooptical modulator, which is central to the operation of the miniature OEO.

Quantum formalism
The interactions inside the WGM resonator involve microwave photons of energy ℏΩ_R and optical photons of energy ℏω_l. As explained in Fig. 2, the second-order susceptibility χ^(2) mediates two different processes in the resonator. The first one is parametric upconversion, following ω_l + Ω_R → ω_{l+1}. This interaction is always stimulated, i.e., it can only occur when the WGM resonator is RF-pumped. The second process is parametric downconversion, following ω_l → ω_{l−1} + Ω_R. This downconversion can be either stimulated (it only occurs in the presence of RF pumping) or spontaneous (it always occurs, regardless of RF pumping), with both processes having different microwave photon production rates. The interaction between optical and microwave photons is best described from the quantum-mechanical viewpoint. In that framework, the intracavity fields are described by the annihilation operators â_l for the optical modes and ĉ for the microwave field, as well as by the corresponding creation operators â†_l and ĉ†. All these operators commute, except [â_l, â†_l] = 1 and [ĉ, ĉ†] = 1. The operators n̂_l = â†_l â_l and n̂_C = ĉ†ĉ stand for the photon numbers in the optical and microwave fields, respectively. The optical and microwave input signals are treated as quantum coherent states [39].
Fig. 2. Frequency-domain representation of photonic up- and down-conversion in a WGM resonator with χ^(2) nonlinearity. These two processes can be leveraged to translate microwave energy to the optical domain inside the WGM resonator. When belonging to the same family, the eigenmodes of the resonator with free-spectral range Ω_R are quasi-equidistantly spaced as ω_l ≈ ω_0 + lΩ_R, where l = ℓ − ℓ_0 is the reduced azimuthal eigennumber, and ω_0 is the pumped resonance. (a) Photonic upconversion (stimulated): an infrared photon annihilates a microwave photon and is upconverted as ω_l + Ω_R → ω_{l+1}. (b) Photonic downconversion (stimulated or spontaneous): an infrared photon emits a microwave photon and is downconverted as ω_l → ω_{l−1} + Ω_R.
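The up/down-conversion picture above can be made concrete with a toy quantum model. The sketch below, assuming the QuTiP library is available, builds an interaction Hamiltonian for three adjacent optical modes coupled to the microwave mode; the Fock-space truncations and the coupling rate are illustrative, and the operator ordering follows the conversion processes described above rather than the article's exact display equations.

```python
# Toy model: H_int = g*(a1†*a0*c + a0†*a(-1)*c + h.c.), i.e. annihilating a
# microwave photon upconverts an optical photon to the next mode, and vice
# versa. Truncations and g are illustrative assumptions.
from qutip import destroy, qeye, tensor

N_opt, N_rf = 3, 3                      # Fock-space truncations

def mode_op(index):
    """Annihilation operator embedded in the 4-mode Hilbert space,
    ordered as (a_-1, a_0, a_+1, c)."""
    dims = [N_opt, N_opt, N_opt, N_rf]
    ops = [qeye(d) for d in dims]
    ops[index] = destroy(dims[index])
    return tensor(ops)

am1, a0, ap1, c = (mode_op(i) for i in range(4))

g = 1.0                                 # coupling rate (units of kappa)
H_int = g * (ap1.dag() * a0 * c + a0.dag() * am1 * c)
H_int = H_int + H_int.dag()             # add the Hermitian conjugate

print(H_int.isherm)                     # True: a valid interaction Hamiltonian
```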
The total Hamiltonian of the open-loop system can be explicitly written as Ĥ_tot = Ĥ_free + Ĥ_int + Ĥ_pump, where Ĥ_int is the interaction Hamiltonian corresponding to the quadratic nonlinearity of the WGM resonator, Ĥ_free is the free Hamiltonian corresponding to the cavity frequency detunings, and Ĥ_pump is the Hamiltonian that accounts for the optical and microwave pump fields A_in and C_in, which are defined as A_in = √(P_L/ℏω_L) and C_in = √(P_M/ℏΩ_M). We can now use the total Hamiltonian Ĥ_tot to obtain the equations for the annihilation operators in the Heisenberg picture [Eqs. (6) and (7)], where the temporal vacuum fluctuations associated with losses have been explicitly introduced using the operators V̂_i,l (V̂_e,l) for the intrinsic (extrinsic) optical losses of the mode l, and Ŵ_i (Ŵ_e) for the intrinsic (extrinsic) microwave losses, respectively. These operators have a zero expectation value and obey the usual commutation rules of vacuum fluctuation operators, with the V̂ and Ŵ operators uniformly commuting as well.

Semiclassical formalism
The quantum formalism is required when certain phenomena, such as spontaneous parametric downconversion, need to be investigated in depth. In our system, we are only interested in the macroscopic and deterministic behavior of the intracavity fields, and therefore only the stimulated effects are of interest. In that case, the approach where the fields are treated semiclassically is appropriate and provides sufficient accuracy. Passing from the quantum to the semiclassical model corresponds to transformations where the creation and annihilation operators are replaced by complex-valued, slowly varying envelope variables, following â_l → A_l, â†_l → A*_l, ĉ → C, and ĉ† → C*. By analogy with the photon number operators â†_l â_l and ĉ†ĉ, the real-valued quantities A*_l A_l ≡ |A_l|² correspond to the number of optical photons in the mode l, while C*C ≡ |C|² is the number of microwave photons in the RF strip cavity. Both these photon numbers are dimensionless, and so are A_l and C. However, one should note that while A_l and C are cavity fields, the input fields A_in and C_in are propagating fields: they are such that |A_in|² and |C_in|² correspond to photon fluxes (i.e., numbers of photons per second) entering the modulator when the optical and microwave input powers are P_L and P_M, respectively. Therefore, the unit of the input fields A_in and C_in is s^{−1/2}. In our analysis, we are only interested in the deterministic dynamics of the intracavity fields, and therefore we can disregard the quantum fluctuations (along with any other stochastic influence). Consequently, the quantum Eqs. (6) and (7) can now be rewritten in a semiclassical form in which the new dynamical variables of the system are the complex-valued cavity field envelopes A_l and C, of respective carrier frequencies ω_L + lΩ_R and Ω_R.

Output microwave and optical fields
The output optical field for each mode l is A_out,l = √(2κ_e) A_l − δ_{l,0} A_in, and the total output field is A_out = Σ_l A_out,l e^{ilΩ_R t}. Note that A_out is a propagating field like A_in (and not a cavity field like A_l), and consequently its square modulus |A_out|² is also a photon flux with units of s^{−1}. The corresponding optical output power in units of watts is P_out = ℏω_L |A_out|², and the optical power transmission coefficient of the modulator is therefore |T_opt|² = P_out/P_in = ℏω_L |A_out|²/P_L ∈ [0, 1]. In comparison, the transmission coefficient of a typical Mach-Zehnder electrooptical modulator is instead a sinusoidal function of x and φ, the suitably normalized RF and bias voltages.
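Since the display equations are not reproduced above, the following sketch implements a plausible form of the semiclassical open-loop model consistent with the definitions given so far: damped, detuned envelopes, nearest-neighbour coupling through the microwave envelope C, and a pump feeding only l = 0. The structure, signs and parameter values are assumptions for illustration, not necessarily identical to the article's Eqs. (8) and (9).

```python
# Hedged sketch of the semiclassical open-loop coupled-mode model.
import numpy as np
from scipy.integrate import solve_ivp

N = 20                       # sidemode range: l = -N, ..., N
kappa, kappa_e = 1.0, 0.5    # optical loaded / extrinsic half-linewidths
mu, mu_e = 4.0, 2.0          # microwave loaded / extrinsic half-linewidths
alpha, xi = 0.5, 0.0         # normalized optical and RF detunings
g = 1e-3                     # photon-level coupling rate (units of kappa)
A_in, C_in = 3e3, 0.0        # input fields, units s^(-1/2)

M = 2 * N + 1                # total number of optical modes

def rhs(t, y):
    A = y[:M] + 1j * y[M:2*M]            # optical envelopes A_l
    C = y[2*M] + 1j * y[2*M + 1]         # microwave envelope C
    # np.roll wraps at the edges; acceptable if the comb has decayed at |l| = N
    dA = (-kappa * (1 + 1j * alpha) * A
          + 1j * g * (np.roll(A, 1) * C + np.roll(A, -1) * np.conj(C)))
    dA[N] += np.sqrt(2 * kappa_e) * A_in          # pump feeds l = 0 only
    dC = (-mu * (1 + 1j * xi) * C
          + 1j * g * np.sum(np.conj(A) * np.roll(A, -1))
          + np.sqrt(2 * mu_e) * C_in)
    return np.concatenate([dA.real, dA.imag, [dC.real, dC.imag]])

y0 = np.random.randn(2 * M + 2)                   # few-photon random seed
sol = solve_ivp(rhs, (0.0, 50.0), y0, method="RK45", max_step=0.01)
```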
As far as the microwave output power is concerned, we note that an infinite-bandwidth photodetector would output an RF signal proportional to the incoming optical power, so that we can write V_PD(t) = S P_out(t) = S ℏω_L |A_out(t)|², where V_PD(t) is in volts, while S is the sensitivity of the photodiode in units of V/W. The generated microwave would be a multi-harmonic signal, featuring spectral components of frequency n × Ω_R, with n = 0, 1, 2, . . . The voltage output of the photodiode can therefore be Fourier-expanded into components V_PD,n(t) = ½ M_n(t) e^{inΩ_R t} + c.c., where c.c. stands for the complex conjugate of the preceding term, and M_n(t) is the complex slowly-varying envelope corresponding to the microwave spectral component V_PD,n(t) of frequency n × Ω_R (in volts). The microwave power for the harmonic of frequency n × Ω_R can then be evaluated as P_rf,n = |M_n|²/(2R_out), where R_out is the characteristic load resistance in the RF branch.

Model
The miniature OEO corresponds to the closed-loop system where the output microwave signal of the photodetector is used to feed the RF electrode of the WGM electrooptical modulator. In order to mathematically describe this physical procedure, we assume that only the fundamental tone M_1 [see Eq. (15)] of frequency Ω_R of the photodetected optical signal is fed back to the RF electrode of the modulator, while the DC and higher-harmonic tones are filtered out. Analogously to Eq. (5), we can determine that the microwave photon flux after the photodetector is P_rf,1/ℏΩ_R, where P_rf,1 is the power of the fundamental tone as defined in Eq. (16). In order to close the oscillation loop, the corresponding voltage signal is subsequently amplified and phase-shifted before being injected into the RF electrode of the electrooptical modulator. The envelope C_in,OEO of the normalized microwave signal at the input port of the WGM modulator is then proportional to Γe^{iΦ} and to the photodetected envelope M_1, where Γ ≥ 0 is the real-valued dimensionless feedback gain, which is controlled by an RF amplifier just after the photodiode. All the loop losses are lumped into the feedback term Γ as well (including the portion of the RF signal that is outcoupled for technological utilization, but excluding the strip and WGM resonator losses). We can therefore express the gain as Γ = G_A G_L, where G_A (≥ 1) is the RF amplifier gain, while G_L (≤ 1) is the loss factor of the electric branch. The parameter Φ stands for the microwave roundtrip phase shift, which can be adjusted to any value (modulo 2π) using the in-loop RF phase shifter. From the technological perspective, it is useful to note that the output optical signal (electrooptical comb) of the miniature OEO is proportional to A_out, while the microwave output signal is proportional to M_1. In the latter case, the RF power at the output of the photodiode is P_rf,1, while the microwave power of the signal after the amplifier scales as G_A² P_rf,1; it corresponds to the maximal RF power P_rf,out generated in the miniature OEO feedback loop. By replacing C_in with C_in,OEO in Eq. (9), we obtain the closed-loop model for the miniature OEO, in which the dimensionless constant η is a characteristic optoelectronic parameter of the oscillator (≈ 3.5 × 10^{−3} in our case). Obviously, this efficiency coefficient η is larger when the photodetector sensitivity S is increased; it increases as well when Ω_R is decreased, that is, when the resonator is enlarged. This is due to the fact that the electrical energy yields more microwave photons when their individual energy quantum is lower. This phenomenology indicates that high-Q mm-size WGM resonators, which are characterized by GHz-range FSRs, are the most suitable from that perspective.
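Under the beat-note expansion just described, the envelope M_n gathers the products A*_out,l A_out,l+n. The sketch below implements this bookkeeping; the overall prefactor follows from the convention V_PD,n = ½ M_n e^{inΩ_R t} + c.c. assumed above, and the amplitudes, sensitivity and load resistance are illustrative.

```python
# Sketch: photodetected harmonic envelopes and RF power from the output comb.
import numpy as np

hbar = 1.054571817e-34            # J*s
S = 5.0                           # photodiode sensitivity (V/W), illustrative
R_out = 50.0                      # load resistance (ohm)
omega_L = 2 * np.pi * 193.4e12    # ~1550 nm laser angular frequency (rad/s)

def harmonic_envelope(A_out_modes, n):
    """Envelope M_n of the tone at n*Omega_R (modes ordered l = -L..L)."""
    A = np.asarray(A_out_modes, dtype=complex)
    if n == 0:
        return S * hbar * omega_L * np.sum(np.abs(A) ** 2)   # DC component
    return 2 * S * hbar * omega_L * np.sum(np.conj(A[:-n]) * A[n:])

# Output fields in s^(-1/2): |A|^2 is a photon flux (~1 mW in the l = 0 mode)
A_out = np.array([1e7, 8.8e7, 2e7], dtype=complex)           # l = -1, 0, +1
M1 = harmonic_envelope(A_out, 1)
P_rf_1 = np.abs(M1) ** 2 / (2 * R_out)     # P_rf,n = |M_n|^2 / (2*R_out)
print(f"P_rf,1 = {P_rf_1 * 1e3:.2e} mW")
```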
The reader can note that the overall electrical gain of the feedback loop is in fact the parameter β = ηΓe^{iΦ}, which weights the efficiency of the process that retrieves microwave energy from the output electrooptical comb generated by the WGM modulator via photodetection, and feeds it back as an electrical signal inside the RF strip cavity of the modulator. Also note that since our input optical field A_in is real-valued [see Eq. (5)], we can drop the calligraphic notation and simply write it as A_in: this means that we have arbitrarily set its phase to 0, and as a consequence, the optical phase of all the intracavity fields A_l is determined with respect to the pump laser field.

Numerical simulation of the temporal dynamics
Equations (20) and (21) govern the dynamics of the miniature OEO, and permit a complete theoretical analysis of that closed-loop system. In particular, they allow us to achieve a deep understanding of the system's temporal dynamics via numerical simulation as the gain Γ is varied. Figure 3 displays numerical simulations performed with the fourth-order Runge-Kutta algorithm, where we have considered a total of 41 modes (l = −20, . . . , 20). The initial conditions are set such that there are a few photons in the optical modes and in the RF cavity (|A_l(0)|² ∼ |C(0)|² ∼ 1), and the field variables have random phases. The laser detuning is set at α = 0.5, and the loop phase shift is Φ = 0. The top row displays the time-domain dynamics of some output optical modes P_opt,out,l = ℏω_L |A_out,l|², where A_out,l is defined in Eq. (10). We have numerically observed, as expected, that the dynamics of a given mode l is of the same order of magnitude as (but not identical to) that of its mirror mode −l; for that reason, we have only plotted the modes l ≥ 0, in order to avoid crowding the figures with redundant plots. The bottom row displays the temporal dynamics of the RF signal at the output of the amplifier, i.e., P_rf,out as defined in Eq. (19). For the chosen parameters, the numerical simulations asymptotically yield a non-null value for the pumped mode l = 0, but a null amplitude for the sidemodes l ≠ 0 when Γ < 10.97, leading to a null RF output as well. Once the feedback gain Γ is set to a value higher than 10.97, the sidemode dynamics eventually leads to constant non-zero amplitudes, and an RF signal is generated. We have not observed here any metastable (unusually long) transient behavior, as is sometimes the case in conventional OEOs (see Ref. [40]). When Γ = 12, Fig. 3(a) shows that the pumped mode becomes depleted and exchanges energy with the modes l = ±1, which subsequently settle to a non-null constant value. The dynamics of the other sidemodes (|l| ≥ 2) is still negligible at this point. As shown in Fig. 3(d), this process generates an RF signal at the same timescale, with P_rf,out ≈ 0.04 mW. When the gain is increased to Γ = 20 [Fig. 3(b)], the energy exchange from the pump to the sidemodes is more pronounced, and eventually leads to the situation where the output power in the sidemodes l = ±1 is higher than the one in the pumped mode l = 0 (note however that these are output fields, and not intracavity fields). The sidemode pair l = ±2 starts to have a noticeable amplitude as well. The RF signal dynamics displays a transient behavior qualitatively similar to the one of the optical modes, before settling to a steady-state value P_rf,out ≈ 0.3 mW [Fig. 3(e)]. As shown in Fig.
3(c), a further increase of the gain to Γ = 40 leads to a higher complexity in the pump-to-sidemode power conversion, so that the sidemode pair l = ±3 starts to display sizable oscillations as well. Accordingly, the RF signal settles to a higher value, with P_rf,out ≈ 0.9 mW [Fig. 3(f)]. Several trends can be outlined in the OEO dynamics as the feedback gain Γ is increased. We can first observe that the output optical modes always have a power that is of the order of the laser pump (here, P_L = 1 mW), and that the benefit of increasing the feedback gain is to improve the conversion efficiency from the pump to the sidemodes (up to a certain extent). The top row consistently shows the excitation of additional pairs of sidemodes as the gain is increased, thereby confirming that the WGM resonator plays the role of a dynamical frequency converter. The second observation is that while the optical power is only redistributed amongst the sidemodes, the RF power steadily increases with the gain. The third observation is that when the gain becomes larger, the transient dynamics is shortened while remaining in the µs timescale (set by the κ photon loss rates). However, this shortened transient dynamics induces pronounced, sharply peaked relaxation oscillations. In the next subsection, we will investigate the stability properties of our time-domain model and define the conditions under which self-starting oscillations are triggered in the miniature OEO.

Stability analysis and threshold gain
When the gain parameter Γ is null, the system receives no RF excitation and the steady-state solution of Eqs. (20) and (21) can be straightforwardly derived as A_0 = √(2κ_e) A_in/[κ(1 + iα)], with A_l = 0 for l ≠ 0 and C = 0. This solution is the trivial equilibrium of our oscillator, and it corresponds to a situation where none of the sidemodes with l ≠ 0 is excited. When Γ is very low, conventional wisdom from the theory of self-oscillators (confirmed by our numerical simulations in the previous subsection) suggests that the same situation should prevail, i.e., the trivial solution represented in Eq. (28) should remain stable. However, as the gain is increased, there should be a critical value Γ_cr beyond which self-sustained oscillations are obtained, with asymptotic values C ≠ 0 and A_l ≠ 0. The objective of this subsection is to find Γ_cr analytically. In order to determine the linear stability of the trivial fixed point of Eq. (23), we need to find the Jacobian of the flow corresponding to Eqs. (20) and (21). If we consider an electrooptical comb with 2N + 1 modes, the variables of the perturbation flow are δA_l with l = −N, . . . , N and δC; i.e., the dimensionality of this flow is 2N + 2, and the Jacobian around the trivial solution is a (2N + 2) × (2N + 2) complex-valued matrix. However, one can note that the perturbations δA_l with |l| ≥ 2 are of second order and do not influence the eigenvalue spectrum of this Jacobian. This is due to the fact that the first sidemodes to be excited in electrooptical combs are necessarily the ones adjacent to the pumped mode, with l = ±1, and from there the comb sequentially grows "outwards" in the frequency domain. In other words, the sidemodes l = ±2, ±3, ±4, . . . are excited through a cascaded mechanism that requires the modes l = ±1, ±2, ±3, . . . to be excited beforehand. This phenomenology is similar to the one observed in WGM OEOs with Mach-Zehnder modulators (see Ref.
[15]), but quite different from the one observed in Kerr comb formation, where the first modes to be excited via modulational instability are not necessarily adjacent to the pumped mode [10,41]. Along with the perturbations δA_l with |l| ≥ 2, the perturbation δA_0 of the pumped mode is also irrelevant for the stability analysis, because it is neutrally stable with a null eigenvalue. Therefore, the stability analysis is drastically reduced from 2N + 2 to 3 perturbation variables, namely δA_−1, δA_1 and δC, which obey a linearized autonomous flow in which A_0 is explicitly defined via Eq. (28), while β = ηΓe^{iΦ} is the overall gain parameter in the electrical branch. The Barkhausen phase condition for autonomous oscillators imposes that β should be real-valued, i.e., the phase shifter should be set such that Φ = 0 or π (modulo 2π); as we will see later on, the appropriate sign of β will actually depend on the sign of α. The complex-valued flow in Eq. (24) can be rewritten in the matrix form δẊ = J · δX, where δX = [δA*_−1, δA_1, δC]^T is the perturbation vector and J is the 3 × 3 Jacobian whose eigenvalues decide the stability of the trivial fixed point. From the analytical point of view, it is mathematically difficult to investigate the spectral stability of a three-dimensional Jacobian when it is complex-valued. However, this task is mathematically more tractable for real-valued Jacobian matrices. For this reason, we transform the complex-valued flow of Eq. (24) into a real-valued one by decomposing the perturbation vector and the Jacobian into their real and imaginary parts, following δX = δX_r + iδX_i and J = J_r + iJ_i. As a consequence, by plugging these decompositions into the autonomous flow δẊ = J · δX, we find that Eq. (24) can be rewritten as a six-dimensional real-valued flow with an expanded 6 × 6 Jacobian built from the sub-matrices J_r and J_i, whose entries involve in particular the gain-dependent coefficient q_β = 2κ_e β Im(A_0). Without loss of generality, we will simplify the calculations in the remainder of the article by considering that the microwave signal fed back to the RF strip resonator is resonant, i.e., ξ = 0. The solution of the quadratic Eq. (36) involves coefficients K_1, K_2 and K_3, which are explicit functions of κ, κ_e, µ and α [Eq. (42)]. Equation (40) involves two branches of solutions, the first one being −K_1(K_2 − K_3)/(αA_in²) and the second one being −K_1(K_2 + K_3)/(αA_in²). However, the second branch yields solutions that are about two orders of magnitude larger than the first one in absolute value: these solutions are unphysical and can be discarded in our current configuration. Therefore, we finally obtain the critical feedback gain as Γ_cr = −K_1(K_2 − K_3)/(αA_in²) [Eq. (44)]. The miniature OEO is expected to oscillate when the feedback gain is such that Γ > Γ_cr, and we observe that the feedback phase Φ has to be adjusted differently depending on the sign of α, i.e., depending on the direction of the detuning from resonance in the pumped mode. Figure 4 displays the variations of Γ_cr as a function of the optical detuning α. One can observe that the curve is symmetric with respect to the axis α = 0. Moreover, Γ_cr diverges when α → 0 and when α → ±∞. This can be understood in a first approximation via the variations of the output field A_out,0 = A_in[2κ_e/(κ(1 + iα)) − 1]. On the one hand, when α → 0, the pump is resonant and accordingly A_out,0 is weak, so that the comb photodetection voltage is low, thus requiring a high gain Γ to offset this power deficit.
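As a numerical cross-check of this analysis, one can compute the Jacobian of the closed-loop flow at the trivial fixed point by finite differences and locate the gain at which its leading eigenvalue crosses zero. The sketch below assumes a `closed_loop_rhs(t, y, Gamma)` helper, a hypothetical closed-loop variant of the open-loop `rhs` above, and a trivial fixed point given as a real-valued state vector.

```python
import numpy as np

def jacobian(f, x0, eps=1e-7):
    """Finite-difference Jacobian of the flow y' = f(t, y) at x0."""
    n = len(x0)
    J = np.empty((n, n))
    f0 = np.asarray(f(0.0, x0))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (np.asarray(f(0.0, x0 + dx)) - f0) / eps
    return J

def max_growth_rate(Gamma, fixed_point):
    f = lambda t, y: closed_loop_rhs(t, y, Gamma)   # hypothetical helper
    return np.linalg.eigvals(jacobian(f, fixed_point)).real.max()

def threshold_gain(fixed_point, lo=0.0, hi=100.0, tol=1e-3):
    """Bisect on Gamma for the zero crossing of the leading eigenvalue."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if max_growth_rate(mid, fixed_point) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```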
On the other hand, when α → ±∞, the coupling to the resonance is weak and so is A_0, so that the electrooptical comb generation is poor and the photodetection signal is low as well. Therefore, it appears that an optimal operation of the miniature OEO (i.e., with a low threshold feedback gain Γ_cr) requires detuning the pump laser in between these two asymptotic cases.
Fig. 4. Variations of the critical gain Γ_cr as a function of the optical detuning α. The markers correspond to numerical simulations of Eqs. (20) and (21), while the solid line corresponds to the analytical solution provided in Eq. (44). It can be seen that the stability analysis permits determining with exactitude the threshold gain needed to trigger microwave oscillations. It also appears that the minimum gain is achieved for α ≈ ±1.
Fig. 5. Bifurcation diagrams for the optical output signals P_opt,out,l, for the microwave power P_rf,1 generated by the photodiode (before the RF amplifier), and for the RF power P_rf,out generated at the output of the RF amplifier. The parameters of the system are the same as those of Fig. 3, with α = 0.5 and Φ = 0. The critical value of the gain below which there is no OEO oscillation is Γ_cr ≈ 10.97, in agreement with Fig. 4. Note that as the gain Γ is increased, there are optical mode power switches within a given sidemode pair ±l ≠ 0, while the pumped optical mode l = 0 and the RF signals vary smoothly.
Figure 5 shows the bifurcation diagrams for the optical output signals P_opt,out,l, for the microwave power P_rf,1, and for the RF power P_rf,out generated at the output of the RF amplifier as the gain Γ is varied. The first salient feature is that the optical power in paired modes ±l ≠ 0 displays a switching behavior, with P_opt,out,l ≠ P_opt,out,−l; however, the power in the pumped mode l = 0 and in the RF signals varies smoothly with the gain. This behavior is quite different from the one observed in Kerr optical frequency combs, for example, where paired modes typically have the same power [8][9][10]. The second observation that can be made is that quantitatively, P_rf,1 ≪ P_rf,out, with a ratio that can grow up to four orders of magnitude in our simulations. The third observation is that qualitatively, the RF power P_rf,1 at the output of the photodiode does not increase steadily with the gain, while the power P_rf,out after the amplifier always does.

Optimization: system parameters leading to the smallest threshold gain
In this section, we determine the optimal conditions leading to the smallest value of the critical gain Γ_cr for the feedback gain.

Optimal laser detuning from resonance
We first need to find the optimal detuning α_opt for which the gain becomes minimal. We look for the roots of the algebraic equation d(Γ_cr)/dα = 0 for α > 0, and we are led to an equation that is bi-quadratic in α. It has two roots α²_opt,±; the solution α²_opt,− has to be discarded for being negative (and thus unphysical), while the other solution yields the desired result [Eq. (46)]. This formula can be simplified: indeed, the miniature OEO is generally configured in such a way that the loaded optical resonance linewidth 2κ is much smaller than the loaded RF resonance linewidth. If we write this condition as |κ/µ| ≪ 1 and use this ratio as a smallness parameter, a Taylor expansion of Eq. (46) yields α_opt ≈ ±1 for the optimal detuning. It therefore appears that the laser driving the miniature OEO should ideally be detuned to the edge of the optical resonance, since α = ±1 translates to σ_A = ∓κ. This is confirmed in Fig. 4, where it can be seen that the critical gain Γ_cr is minimal (≈ 9) around α = ±1.
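A numerical confirmation of this optimum can be obtained by sweeping the detuning with the threshold finder sketched above. Here `threshold_gain_at` is a hypothetical wrapper that rebuilds the trivial fixed point for a given α and calls the bisection routine.

```python
import numpy as np

alphas = np.linspace(0.1, 3.0, 30)
gains = [threshold_gain_at(alpha) for alpha in alphas]   # hypothetical wrapper
alpha_opt = alphas[int(np.argmin(gains))]
print(f"threshold gain is minimal near alpha = {alpha_opt:.2f}")  # expect ~1
```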
We note that here, despite the relatively high ratio κ/µ (≈ 0.23), the approximation α_opt = ±1 already appears to be very good, since the exact value given by Eq. (46) is 0.94. As noted above, the precision of the approximation α_opt = ±1 is expected to increase as κ/µ → 0, i.e., when the optical resonance becomes increasingly narrower than the microwave one. From a technological perspective, it is interesting to note that this requirement is fortunately not stringent, as the minimum appears to be relatively flat: in other words, a deviation of ±5% with regard to α_opt still yields a close-to-minimum critical gain value.

Optimal resonator coupling coefficient
The objective here is to find the optimal value κ_e,opt of the resonator coupling coefficient. One should keep in mind that the coefficient κ_i is an intrinsic property of the resonator and cannot be tuned. However, κ_e can be viewed as a coupling efficiency parameter that is indeed tunable, for example by varying the distance (of a few λ_L) between the prism and the resonator in Fig. 1. As a result, the loaded linewidth 2κ can be varied by the same token. The critical gain defined in Eq. (44) is written as a function of κ and κ_e, which is inconvenient in the present case because both parameters are coupling-dependent. We therefore need to rewrite that equation in such a way that a single parameter becomes responsible for the variations of the coupling strength. For that purpose, it is convenient to introduce the parameter ρ = κ_e/κ, which is the ratio between the outcoupling and total losses in the resonator. The resonator is in the regime of undercoupling when 0 < ρ < 1/2 (most losses are intrinsic), overcoupling when 1/2 < ρ < 1 (most losses are extrinsic), and critical coupling when ρ = 1/2. The limit case ρ = 0 corresponds to the situation where the resonator is uncoupled (all losses are intrinsic), while the limit case ρ = 1 corresponds to the situation where the intrinsic losses are null (the intrinsic Q-factor is infinite and all losses are coupling-induced). The critical gain defined in Eq. (44) can now be rewritten as a function of the intrinsic loss parameter κ_i and of the coupling ratio ρ, whose variations from 0 to 1 scan all the possible coupling configurations. It is noteworthy that this coefficient ρ plays a major role in the quantum applications of WGM resonators [42]. In order to find the exact value of the optimal ρ_opt (or equivalently, of the optimal κ_e,opt), one has to insert Eq. (46) into Eq. (44), and obtain this optimal value as the solution of the algebraic equation dΓ_cr/dρ = 0. However, this procedure would be cumbersome, because the equations involved are algebraically long and complicated. Nevertheless, these calculations can be significantly simplified if we straightforwardly consider the approximation |κ/µ| ≪ 1 (along with α_opt ≈ ±1), which gives accurate results as shown in Sec. 5.1 dealing with the optimal laser detuning. In that case, the critical gain can be approximated by the simpler formula of Eq. (49), which yields Γ_cr ≈ 8 with our parameters, a value that approximates quite well the minimum obtained in Fig. 4. Equation (49) also clearly indicates that the critical gain needed to trigger the microwave oscillations in the miniature OEO increases when the resonator becomes too undercoupled (ρ → 0) or too overcoupled (ρ → 1).
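The same numerical machinery can scan the coupling configurations. The sketch below rebuilds the loaded losses from a fixed intrinsic κ_i via κ = κ_i/(1 − ρ) and κ_e = ρκ, and again relies on the hypothetical `threshold_gain_at` wrapper, here parameterized by the loss rates.

```python
import numpy as np

kappa_i = 0.5                        # intrinsic half-linewidth (illustrative)
rhos = np.linspace(0.05, 0.95, 19)   # coupling ratio rho = kappa_e/kappa
gains = []
for rho in rhos:
    kappa = kappa_i / (1.0 - rho)    # loaded losses from the intrinsic part
    kappa_e = rho * kappa            # extrinsic (coupling) contribution
    gains.append(threshold_gain_at(1.0, kappa=kappa, kappa_e=kappa_e))  # hypothetical
rho_opt = rhos[int(np.argmin(gains))]
print(f"threshold gain is minimal near rho = {rho_opt:.2f}")  # expect ~0.5
```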
The optimal value ρ_opt leading to a minimum critical gain is readily found by solving the algebraic equation dΓ_cr/dρ = 0, which leads to the approximation ρ_opt ≈ 1/2, corresponding to critical coupling (κ_i ≈ κ_e and Q_i ≈ Q_e). Numerical simulations indicate that this critical coupling condition is not stringent, and a deviation of ±5% with regard to ρ_opt still yields a close-to-minimum critical gain value. This optimal value permits finding the absolute minimum of the critical gain [Eq. (51)]. For our parameters, we obtain Γ_min ≈ 4.4, which is then the absolute minimum gain needed to trigger oscillations in our miniature OEO. The formula of Eq. (51) indicates that the threshold gain can be lowered by increasing the nonlinearity, the photodetector sensitivity, and the optical power, which was expected; but more importantly, it indicates that increasing the intrinsic Q-factor of the WGM resonator is more effective than increasing the Q-factor of the microwave strip cavity.

Threshold laser power in the amplifierless miniature OEO
In the preceding sections, we have analyzed an architecture of miniature OEO where an amplifier is inserted in the electrical branch, and the role of the stability analysis was to find the feedback strength Γ_cr needed to self-start the microwave oscillation. We had implicitly assumed that the amplifier had a tunable gain, while the optical power was fixed. However, it is possible to have instead an amplifier with fixed gain, while the pump laser is power-tunable. The question in this case is to find the critical laser power P_L,cr that is needed to trigger the RF oscillations. We can use the results of Sec. 4.3 to solve this problem. Hence, considering the relationships A_in² = P_L/ℏω_L and Γ = G_A G_L, we can use Eq. (44) to derive the critical laser power as P_L,cr = −ℏω_L K_1(K_2 − K_3)/(αG_A G_L), where K_1, K_2, K_3, and Φ are the same as in Eq. (44). It results that high-gain amplification allows for lower laser powers, and vice versa. For example, Ilchenko et al. have reported in Ref. [43] a miniature OEO where the laser power was around 70 µW while the amplifier had a gain of 45 dB (i.e., G_A ∼ 180). Conversely, a higher laser power permits the use of amplifiers with lower gain: in fact, if the optical power is high enough, it is even possible to get rid of the amplifier altogether, thereby leading to an amplifierless miniature OEO. The reader can note that amplifierless OEOs have already been demonstrated with conventional fiber-based architectures (see for example Ref. [44]). In our system, eliminating the amplifier mathematically corresponds to setting G_A = 1 in Eq. (52). As a consequence, the architecture of the miniature OEO presented in Fig. 1 is significantly simplified. The critical laser power needed to trigger RF oscillations in the amplifierless miniature OEO can be exactly calculated as P_L,cr = Υ(α, ρ)/G_L. From this analysis, we can now define the absolute minimal optical power that is needed to trigger microwave oscillations in an amplifierless OEO. The procedure for doing so is to consider negligible electrical losses (G_L,opt = 1), optimal laser detuning (α = α_opt) and optimal coupling (ρ = ρ_opt), so that this absolute minimal laser power can be calculated as P_L,min = Υ(α_opt, ρ_opt)/G_L,opt ≡ Υ(±1, 1/2). For our parameters, this value corresponds to 4.4 mW. The reader can also note that the last approximations in Eq. (53) can be readily obtained from Eq. (51) by setting Γ_min = 1 and extracting the equivalent optical power.
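The 4.4 mW figure follows from a simple scaling argument, which the snippet below makes explicit: since the critical gain scales as 1/A_in², i.e., as 1/P_L, the product Γ_cr P_L is constant, and the required gain reaches 1 when the 1 mW reference power is scaled up by Γ_min ≈ 4.4.

```python
Gamma_min = 4.4      # minimum threshold gain at the reference power
P_L_ref = 1e-3       # reference laser power used in the simulations (W)

# Gamma_cr * P_L is constant, so the amplifierless condition Gamma = 1
# is reached when the power is scaled up by Gamma_min.
P_L_min = Gamma_min * P_L_ref
print(f"P_L,min = {P_L_min * 1e3:.1f} mW")   # -> 4.4 mW, matching the text
```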
It should be noted that while amplifierless miniature OEOs have the great advantage of simplifying the architecture of the system, they require a careful management of the thermal effects induced in the WGM resonator by the higher laser power [45][46][47].

Conclusion
In this article, we have proposed a mathematical framework to study the time-domain nonlinear dynamics of miniature OEOs based on nonlinear WGM resonators. Our model uses time-domain equations to track the dynamics of the complex-valued envelopes of the optical and microwave fields. We have performed a stability analysis that permitted us to calculate analytically the threshold value of the feedback gain that is needed to self-start the microwave oscillations. An optimization analysis has also been performed, and led us to the conclusion that the system should ideally be operated at the edge of the optical resonance and close to critical coupling. Further investigation has shown that beyond a certain laser power, RF amplification is not needed anymore and the miniature OEO can become amplifierless. Several open points remain with regard to the technology of miniature OEOs. The first one is to understand the detrimental role played by dispersion, parasitic nonlinearities and thermal effects inside the WGM resonator [10,48]. The second challenge is to understand how the various sources of noise are translated into phase noise in the output RF signal. Modifications of the fundamental architecture can also be considered in order to achieve higher operating frequencies, such as multiple-FSR microwave pumping or frequency multiplication, for example. Finally, these miniature OEOs could also emerge as a technological platform of choice to explore several applications in quantum photonics [38,[49][50][51][52][53]].

Disclosures
The authors declare no conflicts of interest.
"Oh, Sorry, I Think I Interrupted You": Designing Repair Strategies for Robotic Longitudinal Well-being Coaching Robotic well-being coaches have been shown to successfully promote people's mental well-being. To provide successful coaching, a robotic coach should have the capability to repair the mistakes it makes. Past investigations of robot mistakes are limited to game or task-based, one-off and in-lab studies. This paper presents a 4-phase design process to design repair strategies for robotic longitudinal well-being coaching with the involvement of real-world stakeholders: 1) designing repair strategies with a professional well-being coach; 2) a longitudinal study with the involvement of experienced users (i.e., who had already interacted with a robotic coach) to investigate the repair strategies defined in (1); 3) a design workshop with users from the study in (2) to gather their perspectives on the robotic coach's repair strategies; 4) discussing the results obtained in (2) and (3) with the mental well-being professional to reflect on how to design repair strategies for robotic coaching. Our results show that users have different expectations for a robotic coach than a human coach, which influences how repair strategies should be designed. We show that different repair strategies (e.g., apologizing, explaining, or repairing empathically) are appropriate in different scenarios, and that preferences for repair strategies change during longitudinal interactions with the robotic coach. INTRODUCTION Robot mistakes and failures are a very well-known problem in the human-robot interaction (HRI) research community [15].For example, robots can hit objects, interrupt people speaking, misunderstand human speech, not respond to people, and experience Wi-Fi malfunctions [45,50].Therefore, designing repair strategies is extremely important for the success of the human-robot interaction.Very recently, the HRI interest in applying and using robots as mental well-being coaches has increased.Robotic coaches for well-being have been examined for use at the workplace [2,48], in public environments [3,35], in a lab context [1,6,10], and for at-home use [24][25][26].In these contexts, addressing the problem of robot mistakes by designing repair strategies is even more relevant for the success of the coaching practice and for making a step forward towards the deployment of such robotic coaches in real-world scenarios. Past works have investigated the problem of repair strategies [11,28,44,54] but they have several limitations.First, most of them are limited to game-based (e.g., [45]) or task-based scenarios (e.g., [28]), without exploring the extent to which those results may be applicable to other contexts, such as well-being coaching.Second, in most of these studies, the users interacted with the robot in a one-off interaction [45,59].This constrained the design of repair strategies to short-term interactions without considering longitudinal effects (i.e., how the users' perceptions change over time).Third, most of the studies have been undertaken in the lab [28,59], making it difficult to generalise these results to real-world settings. 
In this study, we sought to overcome those limitations by designing repair strategies for robotic longitudinal well-being coaching with the involvement of real-world stakeholders. To this end, we undertook a 4-phase design process in which we involved a professional mental well-being coach, and users who had already interacted with a robotic coach in their workplace. As a first step, we designed a set of repair strategies together with the professional human coach, utilizing their expertise and experience. In the second phase, we deployed a robotic coach at a workplace and compared the two sets of repair strategies over four weeks by involving 12 users who had already interacted with a robotic coach before. After the study, we informed the users of the study set-up, and asked for their feedback on the robot's mistakes in a design workshop. Finally, we reflected on the user study results and user insights with the professional coach from the first phase, in order to further inform the design of repair strategies for robotic well-being coaches. This paper contributes to: 1) designing repair strategies appropriate for robotic well-being coaching; 2) investigating the longitudinal effect of repair strategies; and 3) collecting user perspectives on robot mistakes and repair strategies in a real-world context.

RELATED WORK
Well-being Coaching and Mistakes. While there is no single definition of well-being [36], we focus on the definition of well-being as positive psychological functioning [42]. Mental well-being coaching aims to support coachees to thrive in their life [21] and improve positive psychological functioning [53]. Positive psychology coaching in particular aims to help coachees to focus on positive aspects of their life [46], through exercises where the coachee e.g. focuses on experiences where they felt grateful, in order to enhance positive memories and goal attainment [13,58]. During coaching, difficulties with the coach, such as the coach being vague or not sensitive or supportive enough, not being flexible, and struggling with the concepts of coaching, can disrupt the coach-coachee relationship [8]. Such issues can negatively impact the alliance between the coach and coachee [43], and as such disrupt the interaction [12]. Resolving such mistakes is important so that the coachee can gain the maximum positive impact from coaching.
[Figure: example repair-strategy cards gathered during the design process. Each card lists a scenario (e.g., the robot interrupting a returning user who is enthusiastically sharing an experience, or the robot not responding for 10-20 seconds), a proposed repair (e.g., a simple apology; a technical explanation such as "the microphone is not working well", possibly with instructions for use or a "holding response"; apologising and saying "please continue"; or an opportunity to share again), a rationale ("Why"), session numbers (e.g., Sessions 2, 3, 4, 8, 20), and the contributor's name (e.g., Lucy, Michelle, Jen).]
Robotic Coaching. In a workplace coaching context, robotic forms have been compared, finding a preference for a toy-like robot [48]. In public settings, robots have been used to conduct group Mindfulness sessions [3], finding that robots can be helpful but need to be more responsive; and to conduct private deep-breathing sessions at a health centre [35], finding that participants successfully completed deep-breathing exercises with the robot. Finally, in home contexts, a robotic coach has been deployed in several at-home studies [24][25][26], finding that users' well-being improved, and that the robot was perceived more positively when it related to the user as a companion rather than a coach. While these studies all examined robotic coaching in different contexts, none of them focused on the examination and repair of robot mistakes during coaching. Spitale et al. examined how users expressed their behaviour during the mistakes a robotic coach makes [50]; however, they did not examine what repair strategies robotic coaches should use when they make mistakes. In this study, we present the first steps towards examining what repairs are applicable to robotic well-being coach mistakes.
Robot Mistakes and Repairs in HRI. Within the context of social interactions, past works have focused on understanding robot failures to improve various aspects of human perception towards robots, such as trust [20]. Correia et al. explored how technical failures of an autonomous social robot affect trust during an HRI collaborative scenario [11]. Their results showed that a faulty robot is perceived as significantly less trustworthy. Analogously, van Waveren et al. investigated the effect of robot failure severity on participants' subjective rating of the robot in a room-escape scenario [54], and found that the severity affects the faith participants had in robots in future scenarios. Salem et al.
explored how robot mistakes affect trustworthiness and acceptance in human-robot collaboration [44], and showed that subjective perceptions of the robot and assessments of its reliability and trustworthiness were significantly affected by the robot's performance. Robot mistakes and repair strategies have been largely examined in the context of task-based interactions. Kontogiorgos et al. examined a Furhat robot simulating errors during Wizard-of-Oz cooking instructions [28][29][30], finding that participants responded most positively through non-verbal responses to a robot's explanations. Esterwood and Robert also found that explanations were the most effective strategy for trust repair after repeated mistakes in a box-sorting task [14,16]. Additionally, Sebo et al. found that a robot's apology was a more effective trust repair strategy than denial during a game [45]. However, these studies did not examine the longitudinal effects of different repair strategies.

DESIGN PROCESS OVERVIEW
This work aims to understand how a robotic longitudinal well-being coach could repair mistakes during coaching. We have undertaken a design process that included the four phases depicted at a glance in Fig. 1: (Phase 1) repair strategy design, (Phase 2) user study, (Phase 3) design workshop with users, and (Phase 4) coach reflections on user insights. Each phase was informative for the following one, and we set our goals every step of the way during the design process. Phase 1 aimed at designing two sets of repair strategies for robotic longitudinal well-being coaches. To accomplish this, we first discussed with a professional well-being coach the robot mistakes in well-being coaching (Discussion 1), and we then defined with them potential repair strategies (Discussion 2), as detailed in Section 4. The results of this design phase were the formulation of empathic and non-empathic repair strategies. Building upon these results, Phase 2 aimed at investigating the user perception towards a robotic coach that used empathic and non-empathic repair strategies in a user study, as described in Section 5. We found that users had various opinions and preferences towards the empathic and non-empathic conditions, opening up further investigation. Hence, we designed Phase 3 as a design workshop with users of the study, to better understand the appropriateness of each repair strategy in different scenarios, as detailed in Section 6. The workshop results showed that different repair strategies may be preferred depending on what errors the users experienced, and on personal preferences. Finally, we involved the mental well-being coach from Phase 1 in Phase 4 to gather their feedback and reflect on the findings from Phase 2 and Phase 3, as reported in Section 7.

PHASE 1: REPAIR STRATEGY DESIGN
In Phase 1, we set out to understand how to design repair strategies for robotic coaches, i.e., what a robot should say when it makes a mistake. To do this, we had two discussions with a professional well-being coach, in order to find out how they repair mistakes during a coaching session, and to design appropriate repair strategies for a robotic well-being coach together with them.
DISCUSSION 1: Robot Mistakes
In this first 2-hour discussion with a professional well-being coach, we followed a structure consisting of four parts: 1) showing the coach videos of a robotic coach interacting with participants, including instances of interaction ruptures; 2) asking the coach what the robot did wrong; 3) asking the coach how the robot should attempt to repair those situations; and 4) discussing how these repair strategies should best be deployed in a user study. Below, we detail each part of the discussion.
1) In our previous study [48], we collected video recordings of coachees interacting with a robotic coach in a workplace setting, and subsequently analyzed interaction ruptures, i.e., instances where the robotic coach made mistakes and/or where coachees felt awkward [50]. We used this video data, and selected short clips that contained instances of interaction ruptures (as annotated in [50]). We showed 6 clips in total, ranging from 22 seconds to 2 minutes and 20 seconds.
2) The coach noted that the robot's main mistakes were interrupting the coachee, and not providing a response for a long period of time. Additionally, the robot did not respond appropriately to the content of the user's speech. In the videos from our previous study [50], the robot was responding in a pre-scripted manner, which was a barrier to generating appropriate responses on the go.
3) The coach mentioned that initially, the robot should apologize for the mistake it has made, and then identify what mistake has been made (e.g., "Oh, sorry, I think I interrupted you."), as that is what they would do as a coach in a session. Robot apologies have also been previously used to repair trust in human-robot interactions [40]. The coach suggested that after the apology, the robot should then acknowledge its limitations as a robot: the robot should explain why a mistake happened, or give its best guess of the technical fault if there is no definitive information (e.g., "The Wi-Fi is not working well."); and the robot should emphasize its intent to improve (e.g., "I'm trying to improve."), since self-awareness of robot limitations could put people at ease.
4) In order to examine users' perceptions of a robotic coach's repair strategies, we discussed with the coach whether the robot should make intentional mistakes during a coaching session, and then deploy the repair strategies defined above. We decided to examine the most common mistakes of the robot, i.e., interrupting and not responding, which we identified from the videos. In order to collect repair strategies with the structure defined above, we scheduled a second discussion with the coach.
Outcome: identifying common robot mistakes during coaching interactions (interrupting and not responding), and creating the structure for repair strategies (apology, explanation, and intent to improve).
DISCUSSION 2: Repair Strategies
In this 1.5-hour discussion, we used bodystorming [38], i.e., we simulated an interaction with the robotic coach by asking the professional coach to act as the robotic coach, with two researchers acting as coachees. We did this in order to experience the coaching session, the mistakes, and the perception of the planned repair strategies from the perspective of the coachees. In preparation for the discussion, we adapted four existing well-being exercises with the coach: (1) savouring, where the coach asks the coachee to reflect on a positive memory in the recent past [47]; (2) gratitude, where the coach asks the coachee to think of things they were grateful for [18]; (3) accomplishments, where coachees reflect on accomplishments [18]; and (4) one door closes, one door opens, where coachees think of a time when a door closed (i.e., they missed out on an opportunity), and what doors opened as a result (i.e., new opportunities arose), in order to cultivate optimism [27].
During the bodystorming, we asked the coach to act as a robot and intentionally interrupt and not respond to the researchers, and then use the repair strategy we designed in Discussion 1 (i.e., apology, explanation, and intent to improve; see Sec. 4). We provided the coach with a list of causes of robot errors such as interrupting and not responding, to use in their explanations: microphone fault, processing error, slow Wi-Fi, and error in speech understanding.
We structured the 1.5-hour session as follows: 1) 30 minutes of coaching, mistakes and repair strategies, with 15 minutes per researcher while the other took notes; 2) 15 minutes of discussion; 3) 30 minutes of coaching, mistakes and repair strategies, with 15 minutes per researcher while the other took notes; and 4) 15 minutes of discussion. We detail each part of the discussion below.
1) The coach conducted each exercise with the two researchers, acting as the robotic coach, and asked for 2 instances of positive experiences during each exercise (e.g., 2 moments when the coachee felt grateful). During each instance, the coach made a mistake (not responding or interrupting), and then used a repair strategy (apology, explanation, and intent to improve). We collected 8 different phrasings of repair strategies in this manner.
2) After the first half hour, both researchers and the coach had a discussion on how the session went. Both researchers noted that despite the employed repair strategies, they did not feel understood or listened to, and that the coach did not understand how they felt. The coach also mentioned that it could be helpful to ask the coachee how they felt after the coach made a mistake. Both researchers noted that the coaching interaction appeared awkward. We decided that to resolve these issues (which have also been previously reported in robotic coaching [1,3,48]), the robotic coach should be empathic. In fact, empathic communication in a therapeutic context can help a client feel listened to, understood and accepted, and have their feelings validated, improving outcomes for well-being [23,56]. Empathy can also be conducive to resolving affective ruptures in interpersonal interactions [17]. The coach suggested that we amend the repair strategy structure to be empathic as follows, the additions being the emotion-related steps: apology, ask for user emotion, empathize with user emotion, reassure user, explanation, and intent to improve.
3) For the second 30-minute coaching session, each researcher again did two exercises with the coach. This time, the coach used the empathic repair strategies as designed above. Again, the coach administered each of the four exercises, with 2 instances of each, resulting in 8 different phrasings of empathic repair strategies.
4) In the final discussion, we found that the coach felt the empathic strategies better suited a robotic coach, that the researchers felt listened to and heard, and that the coach understood how they felt. The researchers also observed that the interaction appeared less awkward. We concluded that in order to examine how empathy impacts users' perceptions of repair strategies, we should conduct a between-subjects study comparing the two different types of repair strategies (empathic and non-empathic). We also discussed that in order to investigate how users' opinions of robotic coach repair strategies evolve over time, the study should be longitudinal.
Outcome: a study design comparing empathic and non-empathic repair strategies, and 8 phrasings of each type of repair strategy.

PHASE 2: USER STUDY
In order to examine the empathic and non-empathic repair strategies defined in Phase 1 (Sec. 4), we conducted a longitudinal user study in which the robot executed those repair strategies (Phase 2 in Fig. 1). The study design, the experiment protocol, and the consent forms were approved by the Ethics Committee of the Department of Computer Science and Technology, University of Cambridge.

Protocol & Questionnaires
We conducted the study over four weeks, with one well-being exercise (of a maximum of 10 minutes) administered by the robotic well-being coach per week. Users interacted with the robot at their workplace, in a room reserved for the study, with the robot standing on a table 1.5 meters away from the user, and the user seated on a chair next to the table. After each interaction, users filled in the PANAS questionnaire about their positive and negative emotions [55], the RoPE questionnaire about the user's perception of the robot's empathy [9], the RoSAS questionnaire about the user's perception of the robot's social attributes [7], and the MDMT questionnaire about trust in the robot [34]. At the end of the study, users took part in a semi-structured interview about their overall experience with the robot (15-20 minutes). This interview had two parts: one with general questions about the robot, its mistakes and repair strategies; and one after disclosing the study protocol and the pre-planned mistakes to the user, followed by questions about the robot's mistakes and repair strategies. The interview questions are reported in the Supplementary Material (Sec. 1).
Users
The users (n = 12) were recruited from the host company, Cambridge Consultants Inc., and had previously interacted with a robotic well-being coach in a 4-week study [48], where they experienced commonly known robot errors (such as interruptions and slow responses). We selected these experienced users in order to mitigate the novelty effect [4], so that we could engage users in a critical discussion about the robot's mistakes and repair strategies after the study, which would not be influenced by the novelty of a robotic well-being coach. Users were also screened for anxiety (GAD-7 questionnaire) [57] and depression (PHQ-9 questionnaire) [31]. We chose this screening in order to not use a robotic coach with a clinical population, which we do not consider ethical prior to thorough examination with a non-clinical population. The users were split into two groups (n = 6 per condition), one group experiencing the empathic repair strategy condition, and the other the non-empathic repair strategy condition. 1 user was aged 18-25, 4 were aged 26-35, 3 were aged 36-45, and 4 were aged 46-55. 3 users were female, 1 non-binary, and 8 male. Minority genders (female and non-binary) were balanced across conditions, with 2 in each condition. Users rated their previous experience with social robots as M = 3 (SD = 1.044) on a scale from 1 (lowest) to 5 (highest). All users were native or fluent speakers of English, and all had an Undergraduate, Master's, or PhD degree. The users' demographics reflect the demographics of the host company Cambridge Consultants Inc., and as such have the distribution of ages, genders, and degree statuses described above.

Robot Platform and Architecture
We used the QTrobot by LuxAI S.p.A., a 90 cm tall, tabletop, child-like robot with static legs, 4-degrees-of-freedom (DOF) arms, a 2-DOF neck, and a screen face. We chose this robot as it has previously been used successfully as a robotic coach [3,48]. The fully autonomous robotic coach was implemented using our newly developed VITA system [49], an open-source, multi-modal, LLM-based system for longitudinal and adaptive robotic mental well-being coaching that builds on the HARMONI framework [51].

Exercises
The robotic coach administered a different Positive Psychology exercise each week, delivering the same four exercises used during Discussion 2 (Section 4). Each exercise consisted of the robot asking for two different examples from the users, and two follow-up questions per example. Follow-up questions were generated by sending the user's utterance in response to the robot's questions to the ChatGPT gpt-3.5-turbo model via the OpenAI API. We used ChatGPT to generate follow-up questions in order to minimize the impact of a pre-scripted robot not responding appropriately, which was a mistake the coach identified from the robotic coach videos in Phase 1 (Sec. 4).
Administered Robot Mistakes and Repairs Each week, the robot was pre-programmed to make two mistakes, each during one of its utterance turns during the interaction.While the conversational flow itself was automated (as described in Sec.5.4), this conversational flow was interrupted at pre-determined times to administer the mistake and the repair.The timing and type of mistake was counterbalanced across the sessions to avoid repetitiveness (each timing and type is listed in Tables 1, 2 and 3 in the Supplementary Material).These mistakes were either interrupting the user (3-7 seconds into the user's speaking turn), or not responding to the user for a longer period of time (12-18 seconds).These mistakes were chosen since they were the most common mistakes in robotic well-being coaching, as identified from watching the robotic coaching videos with the professional coach in Phase 1 (Sec.4).The robot made these mistakes at different stages (either after initially explaining the exercise, or when asking the follow-up questions).The mistakes were distributed as follows: (Session 1) interrupting and not responding, (Session 2) interrupting twice, (Session 3) not responding twice, (Session 4) not responding and interrupting. Study Conditions Our two study conditions for the between-subjects study were the robot administering either empathic repair strategies, or nonempathic repair strategies.The repair strategies were deployed whenever the robot made a pre-planned mistake, and were constructed together with the professional well-being coach. The basic structure of both conditions was defined together with the professional well-being coach, to include an apology, a technical explanation for why the error occurred (realistic explanations for why each type of mistake typically occurs in HRI-e.g., microphone or Wi-Fi malfunction [50]), and intent to improve (to reassure the user).In the empathic condition, the robot would also ask the user about the emotion they were experiencing due to the mistake, cognitively empathize with the emotion [5] (i.e.repeating the emotion of the user to acknowledge it), and affectively reassure them [39] (aiming to reduce worry and to reassure the user that the robot is attempting to listen to them).To cognitively empathize and affectively reassure the user, the user's utterance in response to the robot's question about how they were feeling was sent to ChatGPT gpt-3.5-turbomodel via OpenAI APIs, and ChatGPT was prompted to return the utterance's emotional valence (positive, neutral, or negative), and the specific emotion (repeated back to the user by the robot when ChatGPT returned the utterance's emotion as negative, to acknowledge it). Repair Strategy Construction. The repair strategies and examples of each were constructed (with differences between conditions italicized) with the following structure.The specific wording of each repair strategy was different, and was based on the professional coach's phrasing.A full list of all repair strategies is made available in Supplementary Material (Sec.2).Non-empathic: Apology, explain technical error, intent to improve, ask for repetition of user's previous utterance Example: "Oh, sorry, I think I interrupted you.My microphone isn't working well today.I'm trying to do better.Could you repeat what you were saying before I interrupted you?" 
Empathic: Apology, ask for user emotion, cognitively empathize with the user's emotion [5], affective reassurance [39]

User Study Findings

5.7.1 Data Analysis. Due to the small sample size for a between-subjects study, we use descriptive statistics to describe the quantitative differences between user groups' perceptions of the empathic and non-empathic repair strategies, as well as the longitudinal perception of the repair strategies. For qualitative analysis, we use Framework Analysis [52], consisting of the steps of: (1) familiarization with the data, (2) identifying a thematic framework, (3) indexing, (4) charting the data, and (5) interpretation of the data.

5.7.2 Quantitative Results. We present these results to contextualize our qualitative results, as well as the user design workshop (Sec. 6) and coach feedback (Sec. 7) findings. As described in Sec. 5.1, we administered questionnaires after each session, and after the four sessions of the study. We also measured users' well-being with Ryff's well-being questionnaire [42] pre- and post-study (min.: 18, max.: 108). The median well-being scores were: Empathic condition, 100 (pre-study) and 105 (post-study); Non-empathic condition, 90.5 (pre-study) and 92.5 (post-study). These results confirmed our expectations of no significant impact on well-being, due to the negative impact of the planned mistakes on the coaching experience, as well as the screened user group with high levels of well-being to begin with (see Sec. 5.2).

We present post-study quantitative measures in Table 1. As post-study measures, we administered the WAI-SR (Working Alliance Inventory - Short Revised) questionnaire [37] to measure user alliance with the robotic coach, and the SUS (System Usability Scale) questionnaire [19] in order to examine users' perceptions of the robotic coach between conditions. Additionally, we administered custom questions on a Likert scale from 1 (lowest) to 5 (highest), about the robot's understanding of what the user said, felt, and how it adapted to them; the robot's mistakes in terms of whether it made mistakes, whether the users understood why it made mistakes, were irritated by the mistakes, whether the session was disrupted by the mistakes, and whether the robot repaired the mistakes; and on the repairs in terms of how appropriate they were, how appropriate the amount of repairs was, whether the repairs were administered at the right time, and whether the repairs were empathic. The data indicate that the robot was perceived slightly more positively in the empathic condition for alliance, usability, understanding, and success of repairs than in the non-empathic condition. For mistakes, the robot was also perceived more positively, with users better understanding why the robot made mistakes, and feeling less disrupted and irritated.

We find the longitudinal quantitative data from week-by-week measures to be a useful indicator of users' experience. We calculated the median for each measure (reported in Sec. 5.1) for each week, within each condition group and across all users. Our data show decreasing trends across all users and within both conditions throughout the weeks (Fig.
2) for trust in the robot (MDMT), robot's perceived empathy (RoPE), and a similar trend for the empathic condition for the robot's social attributes (RoSAS).These data indicate that over time, the users' experience with the robot was decreasing in quality.The qualitative data in the next section (Sec.5.7.3) helps understand why this may be the case.Mood (PANAS) measures remained relatively stable across the weeks, with the non-empathic group experiencing lower mood throughout.Together with the other measures, this may indicate a more negative experience with the robot in the non-empathic condition.5.7.3 Qualitative Results.We analysed our qualitative results with Framework Analysis [52], to examine users' longitudinal experiences, and the two condition groups (empathic and non-empathic).We present quotes related to these themes in the Supplementary Material (Sec.3).For instance, some users called the empathic strategies of the robot "caring" (P20) and that it "adapted to the sentiment" they were feeling (P01), but that the empathy became less genuine over time (P01) and was "a bit over the top" (P05).Some users stated a preference for not needing empathy in a robot: "it felt too much like a machine and empathy doesn't apply" (P02). In terms of longitudinal experiences, users noted that in the initial weeks, they were trusting of the robot recognizing its mistakes, and of its repairs.However, this trust decreased throughout the weeks, as shown by quantitative (Fig. 2).For instance, explanations were viewed as helpful to understand the robot was not "broken" (P09), however users noted that the robot's explanations seemed less genuine towards the end of the study and felt like "excuses" (P11).In fact, some users noted that repairs "made [the interaction] worse" (P04), disrupting the interaction.The repetitiveness of the repairs was also viewed as disruptive, and that they were not "meeting my needs, as in not understanding" (P06). The challenges for repair strategy design found by this analysis are 1) users become less receptive to the repair strategies over time, 2) users' personal preferences for repairs (especially empathic vs non-empathic repairs), and 3) repairs are sometimes helpful but sometimes disruptive.Users had extensive feedback on how repair strategies could be improved, and when and how often to apply them.To investigate this further, we organized a design workshop with the users as the next phase of the study. PHASE 3: WORKSHOP WITH USERS We conducted a design workshop with users from the study in Phase 2 ( = 10, with P04 and P07 not available to attend), in order to gather their opinions on how a robotic well-being coach should deploy repair strategies in different scenarios during coaching.We discussed the challenges identified in Phase 2 (Sec.5), namely the negative perception of repair strategies over time, users' personal preferences, and when repairs are helpful and when disruptive.The workshop was conducted in an online video call, with a Miro board for collaboration 3 .The workshop was structured into five sections: (1) showing users videos of both repair strategy conditions (empathic and non-empathic) to inform them of the robot's capabilities, (2) warm-up exercise, (3) thinking through appropriate repairs for pre-defined scenarios, and generating mistake scenarios that each user experienced, (4) thinking through appropriate repairs for those user-generated scenarios, and (5) distilling insights for robotic well-being coach repairs for different contexts. 
Repair Strategy Preferences for User-generated Mistake Scenarios We have included the 12 user-generated mistake scenarios in the Supplementary Material (Sec.4).In addition to the robot's preprogrammed mistakes of interrupting and non-responding, users generated other scenarios (referred to as "Sc." throughout this paper) of robot mistakes, e.g. the robot misunderstanding them or asking generic questions.Users could select multiple options for repair strategies for their scenarios, from 5 categories (including the categories "empathic" and "technical explanation" which they experienced in the study, and "instructions on use", "brief apology", and "do nothing", which they suggested in the post-study interviews) and also add another suggestion in the "other" section.Overall, for the 12 user-generated scenarios, user responses from the categories tallied up as follows: do nothing (32), empathic (15), brief apology (48), technical explanation (6), and instructions on use (10).Do nothing or apologize briefly -Users show an overall preference for the robot either doing nothing or giving a brief apology. Reasons for these included "For a small error just continue with the session" (P06, Sc. 4), "Better to move on" (P01, Sc. 4), "Not worth dwelling over, but [use] quick apology for throwing the user off."(P08, Sc. 5).Users explained that longer repairs may in some cases distract from the session, and the robot should "continue momentum" (P01, Sc. 4) and "try and get [the] conversation back on track" (P05, Sc. 5). Empathic -There were no scenarios where most users wanted empathic repairs.In scenarios where users did want them (2-3 users), the reasons were e.g."Being empathetic, efficient and give advice on improving user experience seems useful." (P11, Sc. 2), and "Any empathetic apology should be to validate user feelings and be quick [..] Providing advice on how to avoid this error would be useful again to continue conversation along."(P11, Sc. 7).One user noted that a repair "Should be an empathic apology but not necessarily asking how user felt." (P01, Sc.10), wanting an empathic repair but in a brief format. Technical explanation and Instructions on use -Users preferred instructions over explanations.Reasons for instructions were e.g."I would like to know how to get robot to repeat question in the future if it is my fault."(P11, Sc. 9), "If the robot can't adapt to the user can the users adapt to the robot" (P06, Sc. 10).Users wanted instructions to interact with the robot when they could be able to correct the error.In contrast, explanations should be used e.g."If the reason for not understanding is known, try and point it out to the user so they can address it."(P05, Sc. 10) and "Given it's a multiple repeat I'd want to know that it has realised it's mistake and by giving an explanation I would assume it can actually move on."(P03, Sc. 11).In these cases, the user wanted the robot to inform them of the cause of the error to increase the transparency of whether the robot is aware of the error and the reason for it, as well as whether the user should further address it (outside of the session).Technical explanations and instructions could be used in tandem, e.g., "Robot should explain the technical error, perhaps it did not hear properly and microphone should be moved closer." (P09, Sc. 7). 
User Insights and Discussion While the previous sections show that users had significant personal preferences, we conducted a final discussion at the end of the design workshop to shape some more general insights for robotic coach repair strategies.These insights are detailed here. To repair or not to repair?-In general, users wanted the robot to repair when the mistake was more disruptive to the session than deploying a repair would be.Users noted the robot should not repair minor mistakes, because paying attention to the mistake could cause further disruption to the session.Minor mistakes were e.g., "if it interrupts the user momentarily" (P11), "it interrupts the user and the user keeps talking" (P08), "if it's likely a one-off issue that will not impact further" (P03).In these cases, the users preferred the robot to either do nothing or give a brief apology.The robot should also not repair if it disrupts the session: "if the user is in the flow and it would be detrimental to the session to interrupt the user" (P10), and "the repair would be more interrupting / distracting than what is trying to be repaired".Users noted that the repairs would need to be selected to be less disruptive (i.e., in the case of a minor mistake, a more detailed repair than a brief apology might be disruptive).Scenarios where the robot should definitely repair were "when the user needs to participate in the fix" (P03), or "if the robot's response depends on the user" (P11).In terms of longitudinally administering repairs, users noted that initially the robot should focus on introducing its main functionalities.In the next few sessions, it is important to administer repairs to introduce the repair capability to the user and improve their experience (P01), and to give the user any necessary instructions to resolve issues (P10).Users noted that over time they would expect less repairs, since the robot is not a "stranger" anymore (P01), and that it "can cut through and carry on" (P10). When to use empathic repairs?-In general, users wanted empathic repairs when they were likely to feel frustrated.This could be the case in "repeated mistakes, especially when I have been talking for a while already" (P05), "when it interrupts during a story/detailed explanation of an experience" (P03), and "when the error has affected the session in a way that could have negatively impacted the flow" (P10).However, one user noted that empathic repairs could make the user "feel more frustrated" (P01).This indicates that empathic repairs should be dynamically deployed based on the user's response to the repair itself. When to give technical explanations?-In general, explanations were wanted to increase transparency in the case of severe technical errors.For example, "when the error has occurred for the first time overall for rare errors" (P12), and "when an error has occurred that is outside the user's control" (P10).However, the explanation should be "something the user can understand" (P03). When to give instructions on use? -In general, users wanted instructions from the robot when the mistake was something that the user can help correct, during the first few interaction sessions with the robot.Examples were "when the user can actually make a simple fix (like move the microphone closer)" (P03) and "if the error has to do with the user's way of communicating" (P11). 
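These insights read like an informal decision procedure for when and how a robotic coach should repair. The sketch below is one possible operationalization of them; the attribute names, repair categories, and thresholds are illustrative assumptions distilled from the workshop discussion, not a system used in the study.

```python
# Illustrative repair-selection policy distilled from the workshop insights;
# attribute names and thresholds are assumptions, not part of the study system.
from dataclasses import dataclass

@dataclass
class Mistake:
    severity: str            # "minor" or "major"
    user_can_fix: bool       # e.g. the user can move the microphone closer
    user_frustrated: bool    # inferred from the user's response
    session_number: int      # 1-based index of the coaching session
    occurrences: int         # how often this mistake type has happened so far

def select_repair(m: Mistake) -> str:
    # Minor, one-off mistakes: do nothing or apologise briefly to keep momentum.
    if m.severity == "minor" and m.occurrences <= 1:
        return "do_nothing" if not m.user_frustrated else "brief_apology"
    # Early sessions: instructions help when the user can participate in the fix.
    if m.user_can_fix and m.session_number <= 2:
        return "instructions_on_use"
    # Likely frustration (e.g. repeated interruptions): acknowledge the feeling, briefly.
    if m.user_frustrated:
        return "brief_empathic_apology"
    # Rare but severe errors outside the user's control: explain, for transparency.
    if m.severity == "major" and m.occurrences <= 1:
        return "technical_explanation"
    # Repeated, known errors later in the relationship: keep repairs short.
    return "brief_apology"
```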
PHASE 4: COACH REFLECTIONS To reflect on the insights provided by the robotic coach users, we took these insights back to the professional well-being coach who had collaborated with us to design the robotic coach and its repair strategies in Phase 1 (Sec.4).In this discussion (1 hour), we asked the coach to reflect on the results of our user study and the collected user insights, and to give their opinions on them, as follows. Robotic coach repair strategy design can not be solely based on professional coach repair strategies -We can not conclude simply that "empathic repair strategies are better than nonempathic ones", as was the original expectation of the authors and the well-being coach.Instead, our findings indicate that a robotic well-being coach is fundamentally different in its capabilities when compared to a human coach, and as such users' expectations are different.The professional coach noted that repeated social mistakes (interrupting or non-responding) in human coaching are rare, as humans have the capacity to better intuit social signals and thus make fewer mistakes.As coachees do not expect many mistakes in human coaching, empathic repairs may be helpful in the rare situations where mistakes do occur.However, due to limited robot capabilities, social mistakes occur often in robotic coaching (as shown by the videos in Sec. 4 [50]).Due to repeated mistakes, while the coach's and researchers' intuition was that empathic repairs would be similarly applicable to a robotic coach, the user study (Sec.5) and design workshop with users (Sec.6) contradicted this intuition.Some users enjoyed empathic repairs in the first few interaction sessions, but became averse when repeated in later interactions, viewing them as disingenuous.Other users experienced the empathic repairs as disingenuous already from initial interactions, due to viewing the robot as a tool that empathic expressions are incongruous with.As such, repair design needs to acknowledge the limited capabilities and higher error rate of robotic coaches.Repair strategies are not always necessary -In some instances users perceived the repair strategies as disruptive (see Sec. 5-6).The professional coach agreed that repairs could be detrimental if they were "long-winded explanations" that may be perceived as excuses rather than helpful.The coach agreed that users may "get used to a robot and its errors", and due to this, repairs could be reduced over time in the case of repetitive errors.Future research is needed in to distinguish whether and when a user no longer requires repair for a specific error, e.g. by detecting a user's behavioral signals. 
Repairs should be utilized according to user preference -There is no "one-size-fits-all" with regards to non-/empathic (or other) repairs in robotic well-being coaching.Some users appreciated the empathic repairs and viewed them as increasing trust and improving the interaction, while some viewed them as disingenuous and even deceptive due to the fundamentally non-empathic nature of a robot as a machine.The professional coach noted that at times they adjust their level of empathic expression to their client, by matching their expression to that observed from the client.Previous research proposed the adaptation of a robot's empathic expressions to a user's positive emotional signals [32].Future research is needed to analyze user feedback during the administration of the repair strategy itself (i.e., observing a user's response to a repair via behavioral signals), and accordingly adjusting future repairs.Robot-specific repair strategies include technical explanations and instructions on use -The professional coach noted that technical error explanation is a robot specific behaviour, and there is no direct comparison to human-to-human coaching, but that if they were making a repeated error, they would want to give an explanation to their client.Future research is needed to develop robot awareness of mistake occurrence, cause, and to generate authentic technical explanations.In terms of instructions on use, the coach compared this to the situation where their client may have hearing loss, and the coach might in turn increase their speech volume.Future research should focus on how robots may detect and adapt to such personal requirements of each user, while applying appropriate personalised instructions on use of the robot. Repetitions of repair strategies are detrimental -Due to social interaction timing issues (i.e., mistakes such as interruptions and non-responding) that are inherent to social robots [22,33,50], robotic coaching can have an awkward rhythm.When repair strategies are continuously administered to repair these issues (even with different phrasings across sessions, as we have done in our user study), the effect can be detrimental to the user.The coach compared this to a "phone helpline", where the phrase "your call is important to us" is often repeated, but "becomes less true over time". The repair strategies we have proposed here assume the current level of disruption in state-of-the-art HRI, and they may not hold as robot capabilities further develop.Future research should aim to reduce such latency in robotic conversational interaction, so that repair strategies will gradually become less needed. 
CONCLUSIONS WITH A CRITICAL LOOK This paper contributes insights for designing repair strategies for longitudinal robotic well-being coaching, informed by real-world users' and a professional coach's perspectives.We have shown how our 4-phase design process and its outcomes can contribute toward the real-world deployment of longitudinal robotic coaches that are capable of repairing their mistakes and thus improving coaching interactions.As part of this process, we designed our initial study to compare empathic and non-empathic repairs, based on two discussions with a professional well-being coach (Sec.4).However, in our between-subjects study where a robotic coach administered these repair strategies (Sec.5), we found that users had detailed feedback beyond the question of whether empathic or non-empathic repair strategies were better.We then designed a workshop with users to give detailed feedback on repairs for a robotic coach (Sec.6), and reflecting on these user insights with the professional coach from Phase 1 (Sec.7).We encourage researchers to include such retrospective discussions with users and stakeholders in their research, especially when their intuitions are not confirmed, to better understand how robots are experienced in the real world.We would also like to direct a critical eye toward our own study.In this paper, we conducted a user study in which the robot made intentional mistakes and repairs.However, intentional mistakes rely on timing, which is difficult for social robots.Additionally, the timing of a repair can have an impact on its success [41].Thoroughly investigating the timing of the mistakes and repairs, and how this may have impacted user perceptions is out of scope of this work.We will investigate this in future research, and encourage the HRI field to reflect on how intentional social mistakes can be investigated taking into account the challenge of timing in interactions.Also, we used a LLM (ChatGPT) to generate the robot's responses to users.In some cases, the LLM spontaneously asked users for clarification when it did not "understand" the user's utterance.In this paper, we asked users to recall the mistakes and repairs they experienced during the interaction (including those spontaneously deployed by the LLM), however, further analysis on spontaneous repairs and users' perceptions of them is out of scope of this work.We invite future research to investigate the impact of spontaneous LLM repairs on users' perceptions of robotic coaches.These repair strategies have been designed specifically for well-being coaching.In previous HRI literature on game-and task-based scenarios, repair strategies focus on a robot's mistakes where a right vs wrong condition is clear due to a set goal [14,16,[28][29][30]45].In conversational contexts such as coaching, identifying the mistake itself, and consequently the necessity for and appropriate type of repair, is more complex due to factors such as user preferences.Despite this complexity, we have attempted to distil relevant insights and reflections on such repair strategies.Future research should further investigate longitudinal coaching interactions, administering repair strategies according to our insights, and further refine these in an iterative manner. 
Figure 1: A timeline of the four study phases with their goals, methodologies, and outcomes that steered the next phase. Phase 1 goal: define two sets of repair strategies to compare in a user study; methodology: two discussions with a professional coach; outcome: two sets of repair strategies, empathic and non-empathic. Phase 2 goal: examine user perceptions of the robotic coach and of the empathic and non-empathic repair strategies; methodology: between-subjects user study (n = 12) over four weeks at a workplace.

Figure 2: Longitudinal trends of quantitative measures in users' experiences of empathic and non-empathic repairs.

Table 1: Post-study quantitative measures (higher measures bolded). (C) denotes a custom question. Arrows illustrate where positive changes occurred in the empathic condition.
11,891.6
2024-01-08T00:00:00.000
[ "Engineering", "Psychology", "Computer Science" ]
Photonic-based multi-wavelength sensor for object identification A Photonic-based multi-wavelength sensor capable of discriminating objects is proposed and demonstrated for intruder detection and identification. The sensor uses a laser combination module for input wavelength signal multiplexing and beam overlapping, a custom-made curved optical cavity for multi-beam spot generation through internal beam reflection and transmission and a high-speed imager for scattered reflectance spectral measurements. Experimental results show that five different wavelengths, namely 473nm, 532nm, 635nm, 670nm and 785nm, are necessary for discriminating various intruding objects of interest through spectral reflectance and slope measurements. Objects selected for experiments were brick, cement sheet, cotton, leather and roof tile. ©2010 Optical Society of America OCIS codes: (300.6360) Spectroscopy, laser; (120.0280) Remote sensing; (280.3420) Laser sensors; (200.4560) Optical data processing. References and links 1. P. Hosmer, “Use of laser scanning technology for perimeter protection,” IEEE Aerosp. Electron. Syst. Mag. 19(8), 13–17 (2004). 2. K. Sahba, K. E. Alameh, and C. L. Smith, “Obstacle detection and spectral discrimination using multiwavelength motionless wide angle laser scanning,” Opt. Express 16(8), 5822–5831 (2008). 3. K. Sahba, K. E. Alameh, C. L. Smith, and A. Paap, “Cylindrical quasi-cavity waveguide for static wide angle pattern projection,” Opt. Express 15(6), 3023–3030 (2007). 4. K. Sahba, S. Askraba, and K. E. Alameh, “Non-contact laser spectroscopy for plant discrimination in terrestrial crop spraying,” Opt. Express 14(25), 12485–12493 (2006). 5. B. R. Myneni, F. G. Hall, J. P. Sellers, and A. L. Marshak, “The interpretation of spectral vegetation indexes,” IEEE Trans. Geosci. Rem. Sens. 33(2), 481–486 (1995). 6. A. Paap, S. Askraba, K. E. Alameh, and J. Rowe, “Photonic-based spectral reflectance sensor for ground-based plant detection and weed discrimination,” Opt. Express 16(2), 1051–1055 (2008). Introduction Prevention of unauthorized entry into buildings containing assets is a significant measure in reduction of crime and terrorism.Defence and Security organisations have recently adopted laser scanning technologies for projectile guidance, surveillance, satellite and missile tracking and target discrimination and recognition.Over the last decade, terrestrial laser scanning (TLS) has increasingly played an advanced role in exterior and interior intrusion sensing as a part of Critical Infrastructure Protection (CIP), specifically Perimeter Intruder Detection Systems (PIDS).Laser scanning technique has been tested in British Home Office -Police Scientific Development Branch (PSDB) in 2004.It was found that laser scanning has the capability to detect humans in 30m range and vehicles in 80m range with low false alarm rates [1]. Multiwavelength laser scanning is a natural progression from object detection to object identification and classification, where specific features of objects and materials are discriminated by measuring their reflectance characteristics at specific wavelengths and matching them with their spectral reflectance curves.With the recent advances in the development of high-speed sensors and high-speed data processors, the implementation of multi-wavelength laser scanners for object identification has now become feasible. 
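The discrimination method actually used by this sensor is based on spectral slopes, as described later; purely to illustrate the idea of matching measurements against stored spectral reflectance signatures, a toy sketch is given below. The wavelength set matches the sensor, but the reference reflectance values and the `identify` helper are invented for illustration.

```python
# Toy illustration of reflectance-signature matching: compare a measured
# reflectance vector at the sensor wavelengths against reference signatures.
# The reference numbers here are placeholders, not measured values.
import numpy as np

WAVELENGTHS_NM = [473, 532, 635, 670, 785]

REFERENCE_SIGNATURES = {            # hypothetical relative reflectance values
    "brick":     np.array([0.10, 0.15, 0.30, 0.35, 0.45]),
    "roof tile": np.array([0.08, 0.10, 0.12, 0.11, 0.10]),
    "cotton":    np.array([0.50, 0.45, 0.40, 0.42, 0.48]),
}

def identify(measured: np.ndarray) -> str:
    """Return the reference material whose signature is closest (Euclidean distance)."""
    return min(REFERENCE_SIGNATURES,
               key=lambda name: np.linalg.norm(REFERENCE_SIGNATURES[name] - measured))

# Example: identify(np.array([0.09, 0.14, 0.28, 0.33, 0.44])) -> "brick"
```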
While holographic gratings can be used to generate multiple single-wavelength laser spots for intruder detection, the same cannot be implemented for multi-wavelength laser scanning.The reason being laser beams of different wavelengths will be diffracted from a holographic grating at different angles, hence making this technique impractical for maintaining a highdegree of overlapping between beams of different wavelengths projected at a particular spot [2].A two-wavelength photonic-based sensor for object discrimination has recently been reported [2], where an optical cavity is used for generating a laser spot array and maintaining adequate overlapping between tapped collimated laser beams of different wavelengths over a long optical path [2,3].However, the main drawback of this approach is the limitations in the number of objects that can be discriminated [2]. By increasing the number of wavelengths at which objects exhibit different optical characteristics, the number of objects and materials that can be identified and discriminated increases significantly.Figure 1 shows a typical reflectance spectra obtained by using two different spectrometers, visible (from 400 to 850nm) and infrared (from 850 to 2100nm) spectrometers for cotton, soil, vegetation and clear water.The reflectance spectrum of a material can be used as a unique signature that identifies this material from other materials.This is the basis for the multiwavelength remote sensing for object identification.In this paper, novel multiwavelength photonic-based sensor architecture for object discrimination and identification is proposed and demonstrated.The sensor architecture is based on the projection of pulsed laser beams of different wavelengths and the processing of the recorded reflected intensity data to achieve identification and discrimination of five objects commonly encountered in intruder-detection scenarios, namely, brick, cement, cotton, leather and roof tile.Through the use of a novel combination of the normalized difference vegetation index (NDVI) and slopes of the reflectance spectrum we develop a demonstrator that can identify objects on a limited basis.Results clearly show that by optimizing the wavelength values, different objects do differ in at least one region of the reflectance spectrum and that a multiple laser array can detect differences between objects. Multiwavelength MicroPhotonic Sensor Architecture The schematic diagram for the proposed photonics-based multiwavelength sensor architecture for object identification is shown in Fig. 2. It is comprised of a laser combination module, a multi-spot beam generator, an area image sensor and collecting lens.The laser beams are modulated using custom-made electronic drivers integrated on a printed circuit board.This result in laser pulses of different wavelengths illuminating the objects under investigation sequentially.Object discrimination and identification is achieved by recording and processing the intensities of the different laser beams reflected off the various spots illuminating the object, for each illuminating wavelength.The bench top set up in the laboratory used to conduct the experiment is shown in Fig. 3. Laser combination module The laser combination module has two sections, as shown in Fig. 
4 Section 1 is a laser combination module which includes three laser diodes of wavelengths 635nm, 670nm and 785nm, respectively, two free space beam combiners and a constant-current laser driver.Section 2 is another laser combination module that combines two lasers of wavelengths 473nm and 532nm driven by a constant-power laser driver.This arrangement of laser sources and optical combiners enables the generation, overlapping and polarization alignment of five The laser diodes are switched sequentially using laser drivers that are switched via a custom-made electronic printed circuit board.The optical output power of each laser diode is adjusted via trim-pots integrated on the laser driver circuits.The output optical power levels for the 473nm and 532nm lasers were set to 8mW and 7mW, respectively, while the other three lasers had equal output power levels of 6mW.The divergence for all output collimated laser beams was less than 1.5 mrad.The output power of each laser beam from the cavity was measured using a free-space optical power meter.The active area of the detector has a diameter of 5mm.This power was mounted onto a linear optical stage, which enables precise alignment of the laser beams to the centre of the detector's active area and accurate measurements of the laser beam intensities which are displayed in Fig. 5. Multi-spot beam generator The output laser beam from the laser combination module passes through the custom fabricated multi-spot beam generator for object illumination.This beam generator is made of BK-7 glass with inner and outer interface radii of R 1 and R 2 , respectively.θ is the angle of curvature of the multi-spot beam generator (Fig. 6(a)).The rear side of the glass is coated with a highly reflective (R≥99.5%)and the front side with a partial transmission (T≥13%) thin film.An uncoated 10mm entrance and exit windows are used at both ends of the rear side of the glass medium.Hence, an input collimated optical beam undergoes multiple reflections within the optical cavity, and every time it hits the front surface a small fraction (around 13%) of its optical power is transmitted, thus projecting a laser spot array onto an object sample.This multi-spot beam generator has a 45° curvature and generates 20 spots when an incident beam is injected through the entrance window.The number of outgoing beams depends on the incident angle of laser beam. Image sensor The intensities of the laser beams reflected off the spots illuminated by the beam generator are captured by an area image sensor that images the reflected laser beams sequentially.Figure 7 shows the spectral response of the image sensor used in the experiments.This particular imager exhibits high sensitivity over the wavelength range (470 -785nm).An imaging lens is usually used in conjunction with the imager sensor in order to map the intensities of the beams scattered from the different laser spot into the imaging plane.For the imager used in the experiments, a 0.5-inch interline transfer CCD imager was employed having 768(H) × 494(V) pixels of size 8.4µm × 9.8µm.A C-mount TV lens of focal length ƒ = 12.5mm was used to collect the light scattered from the illuminated laser spots.The estimated CCD acquisition time is 200µsec and the estimated over all acquisition time is 2msec.The lens iris was adjusted appropriately to avoid saturation of the imaged laser spot array.The images from the camera are digitized in 12-bit form using a Spiricon frame grabber circuit board. 
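The peak intensity of each imaged spot is later extracted by fitting a Gaussian to a one-dimensional pixel profile, as described in the discrimination method below. The authors used a Matlab add-on toolbox for this fit; the SciPy-based sketch here, including its initial-guess heuristics and the `peak_intensity` helper, is only an assumed equivalent, not the original code.

```python
# Minimal sketch (not the authors' Matlab code): estimate the peak intensity of an
# imaged laser spot by fitting a non-normalized Gaussian to a single pixel row.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma, offset):
    return amplitude * np.exp(-((x - center) ** 2) / (2.0 * sigma ** 2)) + offset

def peak_intensity(profile: np.ndarray) -> float:
    """Fit a Gaussian to a 1-D intensity profile (one row of 12-bit pixel values)
    and return the fitted peak value (amplitude plus background offset)."""
    x = np.arange(profile.size)
    p0 = [profile.max() - profile.min(),   # amplitude guess
          float(np.argmax(profile)),       # center guess
          max(profile.size / 10.0, 1.0),   # width guess
          float(profile.min())]            # background guess
    params, _ = curve_fit(gaussian, x, profile.astype(float), p0=p0)
    amplitude, _, _, offset = params
    return amplitude + offset

# Example: profile = image[row_through_spot_centre, :]; I_peak = peak_intensity(profile)
```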
Object discrimination method

The object discrimination method is based on determining the slope of the reflectance at the five wavelengths [2,4-6]. The four slope values, S1, S2, S3 and S4, are defined as

S_n = (R_λ(n+1) - R_λ(n)) / (λ(n+1) - λ(n)),  n = 1, 2, 3, 4,  (1)

where λn is the wavelength of the n-th laser diode in nanometers, R_λ = I_λ/P_λ is the calculated reflectance, I_λ is the peak intensity of a beam spot imaged by the image sensor (usually represented by a 12-bit digital number (DN)) and P_λ is the measured optical power, in watts, for each spot generated by the optical structure. The peak intensity values, I_λ, were obtained by fitting a non-normalized Gaussian curve to the one-dimensional intensity profile of the imaged laser spot. The intensity profile is a row of pixels crossing the middle of the laser spot, along the x-axis. The Gaussian curve is fitted to the intensity profile of the laser spot to obtain its peak intensity using the Matlab add-on toolbox. An example of such a fitted Gaussian curve is shown in Fig. 8.

Experimental results and discussion

Five different objects, namely brick, cement sheet, roof tile, cotton and leather, were used to demonstrate the proof-of-concept of the photonic-based sensor in the laboratory. Each object was first characterized with two different commercially available (visible and near-infrared) spectrometers. The experimental setup for measuring the reflectance spectrum is shown in Fig. 9. The reflectance of a roof tile is generally low but has a peak around 600nm. For a brick, the reflectance spectrum is piecewise linear with a higher slope over the visible part of the spectrum. The reflectance of cotton has a peak at 473nm and several deflection points within the visible and near-infrared parts of the spectrum. On the other hand, the reflectance of leather is generally low for visible wavelengths and exhibits several peaks over the infrared part of the spectrum. For the cement sheet, the reflectance increases almost monotonically with increasing wavelength. The measured spectral reflectance curves for all objects are shown in Fig. 10. Note that these reflectance spectra were obtained by using two different spectrometers, namely a visible spectrometer of spectral range 400-850nm and an infrared spectrometer of spectral range 850-2100nm. To identify the above-mentioned objects, specific wavelengths, namely 473nm, 532nm, 635nm, 670nm and 785nm, were selected for two main reasons. Firstly, the spectral reflectance slopes at the different wavelengths are significant and do not overlap simultaneously, and secondly, these wavelengths are synthesized using commercially available lasers.

The object samples were placed at 2m from the optical cavity and illuminated with an array of coplanar laser beams emitted through the multiwavelength photonic-based sensor, and the reflected intensities from these objects were measured as illustrated in Fig. 2. The average values of slopes S1, S2, S3 and S4 for the sample objects, calculated using Eq. (1), are shown in Fig. 11. Each object is distinguishable in at least one slope. The measured standard deviations of the slope values are shown in Fig. 12. Clearly, no simultaneous overlapping between slope values of different objects was present, demonstrating accurate discrimination of the various objects. For example, if we look at the average slopes for a cement sheet, shown in Fig.
12, we do not see any overlapping with other objects in slopes s1, s3 and s4, and hence the cement sheet can be discriminated from other objects.For cotton, there is no overlapping in slopes s1, s2 and s4 with the other objects under investigation, while overlapping is seen in slope s3 with roof tile.For brick, there is no overlapping in slopes s1, s2 and s3, while overlapping is seen in slope s4 with roof tile and leather.For roof tile, there is no overlapping in slopes s1, while overlapping is seen in slopes s2, s3 and s4 with other objects.For leather, there is no overlapping in slopes s1 and s2 with any other object.Note that the variances of the slopes are mainly due to fluctuations in the response of the image sensor and the optical intensities of the laser diodes.The nonoverlapping slopes shown in Fig. 12 demonstrate the ability of the proposed novel multiwavelength photonic-based sensor to identify and discriminate objects frequently encountered in intruder-detection scenarios. Conclusion and future work A novel five-waveband laser scanner for intruder detection and object discrimination has been proposed and its principle has been demonstrated.The optical reflectance properties of various natural objects commonly encountered in the military perimeter such as brick, cement sheet, roof tile, cotton and leather have been measured, and the optimum wavelengths necessary for object identification and discrimination has been determined.Analyses of the spectral characteristics for the selected objects have shown that the lasers with 473nm, 532nm, 635nm, 670nm and 785nm wavelengths are the most appropriate for identification and discrimination of the selected objects.Object samples have been illuminated with an array of collimated laser beams emitted through a multi-spot beam generator integrating lasers, free space beam combiners and an image sensor, and the reflectance properties of the objects under investigation have been measured.These measurements were carried out in the laboratory with ambient fluorescent light.Since our initial goal is a proof-of-concept demonstration, the experimental set up was done in laboratory conditions.Note however, that the performance of the laser scanning system in the field will be reported elsewhere.Discrimination between objects has been demonstrated by determining four spectral slopes at the selected wavelengths.Spectral slope measurements have confirmed no simultaneous overlapping between slope values of the different objects, making the identification of the selected objects accurate even in the presence of laser power fluctuations. It is important to notice that the addition of too many lasers increases the cost, bulkiness and slows the operating speed of the sensor.Future work will focus on determining the accuracy of the sensor for a broader range of objects.The end goal of this research project is to design a laser scanning system capable of identifying a wide range of objects in the field and attaining a detection range greater than 30m. Fig. 1 . Fig.1.Typical measured reflectance spectrum of cotton soil, vegetation and clear water.The reflectance spectrum of a material is a unique signature that can be used for material identification. Fig. 2 . Fig. 2. Schematic diagram for the experimental set up. Fig. 3 . 
Fig.3.Experimental set up for object discrimination.Objects are illuminated with laser beams at varying wavelengths along one optical path, striking the same spot on the object.By measuring and processing the reflected light intensities for each wavelength, a large variety of objects can be identified. # 122550 -$15.00USD Received 12 Jan 2010; revised 3 Feb 2010; accepted 3 Feb 2010; published 5 Feb 2010 (C) 2010 OSA collimated laser beams of different wavelengths.Polarization alignment for all laser beams is necessary in order to minimize the impact of the polarization-dependent scattering loss of the object under investigation.All combined laser beams are collimated at a diameter of 4mm. Fig. 4 . Fig. 4. Laser beam combination module with five wavelengths and four beam combiners.This arrangement generates five collimated and overlapped laser beams with the same polarization orientation. Fig. 5 . Fig. 5. Measured output optical power for each laser beam after passing through the optical cavity. Fig. 6 . Fig. 6.The multi-spot beam generator: (a) schematic diagram showing 90° curvature; (b) photograph of the fabricated product showing 45° curvature.The front and rear surface are coated with semitransparent and highly reflective thin films, respectively. Fig. 7 . Fig. 7. Spectral response of the image sensor used for the experiments (as per manufacturers specifications). Fig. 9 . Fig. 9. Experimental setup for measuring the reflectance spectra of the different sample objects. Fig. 10 . Fig. 10.Typical measured spectral response of sample objects used for experimentation.
3,851.4
2010-02-15T00:00:00.000
[ "Physics" ]
Reflections on Rotational Osteotomies around the Patellofemoral Joint Torsional abnormalities of the femur represent a significant risk factor for patellar instability or patellofemoral complaints. Although their clinical implication has been demonstrated, there is still a debate going on about different aspects. These include, especially, the various methods of measurements with a wide range of physiologic values, the indication or clear recommendation for surgical correction, and the site of the rotational osteotomy. Nevertheless, good subjective and objective functional results were reported after femoral rotational osteotomies. This is mostly not a review of the literature, but a collection of personal thoughts and observations. Introduction Why should we consider performing a rotational osteotomy at all? This question is controversially debated in expert circles and no equal opinion exists. There is still a general lack of knowledge on this theme. Nevertheless, increased interest and the higher number of studies published in the latest literature underline the importance of this topic. Various factors became strong arguments in the last 30 years to consider rotational osteotomies. Accordingly, a profound discussion is raised and there is a strong need to assess different perspectives in detail. Femoral torsion (FT) is a significant parameter in knee and hip disorders. Therefore, both joints must be integrated into the complete assessment. The patellofemoral tracking is influenced by the complex interaction of the skeletal geometry, soft tissues, and neuromuscular control [1]. The patellofemoral and femorotibial joints act synergistically and ensure undisturbed movements in the knee joint. Under normal conditions, the high loading and shearing forces acting on the knee and the associated soft tissues are well controlled and optimally distributed. Pathologic limb factors may change these interactions and cause maltracking of the extensor mechanism and along the entire leg between the hip and foot. Torsional deformities influence the patellofemoral kinematics and cause altered vectors and forces acting on the patellofemoral joint (PFJ). This may lead to patellar instability, cartilage failure with secondary osteoarthritis due to increased patellofemoral stress, and musculotendinous insufficiency [1,2]. Therefore, they should be depicted and addressed for the appropriate treatment. However, rotational deformities of the lower limb are often missed or ignored due to difficulties in their assessment and the necessary treatment. In recent years, we have observed an increasing number of patients with combined hip and patellofemoral problems. In some cases, the MRIs of hip and knee showed normal conditions. The only pathologic finding was an overall increase in femoral antetorsion. The clinical examination of the hip confirmed the pathologic finding of an increase in internal rotation. Complaints on both locations disappeared after supracondylar external rotation osteotomy. In addition, torsional abnormalities were recently also integrated into the concept of femoroacetabular impingement, as they affect the impingement-free hip range of motion [3]. Excessively high FT may cause posterior extraarticular ischiofemoral impingement (conflict between ischium and lesser or major trochanter) with posterior and gluteal pain. 
Decreased FT may cause an anterior impingement resulting in anterior groin pain with labrum lesions (subluxation movements) and snapping psoas tendon (hyperextension of the tendon). By paying more attention to this, we have observed many patients suffering from patellofemoral complaints and the corresponding hip pain. Accordingly, both joints must be assessed in detail for a precise diagnosis. Considering these aspects of torsional abnormalities, numerous questions arise: Which criteria and parameters are helpful for a better understanding of this subject? What are the points of restraint or hesitation? Which observations from daily practice are valuable? What subjects are/remain unclear?

Clinical Evaluation

Which observations should put us on track? Torsional deformities of the femur were recognized as a possible cause of patellofemoral problems. A long history with ongoing symptoms, unsuccessful conservative treatment over a long period, functional disability, and the clinical examination are important factors for a primary diagnosis.

General Examination

The clinical diagnosis of abnormal FT may be difficult and a structured evaluation is needed. First, a specific assessment of the knee with focus on the PFJ and the soft tissue structures is performed (instability tests of the patella, tightness, patella position, patella height, muscle conditions, contractures). Secondly, the clinical evaluation consists of the measurement of the hip range of movement in prone and supine positions. In patients presenting with pathologically high femoral antetorsion, increased internal and decreased external hip rotation in 90° of hip flexion is documented, often combined with a positive posterior extraarticular impingement sign with gluteal pain. Therefore, both anterior and posterior impingement tests (FABER test) should be performed [3]. In addition to the clinical examination of the PFJ and the hip, an assessment of the foot position is necessary.

Foot Position

A physical examination in a standing position shows that patients with increased femoral antetorsion, in most cases, have inward-pointed knees and the knee joint faces medially when the foot is in a normal position (Figure 1). Discomfort is often noted during walking and squatting due to in-toeing. In-toeing of the foot is reported as a diagnostic clinical sign with high specificity for increased FT [4]. However, it must be noted that many of the patients with increased FT walk with a normal foot position [4]. This can lead to an underestimation or misdiagnosis of abnormal FT. Increased femoral antetorsion may be compensated by increased external tibial torsion (TT), which rotates the leg outward. These patients show medially faced knee joints with a normal foot position. They are likely to have a normal foot progression angle (FPA) due to these compensatory effects of FT and TT [4,5]. The FPA represents the angle of out-toeing of the foot compared with the line of gait progression, with normal values between 5 and 15°. According to this, the interplay between FT and TT for the final position of the foot must be considered.
Special attention must be given to patients suffering from additional hip pain caused by posterior extra-articular ischiofemoral impingement. They try to avoid the impingement pain by active external femoral rotation. In this compensating way of walking, they experience less hip discomfort. However, on the other hand, this external rotation could provoke patellofemoral imbalance [6]. For a clinical interpretation, if a patient presents with in-toeing, there is a high probability that increased FT will co-exist. However, abnormal FT can basically be combined with any type of gait pattern of the foot. If in-toeing was used as the only diagnostic criteria for increased FT, a high percentage of patients with increased FT would be missed [4]. Therefore, measuring FT with CT scans or MRI is strongly recommended in all patients with suspicion for increased femoral antetorsion, even in the absence of in-toeing of the foot. Functional Impairments FT has a direct influence on the amount of abductor strength and the gait pattern. Objective functional impairments are of special interest. People with increased femoral antetorsion often present with external rotator and hip abductor weakness. They have difficulties with functional activities such as stair climbing, standing from sitting, squatting, and jumping. In addition, the pathologic internal rotation causes lateral forces acting on the patella and with this an "increased dynamic Q angle". This excessive lateral force vector may cause patellofemoral pain due to the increased stress (cartilage/bone) and/or lateral patellar subluxation with instability symptoms. Abnormal Skeletal Geometry When are bony abnormalities clinically significant? The major argument to perform a rotational osteotomy in patients presenting with patellofemoral complaints is a significant pathologic femoral antetorsion [1,2,7]. Therefore, patients with a high suspicion of torsional abnormalities during a clinical evaluation need precise measurements of the total FT. According to the possible great clinical importance of abnormal skeletal geometry, it is necessary to have a closer look at the physiological variations and the different values of specific measurement methods. Measurements The correct quantification of FT is essential for diagnosing femoral torsional abnormalities, especially for surgical decision-making and planning of corrective osteotomies [7]. CT-based, segmental multi-level CT assessment, MR-based, and three-dimensional reconstruction measurements are available to assess FT more precisely [8][9][10]. Therefore, the most important question is: What is a normal FT? The normal range of physiologic total FT is reported between 10 to 25 • [3]. Values of more than 25-30 • of total femoral antetorsion are considered as increased, decreased FT is defined as <10 • [1,[11][12][13][14][15]. Differences between the neck (proximal), mid (diaphysal), and distal femoral torsion were established [8][9][10]. In normal controls, a negative correlation between the neck torsion and shaft torsion was found, suggesting that the internal torsion of the neck is accompanied by an external torsion of the shaft [9]. In subjects with a high-torsion, a significant increase in the neck internal torsion in combination with a high lack in the shaft external torsion was described [9]. An evaluation of the segmental torsion of the femur allows a more detailed analysis of femoral alignment [8]. All three levels contribute to the total FT [9]. 
The values of FT differ considerably depending on the measurement method and the location of measurement. Using CT-based measurements, the Murphy method, with the most distal definition of the proximal femoral neck axis, yields high values of mean FT (28°), whereas the Lee method, with the most proximal definition of the neck axis, yields low values (11°). The Murphy method is reported to reflect the true anatomic FT most closely [3]. It is crucial to state the applied method for a correct assessment of femoral torsion. Disregarding the differences among methods could lead to misdiagnosis in surgical planning and/or postoperative control. The same measurement method must be used to compare pre- and postoperative conditions. Personal clinical experience is helpful in choosing the preferred measurement method. The values of tibial torsion (TT) must also be considered for a complete assessment of lower limb torsion. This is of special interest for the clinical examination (foot position with FPA). TT is assessed by measuring the rotational angle of the proximal tibia (line along the posterior tibial plateau) relative to the distal tibia (straight line through the centers of the medial and lateral malleoli). Normal TT is defined as between 25° and 40° [16]. Values of more than 40° of TT are considered increased, and decreased TT is defined as <25°. The femorotibial index is then calculated as TT minus FT [17,18]. For clinical interpretation, there is no clear recommendation as to when correction of pathologic FT is indicated, considering the wide range of physiological femoral rotation values. In our experience, if total femoral antetorsion exceeds the normal limit of 25° and no additional pathologic morphology in the PFJ is documented, femoral external rotational osteotomy should be considered; if it exceeds 30°, it is recommended [1,8,11] (Table 1). This applies to all patients with long-lasting symptoms around the knee joint, clear pathologic findings during clinical evaluation, and unsuccessful conservative treatment. • Clear correlation between increased femoral antetorsion and patellofemoral complaints. • Increased total femoral antetorsion documented by imaging. • Persisting symptoms with failed conservative treatment over a long period. Considering the direct interplay between FT and TT, combined femoral and tibial rotational corrections at the same time should be avoided. It is recommended to correct one site first and then to repeat the limb alignment measurements with the same method. In this way, over- or undercorrections can be avoided. Rotational Osteotomy What should we consider when performing a rotational osteotomy? Site of Correction There is still debate about the ideal site for the osteotomy, which can be performed proximally (intertrochanteric level) or distally (supracondylar level) [8]. When only the total FT is measured, no differentiation can be made as to whether a torsional deformity is located in the proximal, diaphyseal, or distal aspect of the femur. Knowing the segment of the femur in which the torsional deformity is localized could be helpful. If an osteotomy is performed at the site of the deformity, normal anatomy could probably be better restored [8]. However, the clinical relevance of correction osteotomies performed at the site of the pathologic segment has not been proven to date. Earlier, corrections were performed at the intertrochanteric level to avoid an acute angular change of the quadriceps muscle.
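For readers who prefer a computational checklist, the decision rules quoted above can be expressed in a few lines. The following Python sketch is an illustration only; the thresholds are those quoted in the text, and the function names are our own. It classifies FT and TT against their normal ranges and computes the femorotibial index:

```python
# Illustrative sketch: classify femoral torsion (FT) and tibial torsion (TT)
# against the normal ranges quoted in the text (FT 10-25 deg, TT 25-40 deg)
# and compute the femorotibial index (TT minus FT). Thresholds and function
# names are assumptions for illustration, not a validated clinical tool.

def classify_torsion(value_deg: float, low: float, high: float) -> str:
    if value_deg < low:
        return "decreased"
    if value_deg > high:
        return "increased"
    return "normal"

def femorotibial_index(ft_deg: float, tt_deg: float) -> float:
    return tt_deg - ft_deg

if __name__ == "__main__":
    ft, tt = 38.0, 42.0   # example measurements in degrees
    print("FT:", classify_torsion(ft, 10, 25))     # -> increased
    print("TT:", classify_torsion(tt, 25, 40))     # -> increased
    print("femorotibial index:", femorotibial_index(ft, tt), "deg")
```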
In addition, patients with a significant increase in neck internal torsion (documented by segmental measurements) could be corrected at the proximal level, considering the differences in torsion across the three segments [9]. Furthermore, surgeons were long accustomed to performing proximal osteotomies. In the last two decades, supracondylar osteotomies have been used increasingly in order to correct additional pathologies at the patellofemoral joint (trochleoplasty, distalization of the patella, cartilage repair) or the soft tissues (MPFL reconstruction, ligament balancing) at the same time [1,7]. Meanwhile, isolated external rotational osteotomies are routinely performed at the supracondylar level without complications such as muscular problems or complaints due to the initial incongruity of the rotated bones. The major advantages of the supracondylar correction include the possibility of performing the necessary additional interventions at the same site, for example, arthroscopy, intra-articular treatments, MPFL reconstruction, and, finally, soft tissue balancing [1,5,7]. For clinical relevance, both intertrochanteric and supracondylar procedures externally rotate the trochlea underneath the patella, restore correct stability, decrease the compression in the lateral trochlea, and reduce patellofemoral (and hip) pain [1,5,12,17]. The site of the osteotomy is decided based on personal preference. The rotational correction for maltorsion of the tibia is performed below the tibial tubercle in the proximal, mid, or distal tibia. The use of internal or external fixation devices is possible. Surgical Technique A lateral subvastus approach is used for the supracondylar osteotomies [1,7,12]. Two Kirschner wires are placed anteriorly 3–5 cm above the metaphysis to monitor the planned angle of correction. The osteotomy is performed horizontally with an oscillating saw, and the distal femur is then externally rotated to correct the increased internal rotation until the two Kirschner wires are positioned as indicated by the preoperative planning. Fixation of the osteotomy is performed using a locking screw osteosynthesis plate (Figure 2a,b). The final step consists of balancing the soft tissues around the PFJ. Postoperatively, partial weight bearing with 20 kg for 6 weeks is recommended. Combined Bony Deformities Patients with patellofemoral complaints may have complex bony deformities in combination with torsional abnormalities, varus/valgus deviation, and a dysplastic trochlea [7,12,15]. In these cases, all underlying pathologies should be addressed to restore the correct limb alignment [7,12]. The same lateral approach is used, in which the individually adapted surgical procedure is performed. Clinical Experience Femoral rotational osteotomies to correct deformities have been performed increasingly, with encouraging results [3,19,20]. Distal femoral derotational osteotomies are an excellent treatment option in patients with patellofemoral instability or pain [1,5,7,12].
Moreover, combined surgical interventions showed good subjective and objective functional results in most cases [5,7,12]. A significant decrease in pain, increased comfort when walking and squatting, as well as improved function were noted [7]. The strongest argument for performing an external rotation osteotomy is the fact that it corrects the underlying pathology and restores the horizontal limb alignment (Table 2). • Clinical experience with encouraging results after femoral derotation osteotomies. • Good clinical and functional results after secondary rotational osteotomy in failures after other patellofemoral interventions. It improves not only patellofemoral kinematics, and with this patellar stability, but also the involved muscle forces, soft tissue restraints, and hip movements. Good subjective and objective functional results are reported in most cases. Reasons for Surgical Failures What may lead to an unsuccessful clinical outcome? We have seen that many surgeons hesitate or refuse to perform rotational osteotomies, even though the clinical assessment and imaging documented pathologic femoral antetorsion. The major concerns are summarized in Table 3. In these cases, other surgical interventions were preferred. Accordingly, we have observed numerous patients with remaining complaints after a variety of surgical interventions to treat patellofemoral problems. In some of these cases, a precise assessment of the skeletal morphology revealed pathologic femoral antetorsion as the single or most important cause of the patellofemoral problems, which had not been addressed. This means that these patients had a clear indication for derotational osteotomy, but other interventions were performed. Only a secondary external rotation osteotomy at the supracondylar level could finally resolve the problem. Occasionally, it was even necessary to reverse the primary procedures at revision. Rather than correcting the pathologic rotation, other interventions, such as MPFL reconstruction, medial soft tissue plication, medialization osteotomy of the tibial tubercle, or raising of the lateral trochlea, were performed. Such attempts to compensate for the primary problem without a precise clinical evaluation and the corresponding imaging measurements may not only result in unsuccessful outcomes but may also cause additional complaints. Medialization of the tibial tubercle leads to increased external tibial torsion and may therefore exacerbate the symptoms [5]. MPFL reconstruction risks pulling the patella too far medially, with increased compression. Raising of the lateral condyle may cause lateral hypercompression. • Different available measurement methods with a wide range of values. • Differences between neck, mid, and distal femoral torsion and their contribution to the total femoral torsion. • Unclear recommendation for proximal or distal osteotomy. • Invasive surgery with a locking plate and plate removal. • Long rehabilitation time. It is most important that the chosen surgical procedure aims to eliminate the cause of the patellofemoral problem. Therefore, a detailed analysis to identify the relevant factors is crucial. The surgical intervention should aim to regain largely normal anatomic and biomechanical conditions. Recently, abnormal femoral torsion was found in about 23% of patients suffering from patellar instability, thus representing an important risk factor with clinical implications [9]. We have also observed patients with multiple causes and factors for patellofemoral complaints.
Increased femoral antetorsion can be combined with trochlear dysplasia, insufficiency of the MPFL, patella alta, pathologic tibial torsion, increased tibial tuberosity–trochlear groove distance, pathologic coronal alignment (valgus/varus), and/or different hip abnormalities [5,7,12,15]. Occasionally, only one risk factor for patellar instability is addressed during surgery (Figure 3a–c). This may lead to persisting complaints. For clinical relevance, derotational osteotomy should be considered first in patients with increased femoral antetorsion, while other compensatory interventions are avoided whenever possible. If multiple risk factors for patellar instability or pain are documented, a meticulous analysis of what should be corrected is necessary. We have learned that combined interventions are sometimes necessary. Alternatives Are there alternatives to rotational osteotomies? The association with the described functional impairments offers the possibility of structured training to improve functional performance. Improved dynamic hip strength and power might improve the objective function of the whole limb and patellofemoral tracking. Therefore, rehabilitation should focus on hip extension, hip external rotation, and hip abduction to control femoral rotation movements. This includes progressive resistance training for strength, power, and muscular endurance. In addition, core muscle control with trunk stability must be addressed. Conclusions and Observations Rotational osteotomies are indicated for patients with a significantly pathologic total femoral torsion and persistent patellofemoral complaints after failed conservative therapy over a long period. A supracondylar external rotation osteotomy is recommended if the total femoral antetorsion value exceeds 25°. Distal femoral external rotation osteotomy is an individually adapted surgical procedure for restoring the horizontal limb alignment. It improves patellar stability and patellofemoral kinematics and yields good subjective and objective functional results. Funding: This research received no external funding.
4,856.6
2021-01-27T00:00:00.000
[ "Medicine", "Engineering" ]
Particulated, Extracted Human Teeth Characterization by SEM–EDX Evaluation as a Biomaterial for Socket Preservation: An In Vitro Study The aim of the study was to evaluate the chemical composition of crushed, extracted human teeth and the quantity of biomaterial that can be obtained from this process. A total of 100 human teeth, extracted due to trauma, decay, or periodontal disease, were analyzed. After extraction, all the teeth were classified, measured, and weighed on a microscale. The teeth were crushed immediately using the Smart Dentin Grinder machine (KometaBio Inc., Cresskill, NJ, USA), a device specially designed for this procedure. The tooth particles obtained were 300–1200 microns in size, sorted by sieving through a special sorting filter that divided the material into two compartments. The crushed teeth were weighed on a microscale, and scanning electron microscopy (SEM) evaluation was performed. After processing, 0.25 g of human tooth produced 1.0 cc of biomaterial. Significant differences in tooth weight were found between the first and second upper molars compared with the lower molars. The chemical composition of the particulate was clearly similar to natural bone. Scanning electron microscopy–energy dispersive X-ray (SEM–EDX) analysis of the tooth particles gave mean results of Ca% 23.42 ± 0.34 and P% 9.51 ± 0.11. Pore size distribution curves expressed the intraparticle pore range as one small peak at 0.0053 µm. This result is in accordance with the helium gas pycnometer findings; the augmented porosity corresponded to interparticle spaces, and only 2.533% corresponded to intraparticle porosity. Autogenous tooth particulate biomaterial made from extracted human teeth may be considered a potential material for bone regeneration due to its chemical composition and the quantity obtained. After grinding the teeth, the resulting material increases in volume by up to three times, such that two extracted mandibular lateral incisors will provide a sufficient amount of material to fill four empty mandibular alveoli. The tooth particles present intra- and extra-particle porosity of up to 44.48% on pycnometer evaluation, which may increase the blood supply and support slow resorption of the grafted material, supporting healing and replacement resorption to achieve lamellar bone. After SEM–EDX evaluation, it appears that calcium and phosphates are still present within the collagen components even after the particle cleaning procedures that are conducted before use. Introduction In recent years, the physicochemical properties of biomaterials have been analyzed extensively to identify characteristics that will maximize the clinical outcomes of bone defect repair. In this context, two characteristics, grain size and the biomaterial's composition, directly influence the biomaterial's resorption activity and the speed of resorption [1]. A bone replacement material must have "bimodal" behavior which, in the early stages of differentiation, allows osteoblasts to build bridges between grains of different sizes and integrate with other osteoblasts, supporting both proliferation and differentiation. New bone formation is stimulated by the activation of mesenchymal stem cells on the rough surfaces of biomaterials [2][3][4]. The ultimate goal is the union of completely differentiated osteoblasts, which will support the production of the bone matrix.
This requires a bone replacement material with a porous structure including macropores, micropores, and nanopores [5,6]. In terms of roughness and external porosity, the surface of the bone replacement material's particles directly influences the attachment of solvents to the surface of the biomaterial, allowing advanced cell colonization and the process of biomaterial remodeling to commence. The presence of macropores and micropores in the particles of the graft biomaterial has been shown to be a very important criterion, allowing blood vessels to enter and favoring bone growth through osteoconduction within the pores. The structural properties and the physical and chemical characteristics of composite ceramics have been seen to affect their behavior in vivo, whether dependently or independently, whereby the outcome will depend on the individual bone repair parameters of the case. Synthetic scaffolds can be used in regenerative and reconstructive surgery to treat bone defects. Biomaterials consisting of collagen and ceramic material are typically evaluated in terms of the proportions of liquid, collagen, and hydroxyapatite they contain. Porcine hydroxyapatite (HA) has lower crystallinity due to the presence of collagen in its composition. Changing the size, porosity, and crystallinity of each HA-based bone substitute material will influence the integration of the biomaterial within the implantation site and new bone formation [7,8]. To allow tissue penetration into the pores (and thus bone repair), the pores must be greater than 100 µm [9][10][11][12][13]. The most commonly used biomaterials are bioceramics based on calcium phosphate (Ca-P). The Ca-Ps have a composition and structure highly similar to the bone mineral phase, which presents osteoconductive properties and thus stimulates bone formation. Among the various materials assayed in recent years, tricalcium phosphate (TCP) has shown promising results in animal experiments and clinical studies [14][15][16]. At least one case series and several animal studies have reported promising results from a technique in which extraction sockets were augmented with autologous, particulate, mineralized dentin placed immediately after tooth extraction [17][18][19]. Although the supply of human teeth is, in fact, limited, when an extraction takes place the tooth is naturally available and should be used to correct the damage caused by the extraction and the subsequent lack of function, which leads to extensive resorption. To perform this procedure, the Smart Dentin Grinder™ machine was specially designed to crush, grind, and classify extracted teeth into particles of different sizes. A special dentin chemical cleanser is applied for 5 min to eliminate bacteria from the tooth, and immediately afterwards the tooth is washed twice with PBS. This novel procedure can be performed with any extracted tooth. Technically speaking, an autologous material can be returned to its donor without any treatment. However, in the protocol that we have followed, there are multiple steps in which strong disinfectant agents are used that are very effective in removing any bacteria/viruses and many other biohazards that might be present. Although this is true for allografts, it is not the case for autografts. Using our own biology in order to treat ourselves is not subject to ethical considerations.
The aim of this study was to determine the chemical composition of, and the amount of biomaterial obtained from, crushed human teeth prepared according to the manufacturer's protocol, in order to fill empty alveoli. Materials and Methods The study protocol was approved by the Catholic University of Murcia Ethics Committee (UCAM; registration number 6781; 21-07-2017). Human teeth were extracted from 50 patients aged between 36 and 65 years, who received no financial compensation. All the patients signed informed consent forms to donate their teeth for use in the study. The teeth were extracted because of trauma, decay, or periodontal disease that had caused damage to one or two teeth in the upper maxilla and/or mandible. A total of 100 teeth were collected from 50 donors. The teeth were cleaned using straight fissure carbide burs, the periodontal ligament was trimmed, and the teeth were dried with an air syringe. Each tooth was immediately classified, measured, and weighed. All the teeth were stored in separate sterile glass containers at room temperature for 1–3 months, one per donor, labeling each container with the donor's details and the characteristics of the teeth (type, weight, dimensions). After being cleaned and dried, the teeth were immediately crushed using the "Smart Dentin Grinder" device (KometaBio Inc., Cresskill, NJ, USA). The idea was to process an autologous dentin graft as a replacement for autologous bone harvesting. By doing so, we can preserve the tooth in the form of a particulate without diminishing the bioactive properties of dentin, which contains a plethora of BMPs (bone morphogenetic proteins) and growth factors, thereby leveraging it as a biocompatible, bioactive, bio-inert graft. Using dentin for non-autologous purposes, or alternatively as an allograft, which requires extensive processing, is certainly not efficient and is not part of the current study's parameters. Autologous bone is still considered the gold standard for grafting. Autologous dentin not only has the same effects as autologous bone; we argue that, due to its inert and strong scaffold of dense HA, it is, in effect, better than autologous bone. The tooth particles were sized at 300–1200 microns, obtained by sieving the particles into two different compartments (Figure 1). The tooth particulate was then immersed in a basic alcohol cleanser in a sterile container for 10 min to dissolve all organic waste and bacteria. Afterward, the tooth particles were placed in ethylenediaminetetra-acetic acid (EDTA) for 2 min for partial demineralization and then washed with sterile saline for 3 min (Figure 2). Viruses and fungi are all eliminated using the dentin cleanser that is part of the protocol. The dentin cleanser is a strong alkali (a sodium hydroxide and ethanol combination) that is very effective in removing all bacteria, viruses, and fungi. As for prions, we are not sure whether the dentin cleanser is able to remove all prions, but, again, these are the patient's own prions, because this is an autologous graft. Note: the above procedure results in approximately 20 µm of demineralized dentin surface exposure, which induces the osteogenic activity of dentin (Figure 2 shows the manufacturer's protocol for grinding teeth). The ground tooth material was analyzed by scanning electron microscopy (SEM) to evaluate its characteristics (Figure 3). For the SEM study, the particulate samples were placed in liquid nitrogen for approximately 2 min. The particles were coated with a carbon film (BalTec CED 030; BalTec, Balzers, Liechtenstein) for SEM analysis at ×10 magnification.
The resolution was 0.8 nm at 15 kV, 1.4 nm at 1 kV, 0.6 nm at 30 kV (STEM mode), and 3.0 nm at 20 kV and 10 nA, with a WD of 8.5 nm, using Gemini II electron optics (Carl Zeiss Microscopy GmbH, Jena, Germany) fitted with detectors for secondary electrons and backscattered electrons, in order to allow exploration of the different biological processes involved in tissue healing and to identify morphological changes in the cellular components of different materials. Mineralogical analysis of the material was performed by X-ray diffraction (XRD). XRD patterns were obtained using a Bruker AXS D8-ADVANCE X-ray diffractometer (Karlsruhe, Germany) applying CuKα1 radiation (0.15418 nm) and a secondary curved graphite monochromator. Diffractograms of the samples were compared with data from the Joint Committee on Powder Diffraction Standards (JCPDS) database (Figure 4). The samples' porosity and pore size distribution were analyzed by mercury porosimetry using an automatic pore size analyzer (Poremaster-60 GT, Quantachrome Instruments, Boynton Beach, FL, USA) within a 6.215–411,475.500 kPa pressure range, corresponding to a pore diameter range of 236,641.05–3.57 nm. A total of 3 particulate samples (~0.47 g) were analyzed using this technique.
An additional sample was also used in every case in which the measured values for porosity differed by more than 5%. Helium gas pycnometry (Quantachrome Instruments, Boynton Beach, FL, USA) was used to determine the particles' real density (sample mass/volume of the solid), excluding empty spaces. Statistical Analysis Statistical analysis was performed using PASW Statistics v.18.0.0 software (SPSS). A descriptive analysis of the mean and standard deviation of each human tooth's length, width, and weight was conducted. One-way ANOVA was applied for the comparison of the means, assuming a level of significance of 95% (p < 0.05). Results Human upper central incisors measured 6.5 ± 0.2 mm in length and 1.2 ± 0.6 mm in width and weighed 1.3 ± 0.9 g, while first mandibular molars measured 6.9 ± 0.2 mm in length and 2.1 ± 0.7 mm in width and weighed 2.2 ± 1.1 g. These data show the significant differences between central incisors and first molars, which presented twice the width and weight of the incisors. Table 1 shows the mean tooth dimensions obtained for each type of tooth. Figure 4 shows the X-ray diffraction (XRD) patterns of central incisor tooth particles. XRD patterns are associated with the biomaterial's chemical composition. The crushed tooth particles presented high crystallinity (Figure 4). A human extracted tooth weighing 0.25 g produced at least 1.0 cc of particulate (Table 2). Analyzing the material by mercury porosimetry, two kinds of spaces were identified: those that correspond to empty spaces between particles (commonly designated as "interstices" or "interparticle" spaces) and those that correspond to the spaces within the particles themselves (known as "pores" or "intraparticle" spaces). The results obtained for the granules of human tooth particles showed that, with increasing pressure, mercury penetrated into the increasingly amorphous pores. Pore size distribution curves must be interpreted carefully, and it is important to specify the size range of the measured pores. The size of these spaces depends on particle size, number, and shape, as well as on the distribution of particle sizes.
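The porosity figures reported in this study can be related through the standard density–porosity relations used when mercury porosimetry is combined with helium pycnometry. The short Python sketch below illustrates these relations; the density values and the interparticle fraction are placeholders for illustration, not measured data from this study:

```python
# Minimal sketch of the standard porosity relations used when combining mercury
# porosimetry with helium pycnometry. The densities below are illustrative
# placeholders, not measured values from this study.

def total_porosity(bulk_density: float, true_density: float) -> float:
    """Total porosity (%) from envelope/bulk density and pycnometric (true) density."""
    return (1.0 - bulk_density / true_density) * 100.0

def intraparticle_porosity(total: float, interparticle: float) -> float:
    """Intraparticle porosity (%) as the remainder after interparticle voids."""
    return total - interparticle

if __name__ == "__main__":
    rho_bulk, rho_true = 1.2, 2.9        # g/cm^3, illustrative only
    p_total = total_porosity(rho_bulk, rho_true)
    p_intra = intraparticle_porosity(p_total, interparticle=52.0)  # illustrative split
    print(f"total porosity: {p_total:.1f} %, intraparticle: {p_intra:.1f} %")
```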
A large peak at 47.2 µm corresponds to the intrusion of mercury into the interparticle spaces. The cumulative curve denoted intrusion into pores of between 219 µm and 38.2 µm, followed by a plateau after 38.2 µm, where no further intrusion was detected. The initial rise of the curve mostly corresponded to the filling of the spaces between the particles, whereas the later rise was related to the pores within the individual particles. The intraparticle pore range was more evident, with one small peak at 0.0053 µm clearly visible. Discussion Bone graft materials derived from teeth, with their absence of antigenicity, improve bone formation and bone remodeling capabilities. A wide range of bone graft materials is available, and choosing the right one presents a challenging decision that will be dictated by the bone substitute material's physicochemical properties in relation to the type of defect and the main purpose of the procedure [20][21][22]. Bone grafts derived from teeth can be considered an attractive option due to their autogenous origin and favorable clinical results, which have shown that these materials offer good osteoinductive capacities. Nevertheless, they pose some risk of viral infection and are limited in quantity, while most synthetic materials offer osteoconductive competence and can be supplied in unlimited quantities [23][24][25]. The SEM micrographs provided information about the morphology of the crushed tooth particulate, which presented no critical defects and a homogeneous microstructure with aggregates of high density. In mercury porosimetry analysis, the distinction between inter- and intraparticle pores is not always clear. The information provided by pore size distribution curves must be interpreted with care, and the size range of the measured pores is of fundamental importance. In the present study, the tooth particles consisted of a highly porous network with an average pore size of 0.431 ± 0.213 µm. The total porosity of the samples analyzed averaged 54.868%, which is comparable to replacement biomaterials of different origins; the most useful of these have porosities of around 60%. As research has demonstrated, the degree of porosity and its disposition directly influence the biological behavior of biomaterial grafts. In addition, there is a direct relationship between these parameters and resorption rates [26,27]. EDX was used to determine the elemental composition of the dentin particulate, obtaining a Ca/P ratio of 1.67 ± 0.09, which is similar to that of synthetic HA. Traces of magnesium, a known impurity in calcium phosphate raw material, were also observed. The composition of the samples determined by quantitative analysis at different points of the sample surfaces showed the presence of Ca, P, and O. Although demineralized dentin exhibits matrix-derived growth and differentiation factors for effective osteogenesis, the newly formed bone that is generated and the residual demineralized dentin are too weak to allow adequate implant anchorage. However, the use of the Smart Dentin Grinder machine enables us to prepare a natural biomaterial from freshly extracted autologous teeth in the form of a bacteria-free particulate for immediate use as an autogenous graft biomaterial in a single surgical session.
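Since EDX reports elemental weight percentages while the Ca/P ratio quoted above is an atomic (molar) ratio, the conversion is a simple normalization by atomic mass. The following Python sketch illustrates this conversion with placeholder weight percentages (not point analyses from this study):

```python
# Hedged sketch: converting EDX weight percentages of Ca and P into an atomic
# (molar) Ca/P ratio, the quantity usually compared with stoichiometric
# hydroxyapatite (Ca/P = 1.67). The wt% values below are placeholders for
# illustration, not point analyses from this study.

ATOMIC_MASS = {"Ca": 40.078, "P": 30.974}  # g/mol

def ca_p_atomic_ratio(ca_wt_percent: float, p_wt_percent: float) -> float:
    ca_mol = ca_wt_percent / ATOMIC_MASS["Ca"]
    p_mol = p_wt_percent / ATOMIC_MASS["P"]
    return ca_mol / p_mol

if __name__ == "__main__":
    ratio = ca_p_atomic_ratio(ca_wt_percent=25.0, p_wt_percent=11.5)
    print(f"Ca/P (atomic) = {ratio:.2f}")   # ~1.68, close to stoichiometric HA
```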
Teeth and mandibular/maxillary bone have a high level of similarity: dentin and bone present similar chemical structures and composition in their organic, protein, and mineral phases. For this reason, our research team (in light of our own findings and those of other investigations) proposes that non-functional extracted teeth or periodontally involved teeth should no longer be discarded [28]. Extracted teeth can be ground to produce an autogenous dentin particulate within 15 min of extraction and can then be grafted into the post-extraction alveoli. In this way, the patient's own extracted tooth acts as a clinically useful bone graft material that offers all the advantages of autogenous bone due to the similarity of composition between bone and dentin. The particulate tooth material provides excellent biocompatibility without eliciting an immune response, a foreign material reaction, or infection after it is used. In addition, it has osteoinduction, osteoconduction, and progressive substitution capabilities, and it can be worked into various sizes and shapes [28]. Moreover, some patients refuse allografts or xenografts on the basis of their origins, a problem that this technique overcomes. Conclusions Autogenous tooth particulate biomaterial made from extracted human teeth may be considered a potential material for bone regeneration due to its chemical composition and the high quantity of material obtained from each tooth. After grinding the teeth, the resulting material increases in volume by up to three times, so that two extracted mandibular lateral incisors will provide a sufficient amount of material to fill four empty mandibular alveoli. The tooth particles present intra- and extra-particle porosity of up to 44.48% on pycnometer evaluation, which may increase the blood supply and support slow resorption of the grafted material; this supports healing and replacement resorption to achieve lamellar bone. After SEM–EDX evaluation, it appears that calcium and phosphates are still present within the collagen components even after the particle cleaning procedures that are conducted before use.
4,830
2019-01-25T00:00:00.000
[ "Medicine", "Materials Science" ]
Luminescence Intensity Ratio Thermometry with Er3+: Performance Overview The figures of merit of luminescence intensity ratio (LIR) thermometry for Er3+ in 40 different crystals and glasses have been calculated and compared. For the calculations, the relevant data were collected from the literature, while missing data were derived from available absorption and emission spectra. The calculated parameters include Judd–Ofelt parameters, refractive indices, Slater integrals, spin–orbit coupling parameters, reduced matrix elements (RMEs), energy differences between the emitting levels used for LIR, and absolute and relative sensitivities. We found a slight variation of the RMEs between hosts because of variations in the values of the Slater integrals and spin–orbit coupling parameters, and we calculated their average values over the 40 hosts. The calculations showed that crystals perform better than glasses in Er3+-based thermometry, and we identified hosts that have large values of both absolute and relative sensitivity. Introduction The measurement of temperature, one of the seven fundamental physical quantities, can be classified according to the nature of the contact between the measured object and the instrument into invasive (where there is direct contact, e.g., thermocouples, thermistors), semi-invasive (where the measured object is altered in a way that enables contactless measurements), and non-invasive (where the temperature is estimated remotely, e.g., optical pyrometers) [1]. The first type necessarily perturbs the temperature of the measured object, which limits its use for microscopic objects. In addition, such approaches are difficult to implement on moving objects or in harsh environments, for example, in high-intensity electromagnetic fields or in radioactive or chemically challenging surroundings. Thus, the current market of thermometers, accounting for more than 80% of all sensors [2], demands methods that allow for remote or microscopic measurements. Among the many prospective optical semi-invasive techniques, luminescence thermometry, which uses thermographic phosphors, has drawn the largest attention [3,4]. The thermographic phosphor probe can be incorporated within the measured object or on its surface, in macroscale to nanoscale sizes, or can be mounted on the surface of fiber-optic cables and brought into proximity of the measured objects. Luminescence thermometry has found a range of valuable applications, from engineering to biomedicine [5], and it is currently a widely researched topic with an exponentially increasing number of published research papers [6]. Presently, many types of materials are used for the construction of thermometry probes. These include rare-earth- and transition-metal-activated phosphors, semiconductor quantum dots, organic dyes, metal–organic complexes, carbon dots, and luminescent polymers. Among these, the rare-earth-activated phosphors are by far the most exploited type [5], usually employed in the so-called luminescence intensity ratio (LIR, sometimes called fluorescence intensity ratio (FIR) or labeled as ∆) temperature read-out scheme, which is based on the Boltzmann distribution of population between two thermally coupled emitting levels: LIR(T) = I_H(T)/I_L(T) = B·exp(−∆E/kT), (1) where I_H and I_L are the emission intensities from the higher and lower of the two thermally coupled levels, ∆E is the energy difference between them, k is the Boltzmann constant, and B is the temperature-invariant parameter that depends only on the host. The thermometer's performance is estimated by the absolute (S_a) and relative (S_r) sensitivities and the temperature resolution (∆T), given by [10]: S_a = |∂LIR/∂T|, S_r = (1/LIR)·|∂LIR/∂T|, ∆T = σ_a/S_a = σ_r/S_r, (2) where σ_a and σ_r are the absolute and relative uncertainties in the measurement of LIR, presented as standard deviations.
For the temperature dependence of LIR given by Equation (1), the sensitivities take the following form: S_a = B·exp(−∆E/kT)·∆E/(kT²) = LIR·∆E/(kT²), S_r = ∆E/(kT²). (3) The absolute sensitivity reaches its maximum at T = ∆E/2k, with the value of [8]: S_amax = 4Bk/(e²·∆E), (4) where e = 2.718 is the base of the natural logarithm. The ideal situation for LIR is the Boltzmann luminescence thermometer, since it is easily calibrated with well-known and simple theory. According to Equation (3), the relative sensitivity depends only on the value of the energy difference between the thermalized energy levels. The choice of levels with an energy gap larger than 2000 cm−1 may result in a loss of thermalization at low temperatures, and even around room temperature, while a small energy gap gives small relative sensitivities. One should consider that, for achieving Boltzmann thermal equilibrium, some other conditions must be fulfilled besides a suitable energy difference between the levels, as recently demonstrated by Geitenbeek et al. [11] and Suta et al. [12]. Furthermore, considering the adjacent energy levels of trivalent rare earths used for thermometry, the largest energy gap is found in Eu3+ (between the 5D1 and 5D0 levels) and is approximately 1750 cm−1. Thus, current research on the LIR of a single emission center is aimed at increasing the relative sensitivity without the loss of thermalization (and deviation from the Boltzmann distribution). One recently demonstrated solution is the inclusion of a third, non-adjacent level with higher energy, which is thermalized with the second level. If the first and second levels are thermalized, and the second and third levels are thermalized, then the ratio of emission intensities of the first and third levels will follow the Boltzmann distribution, even if their separation is greater than stated above; see Figure 1 for the case of Er3+. The conventional LIR of Er3+ is equal to the ratio of emissions from the 2H11/2 (~523 nm) and 4S3/2 (~542 nm) levels, which are separated by ~700 cm−1, thus giving a relative sensitivity of ~1.1% K−1. By observing the emissions to the ground level from 4F7/2 (~485 nm) together with those from 4S3/2, it is evident that their relative change with temperature is much larger than that with 2H11/2. This larger energy difference, according to Equation (3), ultimately results in more than a three-fold increase in relative sensitivity. Judd–Ofelt Theory and Its Relevance for Luminescence Thermometry The electronic configuration of trivalent erbium is that of xenon plus 11 electrons in the 4f shell, i.e., Er3+ has the [Xe]4f11 electronic configuration. With only 3 electrons missing from the completely filled 4f shell, Er3+ shares the same LS terms and LSJ levels as Nd3+, which has 3 electrons in the 4f shell. Transitions from one level to another are accompanied by the absorption or release of energy. The probability of such phenomena for a given pair of initial and final levels is determined by the wavefunctions and the appropriate moment operator. The exchange of energy in the intra-configurational 4f transitions with the highest intensity is of the induced electric dipole and magnetic dipole types [13]. What was puzzling only half a century ago was the origin of these "electric dipole" interactions, as they are clearly and strictly forbidden by the parity selection rule, also known as the Laporte rule. The solution to this problem came in 1962 in the papers simultaneously published by Judd [14] and Ofelt [15], in what is later known as the Judd–Ofelt theory (JO).
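To make these figures of merit concrete, the short Python sketch below evaluates Equations (1), (3), and (4) for an energy gap of ~700 cm−1, the conventional Er3+ pair discussed above. The value of B is an arbitrary placeholder; only ∆E is taken from the text:

```python
import math

K_B_CM = 0.695             # Boltzmann constant in cm^-1 per K

def lir(T, B, dE_cm):
    """Boltzmann LIR(T) = B * exp(-dE / (k T)), with dE in cm^-1."""
    return B * math.exp(-dE_cm / (K_B_CM * T))

def s_rel(T, dE_cm):
    """Relative sensitivity S_r = dE / (k T^2), in % per K."""
    return dE_cm / (K_B_CM * T**2) * 100.0

def s_abs(T, B, dE_cm):
    """Absolute sensitivity S_a = LIR * dE / (k T^2)."""
    return lir(T, B, dE_cm) * dE_cm / (K_B_CM * T**2)

def s_abs_max(B, dE_cm):
    """Maximum of S_a, reached at T = dE / (2 k)."""
    T_max = dE_cm / (2.0 * K_B_CM)
    return T_max, 4.0 * B * K_B_CM / (math.e**2 * dE_cm)

if __name__ == "__main__":
    dE = 700.0   # cm^-1, the 2H11/2 - 4S3/2 separation quoted in the text
    B = 10.0     # placeholder host-dependent constant
    print(f"S_r(300 K) = {s_rel(300.0, dE):.2f} %/K")       # ~1.1 %/K
    T_max, Sa_max = s_abs_max(B, dE)
    print(f"T(S_a,max) = {T_max:.0f} K, S_a,max = {Sa_max:.4f} K^-1")
```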
For the sake of brevity, it will not be explained here, and the reader is instead referred to the excellent references [16][17][18]; however, we will touch upon several basics that are most relevant for the present research. In RE3+ (trivalent rare-earth) ions in general, the electrostatic (H_e) and spin–orbit (s-o, H_so) interactions between the 4f electrons are dominant and are of approximately the same magnitude; thus, the Hamiltonian can be given approximately as H = H_e + H_so [17]. The electrostatic Hamiltonian can be reduced to the electron–electron repulsion form [17], which can be split into its radial (F^k) and angular (f_k) parts: H_e = Σ_(k=0,2,4,6) F^k·f_k. (5) The radial parameters are the Slater integrals, given by [19]: F^k = e²·∫∫ (r_<^k / r_>^(k+1))·R²(r_i)·R²(r_j)·r_i²·r_j² dr_i dr_j, (6) where r_> is the greater and r_< the smaller of r_i and r_j, and R are the radial parts of the wavefunction. The Slater integrals can be evaluated by the Hartree–Fock method; however, this does not provide accurate results, and it is best to obtain them semi-empirically by adjusting them to the experimentally observed energies of the 4f levels [20]. H_so mixes all states that have the same J quantum number, and it is proportional to the s-o coupling parameter, ζ, which in turn grows with the number of electrons within the 4f shell. Er3+ has a relatively high value of ζ in comparison with other trivalent lanthanide ions, providing a large mixing of states [21].
In this intermediate coupling approximation scheme, the wavefunctions are expressed as linear combinations of all other states in the configuration with the same J quantum number [17,18,22,23]: |4f^11 [SL]J⟩ = Σ_(S',L') c(S',L')·|4f^11 S'L'J⟩. (7) As the 4f electrons are shielded by the outer, higher-energy electrons, the crystal field (CF) introduces only a perturbation to the Hamiltonian [16]. Nevertheless, that perturbation weakens the already mentioned Laporte (parity) selection rule that forbids ED transitions within the configuration. The 4f–4f transitions of the electric dipole (ED) type become allowed and are known as induced ED transitions [24]. The radiative transition probability for such spontaneous emission is then equal to [25]: A(SLJ→S'L'J') = [64π⁴·ν³_(SLJ→S'L'J') / (3h(2J+1))]·(X_ED·D_ED + X_MD·D_MD), (8) or, for purely induced ED emission (MD is the abbreviation for magnetic dipole): A_ED(SLJ→S'L'J') = [64π⁴·ν³_(SLJ→S'L'J') / (3h(2J+1))]·X_ED·D_ED, (9) where h = 6.626 × 10−27 erg·s is Planck's constant, X is the local field correction, ν_(SLJ→S'L'J') is the emission barycenter energy, and D is the dipole strength given in esu²·cm² units. The emission barycenter is [26]: ν_(SLJ→S'L'J') = Σ_ν ν·i(ν) / Σ_ν i(ν), (10) where i is the intensity at a given energy. The local field correction for ED emission is given by [27]: X_ED = n·(n² + 2)²/9, (11) where n is the refractive index, which should be taken at the wavelength of the barycenter of the emission. It can be calculated from Sellmeier's equation for a given material, which is given in the form [28,29]: n²(λ) = 1 + Σ_i B_i·λ²/(λ² − C_i). (12) In the JO scheme, the ED strength is given by [26]: D_ED = e²·Σ_(λ=2,4,6) Ω_λ·U^λ_(SLJ→S'L'J'), (13) where e = 4.803 × 10−10 esu, and U^λ_(SLJ→S'L'J') is the abbreviation for the squared RMEs ⟨4f^11 SLJ‖U^(λ)‖4f^11 S'L'J'⟩², which in turn can be calculated from the Slater integrals and the s-o coupling parameter. Ω_λ are the JO intensity parameters, obtained semi-empirically or by ab initio calculations (from the crystal-field parameters). The integrated emission intensity for the transition SLJ→S'L'J' is given by [30,31]: I_(SLJ→S'L'J') = hν·A·N_SLJ, (14) or without the hν if the spectrum is recorded in counts instead of power units [32], where N_SLJ is the population of the emitting level. The LIR of two emissions from the thermally coupled levels is then given by: LIR = I_H/I_L = (ν_H·A_H·N_H)/(ν_L·A_L·N_L), (15) where I_H/L are the integrated intensities from the higher and lower level, respectively (without ν_H/ν_L if recorded in counts). According to the Boltzmann distribution, the populations of the optical centers are related by: N_H/N_L = (g_H/g_L)·exp(−∆E/kT), (16) where g = 2J + 1 are the degeneracies of the selected levels. Equation (15) can then be rewritten as Equation (1), where B is the temperature-invariant parameter given by: B = (g_H·ν_H·A_H)/(g_L·ν_L·A_L), (17) or, if the intensities are recorded in counts instead of power units, without the ν_H/ν_L factor. As we have demonstrated in our previous article [8], by inserting Equation (8) into Equation (17), the LIR, the absolute sensitivity (and everything related to it), and the temperature resolution can be predicted from the JO parameters, as the B parameter can be obtained from: B = ν_H⁴·(X_ED,H·D_ED,H + X_MD,H·D_MD,H) / [ν_L⁴·(X_ED,L·D_ED,L + X_MD,L·D_MD,L)], (18) or, in the case of pure ED transitions: B = ν_H⁴·X_H·Σ_λ Ω_λ·U^λ_H / (ν_L⁴·X_L·Σ_λ Ω_λ·U^λ_L). (19) For the case of spectra recorded in counts, ν_H/ν_L should be to the power of 3. The shielding of the 4f electrons by electrons from outer orbitals ensures that RE3+ spectra feature sharp peaks whose energies are almost host-independent. This is reflected in almost host-invariant reduced matrix elements. However, as the Slater parameters deviate significantly in Er3+, using such an approximation may introduce significant errors. For analyses of this type, it is more accurate to use reduced matrix elements calculated from the Slater integrals and s-o coupling parameters, which are themselves obtained semi-empirically from the positions of the energy levels.
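As an illustration of how Equation (19) turns Judd–Ofelt data into a thermometric prediction, the Python sketch below computes B for a pair of purely ED transitions that share one host and then the resulting LIR at 300 K. All numerical inputs (Ω_λ, squared RMEs, refractive index, barycenter energies) are placeholders chosen only to demonstrate the calculation, not values from Tables 2 and 3:

```python
import math

K_B_CM = 0.695   # Boltzmann constant, cm^-1 per K

def local_field_ed(n: float) -> float:
    """Local field correction for induced electric-dipole emission, Eq. (11)."""
    return n * (n**2 + 2)**2 / 9.0

def ed_strength(omega, u_sq):
    """Relative ED strength ~ sum over lambda of Omega_lambda * U^lambda
    (the e^2 prefactor cancels in the ratio that forms B)."""
    return sum(o * u for o, u in zip(omega, u_sq))

def b_parameter(nu_h, nu_l, n, omega, u_sq_h, u_sq_l, counts=True):
    """Temperature-invariant B for two purely ED transitions in one host, Eq. (19).
    Uses nu^3 for spectra recorded in counts and nu^4 for power units."""
    p = 3 if counts else 4
    x = local_field_ed(n)   # same host here, so X_H and X_L coincide
    return (nu_h**p * x * ed_strength(omega, u_sq_h)) / \
           (nu_l**p * x * ed_strength(omega, u_sq_l))

def lir(T, B, dE_cm):
    """Boltzmann LIR of Eq. (1)."""
    return B * math.exp(-dE_cm / (K_B_CM * T))

if __name__ == "__main__":
    # All numbers below are placeholders for illustration only.
    omega = (5.0e-20, 1.5e-20, 1.0e-20)   # Omega_2,4,6 in cm^2 (placeholders)
    u_h = (0.70, 0.40, 0.09)              # squared RMEs of the upper transition (illustrative)
    u_l = (0.00, 0.00, 0.22)              # squared RMEs of the lower transition (illustrative)
    B = b_parameter(nu_h=19120.0, nu_l=18450.0, n=1.9,
                    omega=omega, u_sq_h=u_h, u_sq_l=u_l, counts=True)
    print(f"B = {B:.2f},  LIR(300 K) = {lir(300.0, B, dE_cm=670.0):.3f}")
```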
Analogously, small variations in energy level positions may produce significant variations in energy level differences, and thus large deviations in absolute and relative sensitivities. Finally, small differences in refractive index become large when they propagate into the local field correction coefficient; thus, it is of utmost importance to use accurate values. In this study, the observed levels are energetically very close, so it is a good approximation to consider the refractive index as wavelength-independent; however, the exact method is always preferred. Calculations of Er3+ Radiative Properties in Different Hosts For the study, we selected 40 different Er3+-doped hosts (Table 1) from the literature that contained the most complete set of data needed for the analysis presented in this paper. As the JO parametrization is traditionally performed semi-empirically from the absorption spectrum, powders and non-transparent materials are not included in this analysis. In its 3rd column, Table 2 gives the energies of the 4S3/2, 2H11/2, and 4F7/2 levels used for the two LIRs that are theoretically investigated. As stated in the introduction, this is important for the estimation of the thermometric figures of merit, and it is linked to the Slater integrals and s-o parameters. The table also includes the Slater integrals and s-o coupling parameters for 16 of the 40 hosts, and the JO intensity parameters for all hosts, taken from the references listed in Table 1. Notes to Table 2: a Slater integrals calculated by Equation (15) in Ref. [17]; b Slater integrals calculated from Racah parameters by Equation (17) in Ref. [17]; c Slater integrals and spin–orbit coupling parameter not provided, the RME values used by the authors are those by Carnall in Ref. [66], or d by Weber in Ref. [35]; e energy levels are not given in the literature, the values in the table are approximate. Figure 2 presents the variation of the Slater integrals and s-o coupling parameters in those 16 hosts. Although there are no large differences in the parameters between crystals and glasses, certain trends may be observed by type of compound. Deviations in parameter values from host to host can be large, so the use of the Carnall or Weber tables [22,35] for Er3+ RMEs can introduce large errors in the subsequent calculations. Figure 3 presents the JO parameters as given in Table 2.
Glass hosts have smaller values of the JO parameters than crystals, on average. When crystals are analyzed, the largest values of the Ω2 parameter are found in tungstates and molybdates, while the smallest values are in garnets, phosphates, silicates, and oxysulfides. Ω6 is, as expected, higher in fluorides, phosphates, and silicates. In glasses, borate glasses have lower Ω2, while phosphate glasses have higher Ω2. Ω6 is on average higher in phosphate glasses. No clear correlation could be established for the Ω4 parameter in crystals or glasses. The squared RMEs for each transition investigated for LIR are given in Table 3. This list can be used beyond the scope of this paper for accurate calculations of JO parameters. The deviations from the average RME values are given in Figure 4, and they are large for the 2H11/2→4I15/2 transition. Thus, the use of Carnall's or Weber's values [22,35] might introduce significant errors in the estimation of the JO parameters, as those RMEs were calculated for LaF3 and YAlO3, respectively. The average RME values calculated from Table 3 are given in Table 4, together with the deviations from the values by Carnall and Weber. The refractive index values taken from the corresponding references are also listed in Table 3. If Sellmeier's equation is given, the refractive index is calculated at the wavelength of the emission. From the refractive index value, the local field correction is calculated according to Equation (11). The induced ED strengths (the last column of Table 3) are calculated for each transition and each host using the JO parameters from Table 2 and the local field corrections and RMEs from Table 3. Table 3. Squared RMEs for the hosts in Table 1, recalculated by the RELIC software [17] from the Slater integrals and s-o coupling parameters in Table 2, together with refractive index values, local field corrections for emission, and induced electric dipole strengths. Note: if Slater integrals were not provided in Table 2, the squared RMEs are taken from the tables by Carnall [66], unless it is indicated that the authors used the tables by Weber [35]. a RME values not calculated by the RELIC software but given in the corresponding reference; b refractive index values approximately wavelength-independent; c RME values from Carnall [66], d from Weber [35]. Figure 4. Deviations of the squared RMEs in Table 3 from their average values. Table 4. Average RME values estimated from the squared RMEs listed in Table 3, and deviations of the average values from the squared RMEs reported by Carnall (C) and Weber (W), in percent. Calculations of LIR Parameters For this theoretical analysis, two Er3+-based LIRs are considered: the traditional LIR, which uses the temperature-dependent ratio of emissions from the 4S3/2 and 2H11/2 levels, and the relatively novel concept that uses the temperature-dependent ratio of emissions from the 4S3/2 and 4F7/2 levels.
Table 5 provides the energy differences between 4 S 3/2 and 2 H 11/2 and between 4 S 3/2 and 4 F 7/2 that are used to calculate the room-temperature relative sensitivities for each host using Equation (2). The temperature-invariant B parameters are calculated from the data in Table 3 using Equation (19) (version for spectra recorded in counts). Then, using Equations (2)-(4) and the calculated B values, it was possible to derive the LIR's absolute sensitivity, the maximal absolute sensitivity value, and the temperature at which the maximal absolute sensitivity occurs. The relation between relative and absolute sensitivities of the traditional LIR (that uses Er 3+ emissions from the 2 H 11/2 and 4 S 3/2 levels) for different hosts is presented in Figure 5a-c. As a rule of thumb, the higher the sensitivity value, the better the thermometric performance. From Figure 5a, one can see that glasses tend to perform slightly worse than crystals, on average. Figure 5b compares the LIR performance of different crystals. Fluorides', garnets', phosphates', and silicates' performances are worse than for other hosts. The best results are obtained with simple oxides, vanadates, niobates, molybdates, and tungstates. Figure 5c illustrates the performances of glass hosts only. Even though the number of hosts in this set is rather small, it is possible to observe that Er 3+ -activated borate glasses perform worse than other glasses. Fluorophosphate glasses show high relative sensitivities, but somewhat small absolute sensitivities. The best combination of sensitivities is achieved in PbO-PbF 2 glass. Similar conclusions can be drawn for the novel LIR type (that uses Er 3+ emissions from the 4 F 7/2 and 4 S 3/2 levels), Figure 5d-f. Among different glasses, tellurite-fluoride glasses show the best performance. For crystals, the situation is almost equivalent to that of the traditional LIR. Figure 6a-c gives the relation between relative sensitivity and absolute sensitivity at the temperature at which the absolute sensitivity has its maximum for the traditional LIR, while Figure 6d-f shows the same relationship for the novel-type LIR. Analogous conclusions can be drawn as in the previous analysis (Figure 5). Among glass hosts, tellurite-fluoride, tungstate, and molybdate glasses show the best performances. Among crystals, the performance trend is almost the same, but NaY(MoO 4 ) 2 shows the worst performance at elevated temperatures. The best overall performer is LiLa(WO 4 ) 2 . As a limitation of the study, we must note that the values of the energy levels, Slater integrals and s-o parameters, refractive index values, and JO parameters are taken from the literature, so one cannot estimate the level of their accuracy. The extreme outliers are to be taken with caution. Table 5. Calculated luminescence thermometry parameters: energy gaps (∆E) from the Er 3+ 4 S 3/2 level to 2 H 11/2 and 4 F 7/2 , relative temperature sensitivities (S r ) for LIRs between the selected levels, B LIR parameters, absolute sensitivities at room temperature (S a ), maximum sensitivity values (S amax ), temperatures at which the maximum absolute sensitivity occurs (T(S amax )), and relative sensitivities at T(S amax ) (S r (T(S amax )).
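As a worked illustration of how the quantities collected in Table 5 are related, the following minimal sketch evaluates a Boltzmann-type LIR and its sensitivities for a generic host. The functional forms LIR(T) = B·exp(−ΔE/k B T), S r = ΔE/(k B T 2 ), S a = LIR·S r and T(S amax ) = ΔE/(2k B ) are assumed here to stand in for Equations (2)-(4) and (19), which are not reproduced in this text, and the numerical ΔE and B values are placeholders rather than entries of Table 5.

```python
import numpy as np

K_B = 0.6950  # Boltzmann constant in cm^-1 K^-1

def lir(T, delta_E, B):
    """Boltzmann-type luminescence intensity ratio of two thermally coupled levels."""
    return B * np.exp(-delta_E / (K_B * T))

def relative_sensitivity(T, delta_E):
    """S_r = (1/LIR) dLIR/dT = delta_E / (k_B T^2), in K^-1."""
    return delta_E / (K_B * T**2)

def absolute_sensitivity(T, delta_E, B):
    """S_a = dLIR/dT = LIR * S_r, in K^-1."""
    return lir(T, delta_E, B) * relative_sensitivity(T, delta_E)

def t_of_max_absolute_sensitivity(delta_E):
    """Setting dS_a/dT = 0 for the Boltzmann model gives T(S_a,max) = delta_E/(2 k_B)."""
    return delta_E / (2.0 * K_B)

# Placeholder inputs, of the order of the Er3+ 2H11/2-4S3/2 pair (assumed values).
delta_E, B, T_room = 700.0, 10.0, 300.0

print("S_r(300 K)   = %.3f %%/K" % (100.0 * relative_sensitivity(T_room, delta_E)))
print("S_a(300 K)   = %.3e K^-1" % absolute_sensitivity(T_room, delta_E, B))
T_max = t_of_max_absolute_sensitivity(delta_E)
print("T(S_a,max)   = %.0f K" % T_max)
print("S_a,max      = %.3e K^-1" % absolute_sensitivity(T_max, delta_E, B))
```

For the novel 4 F 7/2 / 4 S 3/2 pair the same expressions apply with the larger ΔE, which is what drives the roughly threefold increase in relative sensitivity noted for that read-out scheme.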
Conclusions The conventional thermometric characterizations are lengthy, complicated, and expensive. Given that there is an infinite number of possible hosts and doping concentrations of luminescent activators, guidelines for selecting the appropriate material are important, and they can be provided by the Judd-Ofelt thermometric model, which predicts the thermometric figures of merit from its three intensity parameters. Er 3+ deserves special attention in luminescence thermometry. It features a LIR between the 2 H 11/2 and 4 S 3/2 levels with an energy separation of ~700 cm −1 , and a recently introduced LIR between the 4 F 7/2 and 4 S 3/2 levels, whose higher energy separation allows for up to 3× larger relative sensitivity. The performances of 40 various crystals and glasses were predicted by the Judd-Ofelt thermometric model, and guidelines were set to aid the search for the best phosphor for LIR thermometry. It was demonstrated that the Slater integrals and s-o coupling parameters vary significantly from host to host, so their values should not be adopted from other hosts. Consequently, for Er 3+ , the squared reduced matrix elements also vary significantly between hosts (especially for the 2 H 11/2 → 4 I 15/2 transition). Therefore, RMEs from the frequently used Carnall or Weber tables should be replaced by the average RMEs for the three transitions used in these LIR read-out schemes, if the exact RMEs cannot be obtained. This will allow for improved precision in the prediction of thermometric sensor performances, as well as for improved Judd-Ofelt parametrization of Er 3+ -doped compounds.
6,316.8
2021-02-05T00:00:00.000
[ "Physics", "Materials Science" ]
Heavy-Meson Decay Constants: Isospin Breaking from QCD Sum Rules Compared with lattice QCD, isospin breaking of heavy-meson decay constants from QCD sum rules agrees for $D$ mesons but disagrees (by a factor of four) for $B$ mesons. Decay constants of heavy-light mesons from QCD sum-rule perspective QCD sum rules [1] constitute analytic relationships between the experimentally measurable properties of hadrons, on the one hand, and the fundamental parameters of the underlying quantum field theory of the strong interactions, quantum chromodynamics (QCD), on the other hand. They therefore represent a standard (occasionally perhaps indispensable) theoretical tool of hadron physics. Some time ago, we embarked on thorough investigations of both the accuracy and precision actually achievable by, and the systematic uncertainties inherent to, the formalism of QCD sum rules [2][3][4][5][6]. As a consequence of these studies, we proposed a somewhat modified algorithm that provides estimates of intrinsic errors [7][8][9][10][11]. With this improvement at our disposal, we revisited QCD sum-rule extractions of a variety of hadronic observables, particularly of the leptonic decay constants of the heavy-light mesons [12][13][14][15][16], which led us to the conclusion [17][18][19][20][21] that, in the bottom-meson system, the decay constants of the pseudoscalar mesons exceed those of their vector counterparts, as has been confirmed thereafter in lattice QCD [22]. For pseudoscalar (P q ) and vector (V q ) mesons H q ≡ P q , V q , of mass M H q , built up by a heavy quark Q = c, b and a light quark q = u, d of masses m Q and m q , respectively, the decay constants f H q of these mesons [with momentum p and, in the case of vector mesons, polarization vector ε µ (p)] are defined in terms of suitable interpolating heavy-light axial-vector (A) and vector (V) quark-current operators. Our actual aim [23,24] is the impact of the isospin-spoiling disparity of the u- and d-quark masses [25], m u (2 GeV) = 2.3 +0. Dependence of heavy-light meson decay constants on light-quark mass Ignoring electromagnetic and weak interactions, we focus on the strong interactions of the light quarks u and d, which then are basically characterized by just their masses m u and m d , respectively. In order to track the dependence of the heavy-light meson decay constants on the light-quark masses, we consider a fictitious light quark q of mass m q which we allow to vary from the average m ud of the u and d masses [25] up to the mass m s of the strange quark, m s = (95±5) MeV [25]. Such a generalization enables us to introduce generic decay-constant functions f H (m q ) of that continuously varying light-quark mass m q , derived from our QCD sum-rule approach, such that the decay constants at the m q values of interest may be defined by the identification f H u|d|ud ≡ f H (m u|d|ud ).
In terms of the u- and d-quark mass difference δm ≡ m d − m u > 0, the decay-constant differences f H d − f H u thus emerge, at lowest order in δm, from the derivative of f H (m q ) with respect to m q in some vicinity of the u and d masses, say, at their average m ud : f H d − f H u ≈ δm · df H (m q )/dm q evaluated at m q = m ud . For simplicity, we prefer to introduce the shifted and rescaled, dimensionless light-quark mass variable x q , chosen such that x q = 0 corresponds to m q = m ud . By benefitting from mutual cancellations of individual errors, a substantial diminution of uncertainties ought to be gained by discussing, instead of our decay-constant functions f H (x q ) themselves, the ratios of the redefined decay-constant functions f H (x q ) and their values f H (0) ≡ f H ud at the mass average m ud . The derivative of any such decay-constant ratio with respect to x q — that is, its slope R ′ H (0) at the origin x q = 0 — then carries the isospin-breaking information. 3 Heavy-light meson decay constants: improved QCD sum-rule formalism QCD sum rules arise from evaluation of vacuum expectation values of nonlocal products of convenient interpolating currents simultaneously at hadronic and QCD levels, by use of Wilson's operator product expansion for conversion of nonlocal operators into series of local ones, of Borel transformations from relevant momenta to Borel variables, generically called τ, and of the quark-hadron duality assumption, asserting equality of hadron-state and perturbative-QCD contributions beyond effective thresholds s H q eff . Taking into account the effective thresholds' Borel variable dependence s H q eff = s H q eff (τ) proves crucial for the accuracy of QCD sum-rule outcomes and our ability to estimate their intrinsic uncertainties [2][3][4][5][6][7][8][9][10][11]. Application of all these procedures to interpolating currents of type J = A, V results in QCD sum rules receiving contributions of purely perturbative origin, which may be represented by dispersion integrals of spectral densities ρ J (s, m Q , m q , m sea , α s ), and nonperturbative contributions Π J (τ, m Q , m q , ⟨q̄q⟩, . . . ) parametrized by so-called vacuum condensates (vacuum expectation values of colour-singlet operators constructed from the degrees of freedom of QCD) that characterize the properties of the QCD vacuum. Picking up the terminology of lattice-QCD practitioners, in the operator product expansion we take the liberty to discriminate notationally those quarks which compose the interpolating currents from the "sea" quarks that contribute only to radiative corrections and denote all the masses of the latter by m sea . As perturbative series expansions in powers of the strong coupling α s , our spectral densities are known [26][27][28][29] fully up to two-loop order O(α s ) but at three-loop order O(α 2 s ) only for the case m q = m sea = 0. Following Ref. [28], we ensure perturbative convergence of our QCD sum-rule results by adopting for the definition of the quark masses the modified minimal-subtraction (MS) renormalization scheme. Physical quantities, such as decay constants, cannot depend on unphysical (renormalization) scales introduced on calculational grounds. Nevertheless, procedures required by the application of the QCD sum-rule formalism induce for several reasons artificial scale dependences of the resulting predictions: • Unavoidable truncations of perturbative expansions produce scale dependences of spectral densities. • Effective thresholds are determined at given scales such as to reproduce experimental meson masses.
Although implementation of our advanced QCD sum-rule algorithms to heavy-meson decay constants removes a large portion of such scale dependences [12][13][14][15][16][17][18][19][20][21], remaining scale dependences contribute to the systematic uncertainties of any extracted QCD sum-rule outcomes. Consequently, in the following we present our findings for the scales µ at which the quoted estimates have been obtained. These scales are µ = 1.7 GeV for the charmed mesons D and D * and µ = 3.75 GeV for the bottom mesons B and B * . The numerical values of the set of parameters employed as input to our operator product expansion are given in Table 1. In addition, we need an idea about how the mass M H q of the fictitious (Q q) meson H q and the vacuum condensate ⟨q̄q⟩ of that fictitious light quark q behave with varying quark mass m q : • Lattice QCD [32][33][34][35] hints at a linear rise of the masses of charmed and bottom mesons with m q . So, for the masses M H q (x q ) of the heavy-light (Q q) mesons, we assume a linear x q dependence from the measured [25] nonstrange-meson mass M H ud up to the corresponding strange-meson mass M H s [25]. • For the light-quark condensate ⟨q̄q⟩, we assume a linear x q dependence starting from ⟨ℓ̄ℓ⟩ ≡ (⟨ūu⟩ + ⟨d̄d⟩)/2 at x q = 0. At each value of m q , the spread of predictions, obtained by application of our QCD sum-rule algorithms along the lines detailed in our earlier decay-constant extractions [12][13][14][15][16], for the case of linear (n = 1), quadratic (n = 2), and cubic (n = 3) behaviour of the effective thresholds s H q eff (τ) provides, by its central value and half-width, estimates of the decay-constant functions f H (m q ) and ratios R H (x q ), Eq. (3), as well as their systematic uncertainties [7][8][9][10][11], shown for the mesons H q = D, D * , B, B * in Figs. 1 and 2, respectively. Decay-constant differences from the slopes of decay-constant functions In order to eventually extract from the QCD sum-rule outcomes in Fig. 2 the slopes R ′ H (0) determining, as expressed by Eq. (4), the decay-constant differences f H d − f H u , we describe the behaviour of any ratio R H (x q ) as a function of the mass-ratio variable x q in its respective x q interval, in three different ways, two of naïve polynomial shape and one inspired by heavy-meson chiral perturbation theory (HMχPT) [36], referred to as ansätze (5a), (5b) and (5c). In the HMχPT case, the term R χ (m q , m ud , m s ) is independent of the mesons H q under consideration; its (somewhat lengthy) explicit expression can be found in the Appendix of Ref. [24]. The parameters a H , b H i and c H i (i = 1, 2) may be utilized to optimize our fits. Table 2 summarizes our findings for the slopes R ′ H (0), derived by fitting the x q dependences of the ratios R H (x q ) depicted in Fig. 2 for each of the three ansätze (5a), (5b), and (5c), as well as the averages of the three individual results, for H = D, D * , B, B * . Table 2. Numerical values of the decay-constant-governing slope R ′ H (0) of the light-quark-mass dependent ratio R H (x q ) at the average u-d mass point x q = 0, resulting from the three fits given by Eqs. (5a), (5b) and (5c) in their respective x q intervals, and of the corresponding averages for the charmed and bottom mesons H = D, D * , B, B * . The fitted slopes are positive, indicating that the decay constants satisfy the inequality f H d > f H u for all the mesons H = D, D * , B, B * .
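For illustration, the slope extraction summarized in Table 2 can be sketched as follows; the sampled values of R H (x q ) are placeholders rather than the actual sum-rule output, the polynomial ansätze (5a) and (5b) are assumed to be of the forms 1 + a H x q and 1 + b 1 x q + b 2 x q 2 , and the HMχPT ansatz (5c) is omitted since its explicit expression is not reproduced here.

```python
import numpy as np

# Placeholder samples of the ratio R_H(x_q) = f_H(x_q)/f_H(0) over the x_q interval.
x_q = np.linspace(0.0, 1.0, 11)
R_H = 1.0 + 0.12 * x_q + 0.02 * x_q**2        # toy data with mild curvature

# Linear ansatz (5a), assumed form R_H = 1 + a_H x_q (intercept fixed at R_H(0) = 1).
a_H = np.linalg.lstsq(x_q[:, None], R_H - 1.0, rcond=None)[0][0]

# Quadratic ansatz (5b), assumed form R_H = 1 + b_1 x_q + b_2 x_q^2.
b_1, b_2 = np.linalg.lstsq(np.column_stack([x_q, x_q**2]), R_H - 1.0, rcond=None)[0]

print("R'_H(0), linear fit   :", round(a_H, 4))
print("R'_H(0), quadratic fit:", round(b_1, 4))

# Translating R'_H(0) into f_{H_d} - f_{H_u} requires the normalization chosen
# for x_q; if, for instance, x_q = (m_q - m_ud)/(m_s - m_ud), Eq. (4) would give
#   f_{H_d} - f_{H_u} ~ f_{H_ud} * R'_H(0) * (m_d - m_u)/(m_s - m_ud).
```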
Multiplication of the above ratios by the decay constants collected in Table 3 then converts the slopes into the decay-constant differences themselves. Table 4 confronts our predictions [24] with available lattice-QCD outcomes for the decay-constant differences f H d − f H u of pseudoscalar heavy-light mesons: in the case of the D mesons, we find really nice agreement. In the case of the B mesons, however, our QCD sum-rule predictions are smaller by a factor of four than the respective lattice-QCD claims, betraying a tension of some three standard deviations. Confidence in our approach may be drawn from the fact that our results are of very similar size to those reported by lattice QCD for the K mesons [41]: both systems lead to differences of the order of 1 MeV. Table 4. Decay-constant difference of pseudoscalar heavy mesons: QCD sum rules [24] vs. lattice QCD [37][38][39][40].
2,622.2
2016-09-29T00:00:00.000
[ "Physics" ]
Examining the Effects of Food and Product Production Values and Production Added Value on Inflation Over the Years: Empirical Evidence for Turkey In this study, the aim is to examine the effects of food and product production values on inflation. The variables used are the Consumer Price Index, Wholesale Price Index, Food Production Index, Product Production Index and Production Value Added, taken from the World Bank Country Reports for 1991-2019. According to the results obtained from the study, there is a statistically significant relationship between the TUFE and TOFE variables and the GUE, UUE and UKD variables (p < 0.01). According to the results of the controlled correlation analysis, the effects of the food and product production indexes on the consumer and wholesale inflation level are not statistically significant (p > 0.05). The effect of the UKD and GUE parameters on inflation is statistically significant (p < 0.05). The explanatory power of both models is very high. According to the regression coefficients, UKD has a negative effect and GUE has a positive effect. The results show that production has a positive effect on inflation, while production value added has a decreasing effect on consumer and wholesale prices. These results show that production in our country is actually high-cost and its added value is low. Introduction Although the concept of inflation has a long history among economists, controversy about its first origin or source still continues. While the Spanish scholastic Juan de Mariana, writing toward the end of the 16th century, drew the main lines of the compelling nature of inflation, the monetary writings of David Hume in the 18th century treated unit changes in the monetary stock as valuation changes affecting the unit value of the monetary assets of other individuals (Arteta et al., 2018; Bagus et al., 2014; Blanchard et al., 2010; Garcia and Werner, 2010). The global economy has witnessed a considerable decrease in inflation since the 1970s. Inflation decreased throughout the world as median annual national consumer price inflation fell to approximately 1.7 percent in 2015, the lowest level in about half a century, from the peak of about 17 percent experienced in 1974. Among the developed economies, median inflation similarly declined to a low of 0.3 percent from a high of 15 percent over the same period (Ha et al., 2019). While everybody needs to meet their own needs in daily life, the prices of these needs change every day under the influence of the pricing mechanisms of the market. Sometimes prices decrease, which can be seen as an increase in the purchasing power of money, and sometimes the quantity of goods that can be purchased with the same amount of money decreases. Prices should be compared with each other in order to see the growth of the economy in every period, day, week, month or year, and this growth remains dependent on inflation, risks, production, investments, interest rates and similar other factors (Durmuş, 2019; Mumtaz and Surico, 2012; Ciccarelli and Mojon, 2010; Ihrig et al., 2010; Calvo and Reinhart, 2002; Aghion et al., 1999). The recent decrease in inflation has been broad-based across country groups, and it is clearly seen in multiple inflation measures including headline and core consumer prices, energy and food prices, producer prices and the gross domestic product (GDP) deflator (Ha et al., 2019).
Although the influence of many macroeconomic indicators on inflation has been examined from the past to the present, we did not encounter a sufficient number of studies focused mainly on the relationship between food and product prices, production values and inflation. For this reason, this study aims to examine the effects of food and product production values on inflation. Conceptual Framework According to inflation theory, individuals with lower incomes typically bear greater inflation burdens than their counterparts with higher incomes. Because rich individuals tend to have not only knowledge but also resources, they are generally in a better position to avoid taxes. They may invest a greater part of their incomes in assets with higher returns. Thus, they may protect their saving capacity against a general decrease in purchasing power. On the other hand, individuals with lower incomes may follow similar strategies to the extent that index funds or inflation-indexed bonds exist. However, because they can devote only a smaller part of their incomes to investment, a larger part of their incomes remains exposed to inflation through the prices of the commodities they purchase. While there are some financial products that can be used to minimize the effects of inflation on individuals' investments, it is quite difficult to implement a similar strategy to protect someone from the increasing prices of consumables (Bagus et al., 2014). Inflation could be defined as a kind of commodity and service pricing mechanism. However, it should be underlined that inflation is not the only determinant of prices and of purchasing power; these also depend on interest rates, production capacity and capability, population and politics. Most of the time, on account of shocks that occur in the economy, interest rates, investments and production are affected, the quantity of money and services available in the market decreases, and this causes prices to rise. This increase brings raw material purchasing problems, unmet needs, less production and less employment. Under such unbalanced conditions, investors, producers, banks and the public become cautious in their spending habits, and this in turn influences prices. Thus, not only may changing prices result from inflation, but they can also create inflation themselves (Durmuş, 2019). High inflation is usually correlated with low growth and financial crises. Increasing price levels are also associated with weaker investor confidence, undermined incentives for saving and the erosion of the balance sheets of the financial and public sectors. Moreover, poor people may suffer disproportionately from higher inflation because poorer households depend more on wage income, have less access to interest-yielding accounts and are less likely to hold significant financial or real assets other than cash. For these reasons, low and stable inflation is correlated with better growth and developmental results, financial stability and decreased poverty (Ha et al., 2019). Method The variables and the World Bank codes used in the study are given in Table 1.
The World Bank provided the TOFE data for the years 1991-2013, the GUE and UUE data for the years 1991-2018, and the TUFE and UKD data for the years 1991-2019. Consequently, the data range of the study was specified as the years 1991-2019. In the study, the measurement data are described by the average, standard deviation, minimum and maximum values. The conformity of the measurement data to the normal distribution was analyzed with the Kolmogorov-Smirnov test. Pearson product-moment correlation analysis was performed for the correlation between the data conforming to the normal distribution. In addition, a year-controlled partial correlation analysis was also performed. Before the econometric analysis, whether or not the variables contain a unit root was analyzed by means of the Augmented Dickey-Fuller unit root test (ADF). Because all of the variables are proportional and the deflator transformation had already been performed, they do not contain a unit root. Linear regression analysis was used to examine the relationships between the research data. All of the analyses were carried out at a 95% confidence interval and a 0.05 significance level, in Eviews 7.0 for Windows and SPSS 17.0 for Windows. Findings Descriptive information on the variables used in the study is given in Table 2. The results of the Pearson correlation and the partially controlled correlation analyses carried out for the relationship between TUFE and TOFE and the research parameters are given in Table 3. According to the results of the correlation analysis, there is a statistically significant correlation between both TUFE and TOFE and the GUE, UUE and UKD variables (p<0.01). The direction of the correlation is positive for GUE and UUE and negative for UKD. However, according to the controlled correlation analysis results, the effects of the food and product production indexes on the consumer and wholesale inflation level are not statistically significant (p>0.05). The Augmented Dickey-Fuller (ADF) unit root test results for the research variables are given in Table 4. No unit root is found in any of the variables used in the study and, for this reason, no further unit root tests were carried out (p<0.05). In addition, the autocorrelation checks performed in the study indicated that the study data are suitable for regression analysis. Accordingly, the following model was established for TUFE: TUFE = β0 + β1(GUE) + β2(UUE) + β3(UKD). According to the regression model results, the effect of the UKD and GUE parameters on inflation is statistically significant (p<0.05). The explanatory power of the model is considerably high (R2: 0.956593). According to the regression coefficients, UKD has a negative effect and GUE has a positive effect, with GUE having the larger coefficient. A corresponding model was established for TOFE. Similar to the TUFE model results, the effect of the UKD and GUE parameters on inflation is statistically significant (p<0.05). The explanatory power of the model is considerably high (R2: 0.964582). According to the regression coefficients, UKD has a negative effect and GUE has a positive effect, with GUE having the larger coefficient. Discussion In this study, the aim was to examine the effects of food and product production values and the production indexes on inflation.
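For illustration, the regression specification above, preceded by the ADF unit-root checks, can be reproduced along the following lines; the data file and column names are hypothetical placeholders, not the original World Bank extracts, and the code is a sketch rather than the Eviews/SPSS procedure actually used.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

# Hypothetical annual series for 1991-2019, one column per variable.
df = pd.read_csv("turkey_wdi_1991_2019.csv")   # placeholder file name

# Augmented Dickey-Fuller unit-root check for each variable.
for col in ["TUFE", "TOFE", "GUE", "UUE", "UKD"]:
    stat, pval = adfuller(df[col].dropna())[:2]
    print(f"{col}: ADF statistic = {stat:.3f}, p = {pval:.3f}")

# TUFE = b0 + b1*GUE + b2*UUE + b3*UKD, and the same specification for TOFE.
X = sm.add_constant(df[["GUE", "UUE", "UKD"]])
model_tufe = sm.OLS(df["TUFE"], X, missing="drop").fit()
print(model_tufe.summary())                    # coefficients, p-values, R-squared

model_tofe = sm.OLS(df["TOFE"], X, missing="drop").fit()
print(round(model_tofe.rsquared, 3))
```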
In this framework, the study analyzes the production and inflation indicators of Turkey between the years 1991 and 2019 and then reveals the relationship between production and inflation. In general, studies in the literature have concluded not only that inflation depends on demand increasing relative to supply, but also that excessive production dampens excess demand and excessive supply causes inflation to decrease. Starting from this point, production is actually expected to have an inflation-reducing effect. With a simple economic approach, as the amount of a specific product or service in a market becomes abundant, its price decreases proportionally and, consequently, inflation or price increases are more limited as well (Alex, 2021; Brito and Bystedt, 2010; Kremer et al., 2009; Ahmad and Mortaza, 2005; Anoruo, 2003; Bruno and Easterly, 1998; Abizadeh et al., 1996; Barro, 1996; Dornbusch and Fischer, 1993). According to the values obtained in the study, inflation in wholesale and consumer prices has followed a variable but generally increasing trend over the years. Similarly, product and food production has been increasing. However, when compared with the production indexes, the increase in the inflation indexes is higher and more variable. Although an increase in production is expected to reduce prices and consequently decrease inflation, the cost, unit price and benefit structure of the products produced changes this situation. In other words, goods and services must be produced within a deliberate plan in order to have an effect on inflation; merely producing them is not sufficient. At this point, production planning comes into play. In this framework, for the years covered by the study, it can be stated that production planning was not carried out at a sufficient level. In the correlation analysis, where strong correlations are observed, product and food production increase inflation. While this has many reasons, in general it can be said that the service input in the production process of the products or foods produced is high. On the other hand, when production added value is considered, the expected relationship is seen: as production added value increases, inflation decreases. However, these effects lose their significance in the year-controlled correlation analysis of the variables. This situation further highlights the rather capital-intensive character of our country's market. When the regression analysis and the variables are evaluated together with regard to their effects, the effects of production added value and food production on inflation are statistically significant. In both models (for consumer and wholesale prices), production added value has a decreasing effect on inflation and the food production values have an increasing effect on inflation. Consequently, it can be stated that production costs are above a sustainable level. Conclusion According to the results obtained in this study, it is seen that production has a positive, namely increasing, effect and production added value has a decreasing effect on inflation in consumer and wholesale prices.
These results indicate that production in our country is actually high-cost and that its added value is low. Indeed, from agricultural production to food production, expenditures on fundamental inputs such as transportation, energy and taxes have recently been higher than expenditures on the basic raw materials of the product. For this reason, it can be stated that there are important gaps in production planning. In order to eliminate such deficiencies, it is necessary to produce products whose unit prices carry higher added value, to plan the production processes of these products, and to carry out market and trade research accordingly. In this regard, it would be useful for further research to conduct broader studies in which the variables and contributing factors are extended. With regard to practical application, the study findings indicate that it would be beneficial to prepare production master plans, to identify the shortcomings of existing plans where such plans exist, and to support national rather than local plans accordingly.
3,360
2021-06-01T00:00:00.000
[ "Business", "Economics" ]
The LA = U Decomposition Method for Solving Systems of Linear Equations A method for solving systems of linear equations is presented based on direct decomposition of the coefficient matrix using the form LAX = LB. Elements of the reducing lower triangular matrix L can be determined using either row wise or column wise operations and are demonstrated to be sums of permutation products of the Gauss pivot row multipliers. These sums of permutation products can be constructed using a tree structure that can be easily memorized or alternatively computed using matrix products. The method requires only storage of the L matrix, which is half the size compared to storage of the elements in the LU decomposition. Equivalence of the proposed method with both the Gauss elimination and LU decomposition is also shown in this paper. Introduction Systems of linear equations, or equations linearized for iterative solutions, arise in many science and engineering problems [1]. Practical applications of systems of linear equations are many; examples include applications in digital signal processing, linear programming problems, numerical analysis of non-linear problems and least square curve fitting [2]. Systems of equations are also historically reported to have provided a motivation for the development of the digital computer as a less cumbersome way of solving the equations [3]. Gaussian elimination is a systematic way of reducing systems of linear equations into a triangularised matrix through addition of multiples of the independent equations. Method Development The method proposed in this paper is based on reducing the coefficient matrix A in the system of linear equations AX = B using a single lower triangular reducing matrix L. The original coefficient matrix A is transformed into an upper triangular matrix U that allows solution through back substitution, as is usual with both the LU decomposition and Gauss elimination methods. For the original system of n by n linear equations given in Equation (1), the matrix representation is AX = B, where A is the coefficient matrix having the elements a ij of the original equations and B is the right hand side column vector containing the elements b 1 , b 2 , …, b n . The proposed method establishes a solution that transforms both the coefficient matrix A and the right hand side column vector B into LAX = LB; in other words, the coefficient matrix and the right hand side column vector B are transformed through the equations U = LA and B′ = LB. The procedure, therefore, essentially centers on determining the lower triangular matrix L that reduces the coefficient matrix A to an upper triangular matrix U. Let this matrix L be given through its elements l ij . The operation LA = U will reduce the coefficient matrix A into an upper triangular matrix U. However, the proposed method does not need storage of the U matrix, as only the L matrix needs to be determined and used to reduce both the A matrix and the right hand side column vector B. This is easily seen through the matrix operation involving the reducing matrix L only, namely LAX = LB = B′. In this method, the l ij elements will be written in terms of the Gauss pivot row multipliers m ij of the Gauss elimination and, as will be shown shortly, the l ij elements are sums of permutation products of the m ij multipliers, assembled into a tree-like structure for easy memorization.
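A minimal sketch of this idea, assuming a Python implementation that builds L by accumulating the elementary Gauss elimination matrices — one straightforward way of obtaining LA = U; it is illustrative and unpivoted, not the authors' exact row wise or column wise recipe:

```python
import numpy as np

def la_u_solve(A, B):
    """Solve A X = B by forming one lower triangular reducing matrix L with
    L A = U (upper triangular) and back-substituting on L A X = L B = B'.
    Sketch without pivoting: it assumes the pivots u_ii are nonzero."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    n = A.shape[0]

    L = np.eye(n)            # the single reducing matrix, built up column by column
    U = A.copy()             # working copy driven to upper triangular form

    for k in range(n - 1):
        E = np.eye(n)
        for i in range(k + 1, n):
            E[i, k] = -U[i, k] / U[k, k]   # negative Gauss pivot-row multiplier, -m_ik
        U = E @ U            # eliminate column k below the diagonal
        L = E @ L            # accumulate E_k ... E_1 into the reducing matrix L

    Bp = L @ B               # B' = L B
    X = np.zeros(n)
    for i in range(n - 1, -1, -1):          # back substitution on U X = B'
        X[i] = (Bp[i] - U[i, i + 1:] @ X[i + 1:]) / U[i, i]
    return L, X
```

Multiplying out the accumulated elementary matrices expands exactly into the sums of permutation products of the m multipliers discussed above and detailed in the next section, which is why only L needs to be stored.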
The elements l ij will not remain constant during the reduction process, as is normally the case with Gauss elimination or LU decomposition, but change as the reduction of A to the U matrix progresses column wise or row wise, as new Gauss pivot row multipliers are added to the element l ij . Unlike the Gauss method, which is restricted to column wise operation, in this method it is also possible to proceed row wise. In fact, the row wise procedure will be followed to derive the l ij elements. Starting with row 2 of the lower triangular L matrix, the only unknown is l 21 and, in terms of the Gauss elimination pivot row multipliers m ij , the pivot operation to reduce u 21 to zero gives l 21 = −m 21 . For a 4 × 4 L matrix, summarizing the l ij elements expressed in terms of the Gauss pivot row multipliers m ij shown above gives the L matrix shown in Equation (15). It is easy to show that the m terms in the L matrix in Equation (15) form permutation products, whereby the number of terms corresponds to the coefficients of the binomial series expansion. For any element l ij of the L matrix, the number of m-product terms is N m = 2 K(i,j) , where the power of the binomial expansion is K(i, j) = i − j − 1. Tree-Like Structure of the m-Permutation Products It is easy to enumerate the m-permutation products of l ij as these products can be arranged in a tree-like structure. Taking the element l 51 as an example, the tree structure shown in Figure 1 is formed. Formula for Calculation of the Sum of Permutation Products For the element l ij of the lower triangular matrix L, with the number of m products N m corresponding to the binomial coefficients of power K(i,j), the binomial coefficients N m (r) for r = 0, 1, 2, …, K are given by N m (r) = K!/[r!(K − r)!]. Similarly, for l 51 with K = 3, the coefficients are 1, 3, 3 and 1. In general, for any element l ij , the m sums of products M ij can be calculated using the corresponding formula, and the element l ij is then computed by summing the M ij sums of products. Matrix Solution to the Computation of the l ij Elements of the Lower Triangular Matrix L The computation of elements of the lower triangular matrix L can be easily carried out using matrix multiplication; for any element l ij the matrix multiplication takes the form of Equation (26). Equation (26) shows that l ij can be determined from the already determined previous values l kj , where j < k < i, and the Gauss pivot row multipliers m rs . The negatives of the corresponding Gauss pivot row multipliers m rs that are already determined at this stage are given by the matrix form L LU . The matrix L LU is simply the L matrix of the LU decomposition method. This can be verified as follows: to avoid confusion, let the traditional LU decomposition method have its L matrix relabelled L LU to make it different from the L matrix of the proposed direct decomposition procedure. From the relationship A = L LU U as well as LA = U, it follows that L L LU U = U, and hence L L LU = I, in which I is the identity matrix. Therefore the L matrix is simply the inverse of the L matrix L LU of the LU decomposition method. The L matrix elements are as shown in Equation (15). This computation will be illustrated for the 4 by 4 L matrix shown in Equation (5) and later for the example of the 4 by 4 system of linear equations solved in the section that follows, starting with the element l 21 and the matrix form of Equation (30). Number of Operations Required The number of operations required, N p , is related to the determination of the elements of the L matrix only.
It is apparent that, similar to the LU decomposition, the order of operations is of power 2, i.e., for an n by n matrix the number of operations required grows proportionally to n 2 . This is clearly seen from the number of sub-diagonal l ij elements, n(n − 1)/2; for example, for a 4 by 4 L matrix this number is 6, and the l ij elements of the L matrix shown in Equation (5) show the six elements to be determined. Compared to the LU decomposition, the proposed method requires only half of the operations required for the LU decomposition. The reason is that, unlike the LU method, the LAX = LB = B′ method does not require storage of the U elements, i.e., only the L matrix is needed to solve the system of linear equations. Procedure for Determining Elements of the L Matrix The computation of the l ij elements of the lower triangular matrix L can be carried out either row wise or column wise using more or less the same procedure, as outlined in the following step by step procedure. Step 1: Initially set all the Gaussian pivot row multipliers m rs of the element l ij to zero values. During computation of a particular value of m rs , the most recent values of the other pivot row multipliers will be used. In other words, the values of m rs will be updated once their values change because of successive row wise or column wise computation. Step 2: Starting with the first column and second row and proceeding either row wise or column wise, calculate the m rs value for which r = i and s = j. For example, for the element l 21 the m value to be calculated is that of m 21 , and at l 53 it would be m 53 that is calculated. Step 4: After the computation of all the m values of the Gauss pivot multipliers is completed, form the L matrix elements l ij using the summation rules of the permutation products involving the m products as given by Equation (24) and Equation (25), or using the matrix product given in Equation (30). Step 5: Once the L matrix is formed, compute the solution vector X of the system of equations AX = B using the formula shown in Equation (3), namely LAX = LB = B′. In other words, the product LA results in the upper triangular matrix U, which allows the computation of the solution vector elements of X using back substitution. As in the Gauss method, it is possible to check whether a zero appears on the diagonal of the U = LA matrix, i.e., to check whether u ii = 0 for a given row i during the computation of the l ij elements. If the condition u ii = 0 becomes true, a row interchange can be made with rows from below in the equation. Figure 2 shows a flow chart of the steps outlined above for solving a system of linear equations using the LA = U method. The procedure stated above will be illustrated with the example given below, which is a 4 × 4 system of linear equations. Two methods are given: Method 1 using column wise operations and Method 2 using row wise operations. Application Examples Example 1: The 4 × 4 system of linear equations shown below will be used to illustrate the method. Method 1 (Column wise operation): Column 1 operations: Initially all the m values are set to zero, as outlined in the steps for solving the system of equations. Starting with column 1 and at row 2, the multiplier m 21 is computed from the corresponding equation. The reduced matrix LA is then formed and, similarly, the operation LB = B′ is carried out. Finally, the reduced equation LAX = LB = B′ is obtained, and the elements of the solution vector X can now be determined by back substitution.
Starting from the fourth row, x 4 is determined, and the remaining elements of the solution vector X follow by back substitution. In Method 2 (row wise operation), the solution vector X is likewise computed from LAX = LB = B′, with the elements of X again determined by back substitution starting from the fourth row with x 4 . Discussion The proposed method, developed and demonstrated with examples so far, shows that the solution to linear systems of equations can be obtained through direct decomposition of the A matrix using the operation LAX = LB = B′. The method provides a clear procedure for direct computation of the L matrix, the only matrix that is needed to transform the original equation AX = B into a reduced form, i.e., LAX = LB, unlike for example the LU method, which requires that both the L and U matrices be stored to find the solution through AX = LUX = B. The elements l ij of the lower triangular matrix L are shown to be sums of permutation products of the Gauss pivot row multipliers m rs . The relationship between l ij and m rs is clearly established through a formula, and it is easy to visually construct this relationship using a tree diagram that assists in easy memorisation of the relationship. In addition (and as an alternative procedure), the relationship so established between the elements l ij of the lower triangular matrix L and the Gauss pivot row multipliers m rs enables construction of the L matrix directly from the Gauss elimination steps. A characteristic of the Gauss elimination method is that the reduction to an upper triangular matrix can only proceed column wise; it is not possible to proceed row wise in the Gauss method. On the other hand, the LU decomposition requires alternate transitions between the L and U elements for determining the LU compact matrix. By contrast, the proposed LA = U reduction method can proceed either column wise or row wise, essentially giving the same result. This flexibility is demonstrated in the example shown above, where it is easily seen that the computation of the Gauss pivot row multipliers remains more or less the same for both the row wise and column wise operations. The storage requirement during the reduction process is related to the generation of the L matrix. Unlike the LU method, storage is needed only for the L matrix, since the solution proceeds directly from the reduction LAX = LB = B′, in which there is no need to store the U matrix. The number of elements that need to change is of the order O(n 2 ), as shown in Equation (38), and is typically half the number of operations required for the LU decomposition, because in the LU decomposition both the L and U elements need to be determined and stored. Conclusions A direct decomposition of the coefficient matrix forming part of a system of linear equations using a single lower triangular reducing matrix L has been demonstrated in this paper. The method allows the solution of the system of linear equations to proceed through storage of a single lower triangular matrix L only, through which both the coefficient matrix A and the right hand side column vector B are transformed. Elements of the reducing matrix L are shown to be sums of permutation products of the pivot row multipliers of the Gauss elimination technique. These sums of permutation products, for any element of the reducing matrix L, can be easily constructed using a tree diagram that is relatively easy to memorize, besides using the formula developed for the purpose. These L matrix elements can also alternatively be computed using matrix products.
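To make these equivalence statements concrete, the following sketch applies the la_u_solve helper from the earlier illustrative snippet to a hypothetical 4 × 4 system (placeholder values, not the paper's Example 1) and checks numerically that LA is upper triangular and that L is the inverse of the LU-decomposition factor L LU :

```python
import numpy as np
from scipy.linalg import lu

# Hypothetical 4 x 4 system (placeholder values only).
A = np.array([[4.0, -2.0, 1.0, 3.0],
              [3.0,  6.0, -4.0, 2.0],
              [2.0,  1.0,  8.0, -1.0],
              [1.0, -3.0,  2.0, 7.0]])
B = np.array([5.0, 3.0, 9.0, 4.0])

L, X = la_u_solve(A, B)                         # from the earlier sketch

print(np.allclose(A @ X, B))                    # True: X solves A X = B
print(np.allclose(np.tril(L @ A, -1), 0.0))     # True: L A is upper triangular

# Equivalence with the LU decomposition A = L_LU U: the reducing matrix
# satisfies L_LU L = L L_LU = I, i.e. L = inv(L_LU).
P, L_LU, U_LU = lu(A)                           # scipy returns A = P @ L_LU @ U_LU
if np.allclose(P, np.eye(4)):                   # no pivoting triggered for this A
    print(np.allclose(L @ L_LU, np.eye(4)))     # True
    print(np.allclose(L @ A, U_LU))             # True: both routes give the same U
```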
In the process of determining the elements of the L matrix, either a row wise or a column wise procedure can be followed, essentially giving the same result, which provides added flexibility to the proposed method. Equivalence of this newly proposed method with both the Gauss elimination and LU decomposition techniques has been established. In the case of the equivalence with the Gauss elimination technique, elements of the L matrix are specified as functions of the Gauss pivot row multipliers. This also implies that it is possible to construct the reducing L matrix of the proposed direct decomposition method using the Gauss pivot row multipliers. As has been demonstrated, the L matrix can be directly constructed from the Gauss pivot row multipliers using the matrix product L LU L = I. For the LU decomposition, the L matrix of the proposed method is simply the inverse of the L matrix of the LU decomposition. In terms of storage of computed values, it can be seen that the proposed method of direct decomposition using the transformation LAX = LB = B′ needs only storage of the L matrix elements, which is half the size compared with storage of all the L and U elements in the LU decomposition. Apart from providing added flexibility and simplicity, the proposed method would be of good educational value, providing an alternative procedure for solving systems of linear equations.
3,836
2019-08-28T00:00:00.000
[ "Mathematics" ]
Prognostic impact of diabetes mellitus on hepatocellular carcinoma: Special emphasis from the BCLC perspective Background Diabetes mellitus (DM) is associated with higher incidence and poorer prognosis of hepatocellular carcinoma (HCC). The influence of DM on patient survival in different HCC stages is not known. Methods A prospective dataset of 3,182 HCC patients was collected between 2002 and 2014. Patients were divided into three groups according to BCLC stages (BCLC stage 0 and stage A, BCLC stage B, BCLC stage C and stage D). We compared the cumulative survival rate of diabetic and non-diabetic patients in different BCLC groups. The correlation between DM and overall survival was also analyzed by a multivariate Cox regression model within each group. Results DM is present in 25.2% of all patients. Diabetic patients had lower cumulative survival in the BCLC stage 0 plus BCLC stage A group (log rank p<0.001) and the BCLC stage B group (log rank p = 0.012), but not in the BCLC stage C plus BCLC stage D group (log rank p = 0.132). Statistically significant differences in overall survival are found between diabetic and non-diabetic patients in the BCLC stage 0 plus stage A group (adjusted hazard ratio [HR] = 1.45, 95% confidence interval [CI] 1.08–1.93, p = 0.013) and BCLC stage B (adjusted HR = 1.77, 95% CI 1.24–2.51, p = 0.002). In contrast, the survival difference is not seen in the BCLC stage C plus stage D group (adjusted HR = 1.09, 95% CI 0.90–1.30, p = 0.387). Conclusions DM is prevalent in HCC, and is associated with a lower survival rate in HCC patients with BCLC stage 0 plus stage A and B, but not in those with BCLC stage C plus stage D. Introduction Hepatocellular carcinoma (HCC) is the fifth most common neoplasm in men and the seventh in women. It contributed to 745,000 deaths in 2012 and was the second leading cause of cancer-related mortality worldwide. [1] Well-established risk factors for HCC include chronic hepatitis B virus (HBV) infection, chronic hepatitis C virus (HCV) infection, aflatoxin B1, and alcohol consumption. [2,3] The pathogenic and prognostic roles of metabolic factors, such as diabetes mellitus (DM), metabolic syndrome, or obesity, have also been studied. [4][5][6] Epidemiologic studies have disclosed an association between the presence of DM and higher HCC incidence, suggesting that DM is an independent risk factor for the development of HCC. [7][8][9][10] In addition to its role in pathogenesis, DM may also be an important predictor of prognosis. [11,12] Previous studies analyzing the relation between DM and HCC outcomes focused mainly on resectable or potentially curable diseases. However, the results were inconsistent. [13][14][15][16][17][18] DM seems to worsen HCC prognosis in some subgroups to a greater extent. Toyoda et al found that the presence of DM led to poorer prognosis only in patients with treatable diseases, and in those with tumor size 3 cm in greatest dimension. [19] Wang et al reported lower overall and disease-free survival in DM patients with cirrhosis and HCC, but not in their noncirrhotic counterparts. [20] The Barcelona Clínic Liver Cancer (BCLC) classification is one of the most widely adopted classification systems for HCC. The BCLC system incorporates multiple factors, including tumor burden, liver functional reserve and performance status.
With its ability to predict prognosis and guide the treatment algorithm, the BCLC staging system is endorsed by the European Association for the Study of the Liver (EASL) and American Association for the Study of Liver Diseases (AASLD) HCC management guidelines. The BCLC system stratifies patients into several distinct prognostic groups. The association and prognostic impact of DM on HCC patients with different cancer stages remain unclear. In this study, we aim to explore the prognostic role of DM in different BCLC stages. Methods Patients We prospectively enrolled and retrospectively analyzed newly diagnosed HCC patients admitted to Taipei Veterans General Hospital from 2002 to 2014. Baseline characteristics, including underlying etiologies for HCC, biochemistry profile, tumor extent, vascular invasion, severity of cirrhosis, performance status, and diagnosis of DM, were recorded. Patient follow-up was arranged every 3-6 months until death or dropout from the program. Survival was defined from the date of diagnosis to death or last follow-up. Those receiving liver transplantation were censored at the date of transplantation. The study complies with the standards of the Declaration of Helsinki and current ethical guidelines and was approved by the Institutional Review Board of the Taipei Veterans General Hospital (IRB protocol number 2014-03-007AC). Waiver of consent was obtained, and patient records/information was anonymized and deidentified prior to analysis. Diagnosis and definitions The diagnosis of HCC was established in accordance with the EASL and AASLD HCC management guidelines. [21,22] BCLC staging information was obtained at the time of diagnosis. We defined vascular invasion as radiological evidence of tumor invasion into the intrahepatic vasculature, portal trunk or abdominal great vessels. DM was defined as a fasting plasma glucose of 126 mg/dl or greater on at least two separate occasions, a plasma glucose of 200 mg/dl or greater 2 hours after a 75 g oral glucose tolerance test, a glycated hemoglobin (HbA 1C ) level > 6.5% on at least one occasion, or any prescription of hypoglycemic agents. [23] Treatments Once the diagnosis was confirmed, patient data were reviewed at the multi-disciplinary HCC board of Taipei Veterans General Hospital for treatment planning. We provided comprehensive information regarding the risks and benefits of each treatment to patients. The final treatment modality was decided by shared decision making between physicians and patients. Written informed consent was obtained prior to all management. Invasive therapies, including radiofrequency ablation, surgical resection, and transarterial chemoembolization, were performed through standard procedures as previously reported. [24][25][26] Statistics The cumulative survival rates of diabetic and non-diabetic patients among different BCLC stages were examined by the Kaplan-Meier method with log-rank tests. A Cox proportional hazards regression model was used for hazard ratio estimation. Prognostic factors probably associated with overall survival, including age, sex, severity and etiology of chronic liver diseases, biochemical laboratory parameters and tumoral status, were included in the univariate survival analysis. Factors significant in the univariate analysis (P < 0.1) were introduced into the multivariate Cox model to determine independent predictors of prognosis. The proportional hazards assumption was assessed graphically before analysis with the Cox model.
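A minimal sketch of this survival workflow (Kaplan-Meier curves compared by a log-rank test, followed by a multivariate Cox model and a proportional-hazards check), assuming the Python lifelines package and hypothetical column names; it is illustrative only and not the analysis code used in the study.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical analysis file: one row per patient, covariates coded numerically.
df = pd.read_csv("hcc_cohort.csv")              # placeholder file and column names
dm = df["diabetes"] == 1
t, e = df["months_followup"], df["death"]

# Kaplan-Meier curves for diabetic vs non-diabetic patients, compared by log-rank test.
kmf = KaplanMeierFitter()
ax = kmf.fit(t[dm], event_observed=e[dm], label="DM").plot_survival_function()
kmf.fit(t[~dm], event_observed=e[~dm], label="non-DM").plot_survival_function(ax=ax)
print(logrank_test(t[dm], t[~dm], event_observed_A=e[dm], event_observed_B=e[~dm]).p_value)

# Multivariate Cox model within one BCLC subgroup (e.g. stage 0 plus A).
covariates = ["diabetes", "age", "sex", "albumin", "bilirubin", "afp", "vascular_invasion"]
sub = df.loc[df["bclc_group"] == "0A", covariates + ["months_followup", "death"]]
cph = CoxPHFitter()
cph.fit(sub, duration_col="months_followup", event_col="death")
cph.print_summary()                              # adjusted hazard ratios with 95% CI
cph.check_assumptions(sub)                       # proportional-hazards diagnostic
```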
We used a two-tailed χ 2 test to compare categorical data, and the Mann-Whitney U test to evaluate continuous variables. Interaction between DM and other predictors was assessed using likelihood ratio tests comparing the final model and the final model with the interaction terms. Statistical analyses were conducted with IBM SPSS version 21 (IBM, NY) and SAS version 9.4 (SAS Institute, NC). Statistical significance was set as a P value less than 0.05 in a two-tailed test. Patient characteristics The median age of the study patients is 65 years. The median follow-up duration is 17 months for the entire cohort, being 19 (Table 1; p = 0.081). However, a significant trend toward increasing prevalence of DM in more advanced BCLC stages was noted (p for trend = 0.048, Fig 1). Survival analysis As a whole, diabetic HCC patients had significantly lower overall survival compared with non-diabetic patients (p = 0.017, Fig 2A). In subgroup analysis, diabetic patients also had decreased overall survival in very early/early and intermediate HCC (p<0.001 and 0.012, respectively, Fig 2D). Risk factor analysis All patients. Univariate survival analysis shows that DM, age, gender, HBV, alcoholism, albumin, bilirubin, INR, Na, AFP, eGFR, variceal bleeding, total tumor volume, vascular invasion, presence of ascites, and platelet count were significant predictors for survival in HCC patients (Table 2, all p< 0.05). Factors significant in the univariate analysis were introduced into the multivariate Cox model. The MELD score and CTP class were not included in the final model because they contain potentially confounding predictors, such as albumin (Table 5). Discussion In this longitudinally followed large patient cohort, notably, a quarter of HCC patients were diabetic. We demonstrate a trend toward increasing prevalence of DM in HCC patients with higher BCLC stages. In addition, DM may differentially affect overall survival in HCC patients in the BCLC stage 0, stage A, and stage B subgroups, but not in those in the BCLC stage C and D groups. Relations between DM and HCC prognosis have been studied extensively but show discrepant results. It has been noticed in several epidemiologic studies that the predictive value of DM for HCC prognosis was limited to specific patient subgroups. Wang et al. reported a meta-analysis including 21 studies with a total of 9,767 HCC patients, and showed that DM is an independent predictor of decreased overall survival (HR 1.55; 95% CI 1.27-1.91; p = 0.001) and disease-free survival (HR 2.15; 95% CI 1.75-2.63; p = 0.001). [12] However, subgroup analyses in that study further disclosed that the effect was only seen in patients receiving hepatic resection (HR 1.91; 95% CI 1.21-3.00; p = 0.005), but not in subjects treated with other modalities. Whether differences exist in tumors with diverse baseline characteristics is less well established. Our group previously conducted an analysis comparing how DM influenced cumulative survival in patients with small (defined as 5 cm) and large (> 5 cm) hepatic tumors. We found that non-diabetic patients had significantly better survival after surgical resection compared to diabetic patients when the tumors were small. In contrast, there was no significant difference in survival between DM and non-DM individuals when tumor sizes were large at presentation. [27] HCC is a highly heterogeneous disease entity, and tumor size does not necessarily correlate well with disease severity, tumor stage, or prognosis.
The BCLC classification system involves not only tumor size, but also performance status and liver functional reserve. Thus, it may serve as a better disease indicator. In the current study, we divided patients into a total of three subgroups according to their BCLC staging. Our aim was to evaluate more precisely how the presence of DM changes the prognosis of HCC under different disease conditions. We found poorer overall survival in diabetic patients with early BCLC stages but not in those with advanced disease. This phenomenon could be explained by the hepatocarcinogenic effect of insulin. In type 2 diabetic patients, increased insulin resistance and the resulting hyperinsulinemia might upregulate the production of insulin-like growth factor-1 (IGF-1) and insulin receptor substrate-1 (IRS-1). Elevated IGF-1 stimulates cell proliferation and inhibits apoptosis, thereby inducing carcinogenesis. [28] Consequently, patients with DM may suffer from accelerated tumor growth and poorer survival. However, in chronic and advanced liver diseases, especially in the fibrotic liver, insulin resistance increases. [29][30][31][32] We consider that, in the BCLC stage C and stage D populations, the underlying liver disease alone causes insulin resistance and hyperinsulinemia that resemble the diabetic state. Therefore, whether or not patients have true clinical diabetes may not significantly influence the prognosis. This assumption is further supported by our demographic data showing that the CTP class was more advanced in higher BCLC stages, representing more severe cirrhosis. Meanwhile, the same hypothesis could also explain the gradually increasing trend of DM prevalence from BCLC stage 0 to stage D observed in our cohort, which stems from progressively worsening insulin resistance. DM is known to cause multi-system complications. Diabetic patients with early stage HCC usually have better survival, which allows time for diabetic complications and diabetes-associated death to develop. Thus, the increased mortality in the diabetic group might come from diabetic complications instead of cancer burden. In later stages, by contrast, treatment for HCC is greatly limited. According to the suggestions of the AASLD guideline, patients with BCLC stage C disease can only be treated with sorafenib, and stage D patients are unlikely to tolerate any treatment except palliative care. There is a high possibility that patients die before significant diabetic complications take place. The study has a few limitations. First, this is a single-center study, and the results may not be generalizable to other geographical areas. With more than half of the patients having evidence of HBV infection, our data require validation by other study groups. Secondly, as this is a tertiary referral center, referral bias cannot be avoided completely. Thirdly, our primary outcome was all-cause mortality. We could not evaluate the association between DM and its potential morbidities, such as surgical complications and cardiovascular death. Lastly, our study lacks information about DM treatment. Recently, the protective effect of several classes of anti-diabetic agents against cancer development and progression, especially metformin and thiazolidinediones, has been emphasized. [33,34] In contrast, insulin analogues seem to have mitogenic effects. [35] Therefore, the choice of glucose-lowering drugs may have its own role in affecting the outcome of HCC patients. In conclusion, DM is highly prevalent among patients with HCC across different cancer stages.
DM worsens overall survival in patients with early BCLC stages (stage 0 to B). However, for patients with stage C and stage D disease, long-term survival is not significantly influenced by the presence of DM.
Design and Implementation of a Low-Complexity Multi-h CPM Receiver with Linear Phase Approximation Synchronization Algorithm Multi-h continuous phase modulation (CPM), despite its extremely high spectral efficiency, suffers from high demodulation complexity due to a large number of matched filters and a complex trellis. In this paper, an efficient all-digital demodulator for multi-h CPM is proposed based on a low-complexity decision-directed synchronization algorithm. Based on the maximum-likelihood estimation of the carrier phase and timing errors, we propose a reduced-complexity timing error detector with a linear phase approximation (LPA) to the phase of the multi-h CPM. Compared with traditional synchronization methods, it avoids derivative matched filtering and reduces the number of matched filters by about two-thirds. The estimation accuracy and bit error rate (BER) performance of the LPA-based synchronization algorithm show no loss, as demonstrated by numerical simulation, and its stability is verified by the derived S-curve. Receivers with LPA-based synchronization for three promising multi-h CPM schemes are then implemented on a Xilinx Kintex-7 FPGA platform. The experimental results show that the onboard tested BER of the proposed design has negligible loss compared with the numerical simulation, while the implementation overhead on the FPGA is significantly reduced, by about 27% of slices, 64% of DSPs, and 70% of block RAMs, compared with the conventional method. Introduction Continuous phase modulation (CPM) is a family of nonlinear modulation schemes with phase continuity and a constant envelope. It has been widely used in mobile communication [1], aeronautical telemetry standards [2], industrial communications standards [3], and future satellite broadcasting [4]. It offers high spectral and power efficiency, as well as strong resistance to channel nonlinearity. On the other hand, CPM suffers from the high implementation complexity of synchronization and symbol detection. Multi-h CPM, which uses more than one modulation index h, has higher power and spectral efficiency than single-h CPM and provides three times the spectral efficiency of PCM/FM [5,6]. Multi-h CPM has been selected as the Tier II waveform of the US Advanced Range Telemetry (ARTM) program [7]. The multiple modulation indices change periodically and can be coded to improve spectral efficiency further. Thus, it promises broader application scenarios than single-h CPM. However, more modulation indices bring a more complex trellis and more matched filters, leading to extremely high detection complexity. Meanwhile, synchronization also has high implementation complexity, since a decision-directed algorithm is generally used in the coherent receiver.
Maximum-likelihood sequence detection (MLSD) is used to obtain optimal detection performance for CPM signals. Decoding algorithms such as the Viterbi algorithm and the BCJR algorithm are commonly used to implement the MLSD according to the trellis in the receiver [8,9]. It provides significant detection gain compared with the symbol-by-symbol method. However, tremendous complexity is introduced by the trellis with its large number of phase states and branches. Due to this high complexity, several reduced-complexity methods have been proposed, such as tilted phase transformation (TPT) [10,11], frequency pulse truncation (FPT) [11,12], pulse amplitude modulation (PAM) [13,14], and state-space partitioning (SSP) [11,15]. All of the above methods have been discussed in [16] and have been widely used in various CPM receivers [11,[17][18][19][20]. The above algorithms may cause some performance loss in demodulation, except for the TPT method. In practical applications, combining several of them brings a better tradeoff between implementation complexity and performance. These methods also result in a reduction in synchronization complexity. In [21], decision-directed synchronization for joint phase and timing recovery is introduced with ML estimation of the phase and timing errors. Synchronization with FPT is also proposed in [22]. The Walsh signal space and PAM decompositions help reduce the synchronization complexity in [23][24][25][26], respectively. In the conventional joint carrier and timing recovery methods mentioned above, the ML estimation of the timing error is always calculated by derivative matched filters, because the timing offset enters the log-likelihood function nonlinearly. It is commonly approximated by a finite difference of the outputs from two matched filter banks whose delays relative to the on-time MF bank are early and late, respectively, represented as the early-late (EL) synchronizer [27,28]. The EL-based synchronization algorithm is widely utilized for linear [29][30][31] and nonlinear [21,22,26,32] modulation schemes, since it offers high estimation accuracy with low computational cost. The general methods to reduce the complexity of the synchronization depend mainly on the simplification methods of MLSD.
In this paper, we derive a low-complexity synchronization scheme for multi-h CPM. We combine the reduced-complexity MLSD methods of LPA, TPT, and SSP to reduce the number of phase states in the detection trellis, as in conventional methods. Additionally, we pursue reducing the synchronization complexity of the error signal estimator based on the LPA to the phase of the multi-h CPM. With LPA, the timing delay is linearized with respect to the phase. Thus, the derivative MF filters are removed, and the timing error is estimated from the on-time MF filter banks alone, without using early- and late-MF filter banks. The MF filter banks can then be reduced to 1/3 of those of the original EL-based method. To prove the stability of the proposed synchronizer, we plot the S-curve through theoretical and numerical analysis. Three commonly used multi-h CPM schemes, quaternary CPMs with h = {4/16, 5/16}, {5/16, 6/16}, and {9/16, 10/16}, are considered to demonstrate the performance of the LPA-based synchronization method. We implement the overall receiver for the above three multi-h CPM schemes with LPA-based synchronization and MLSD on a Xilinx Kintex-7 field-programmable gate array (FPGA) platform. The tested bit error rate (BER) results show that the proposed synchronizer's performance has no loss compared with the conventional EL methods, and the slice and dedicated resource (DSPs and block RAMs) utilization are reduced by about 27% and 67%, respectively. The main contributions of this work are briefly summarized as follows: • Using LPA, we rederive the ML estimation of the phase and timing error signals for multi-h CPM, which reduces the complexity of the synchronizer. We modify the LPA-based timing error detector to reduce the complexity further using the sign of the detected symbol, which makes it well suited to FPGA implementation; • We provide an analytical expression for the S-curve of the proposed error signal detector and analyze its stability through the S-curve; • We provide an architecture of the receiver with the LPA-based synchronizer and implement it for the three promising multi-h CPM schemes on an FPGA platform. The verification results demonstrate a better tradeoff between complexity and performance than the conventional EL-based method. The structure of this paper is as follows. The signal model and the derivation of the traditional synchronization algorithm and the LPA synchronization algorithm are presented in Section 2. To further reduce the complexity of the LPA error signal detector, we simplify the detector using the polarity of the error signal. The S-curve is derived in Section 2 to determine the stability of the proposed synchronization algorithm. In Section 3, we provide the implementation details based on the receiver's diagram with the LPA-based synchronization and compare the complexity of the LPA-based algorithm and the conventional EL-based algorithm. Finally, Section 4 illustrates the onboard BER test with the low-complexity receiver for the three multi-h CPM schemes to demonstrate the performance of the proposed synchronization algorithm.
System Model with Tilted Phase Transformation The general baseband CPM signal [8] is modeled as a constant-envelope complex exponential of the information-bearing phase φ(t; α), where T is the symbol interval and E_s is the energy per transmitted symbol. The phase of the CPM signal is defined as φ(t; α) = 2π Σ_i h_i α_i q(t − iT), where h_i = k_i/p is the modulation index in the ith symbol interval, with k_i and p integers. h_i is selected periodically, with a period of one symbol duration, from the set {h_0, h_1, ..., h_{N_h−1}}, where N_h is the number of modulation indices. α, with α_i ∈ {±1, ±3, ..., ±(M − 1)}, is the sequence of M-ary information symbols. The phase pulse response q(t) is determined by the frequency pulse g(t) of length L as q(t) = ∫_{−∞}^{t} g(τ) dτ. Rectangular and raised-cosine shapes are commonly used as frequency pulses, denoted LREC and LRC, respectively. Using TPT, the symbol sequence is mapped to u ∈ {0, 1, ..., M − 1} with elements u_i = (α_i + M − 1)/2. Then, the number of phase states N_state can be reduced from 2pM^{L−1} to pM^{L−1} without performance loss [16]. The number of MFs is still N_h M^L. For ARTM CPM, a 4-ary h = {4/16, 5/16} CPM, the trellis using TPT has 256 states and 64 MFs. We rewrite the phase φ(t; α) in terms of ϑ_n, the cumulative phase of the modulator. Using TPT, ϑ_n is expressed through θ_n, the updated cumulative phase, and φ_n, the tilted phase. Note that φ_n is independent of α or u; thus, the number of phase states is reduced by half. η(t; l_n, α_n) is the correlative phase of the modulator. In Equations (3) and (8), l_n is the correlative state vector and α_n is the current symbol. The multi-h CPM signal is transmitted over an additive white Gaussian noise (AWGN) channel, and the carrier phase ϕ and timing offset τ are unknown to the receiver. Hence, the received signal can be modeled as r(t) = s(t − τ; α) e^{jϕ} + w(t), where w(t) is complex baseband AWGN with zero mean and single-sided power spectral density N_0. Conventional Synchronization Algorithm Based on EL-Matched Filtering The matched filter (MF) output is denoted Z(τ̂, α̂_n). Assuming that ϕ and τ are known, the joint log-likelihood function Λ is given in [15]. The estimates of the synchronization parameters can be obtained by setting the partial derivatives of the likelihood function with respect to ϕ and τ equal to zero. Thus, we obtain the estimates of ϕ and τ, where Y(τ̂, α̂_n) is the derivative of Z(τ̂, α̂_n) with respect to τ. The value of α̂_n is taken from the best survivor in the Viterbi algorithm or a related decoding algorithm.
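As a concrete illustration of the phase construction φ(t; α) = 2π Σ_i h_i α_i q(t − iT) described above, the following is a minimal numpy sketch that generates a baseband multi-h CPM signal with an LREC frequency pulse; the function name, the choice of 8 samples per symbol, and the default pulse length are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def multih_cpm_baseband(symbols, h_set, sps=8, L=2):
    """Baseband multi-h CPM with an LREC frequency pulse of length L symbols.
    symbols: M-ary symbols in {+-1, +-3, ...}; h_set: modulation indices cycled
    per symbol (e.g. [4/16, 5/16]); sps: samples per symbol interval T."""
    g = np.ones(L * sps) / (2.0 * L * sps)                 # LREC pulse; its integral is 1/2
    h = np.array([h_set[i % len(h_set)] for i in range(len(symbols))])
    impulses = np.zeros(len(symbols) * sps)
    impulses[::sps] = h * np.asarray(symbols, dtype=float)
    freq = np.convolve(impulses, g)[: len(impulses)]       # sum_i h_i alpha_i g(t - iT)
    phase = 2.0 * np.pi * np.cumsum(freq)                   # phi(t; alpha)
    return np.exp(1j * phase)                               # constant-envelope signal

# e.g. a 4-ary ARTM-like scheme with h = {4/16, 5/16}:
s = multih_cpm_baseband(np.random.choice([-3, -1, 1, 3], 1000), [4/16, 5/16])
```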
The carrier phase and timing error signals can be expressed as Equations (14) and (15). The iterative update expressions for the timing error and carrier error are Equations (16) and (17), where γ is the step size, γ = 4BT/k_p, BT is the normalized equivalent noise bandwidth, and k_p is derived from the S-curve. D is an introduced delay, and D = 1 produces satisfactory results in many cases (see [21]). The decision-directed (DD) joint phase and timing synchronization, following the error signal detectors of Equations (14) and (15), is shown in Figure 1a. The received signal is synchronized by the estimated φ̂_n and τ̂_n. The synchronized signal is fed to the on-time-, early-, and late-MF banks. The outputs of the on-time-MF bank are processed by the Viterbi algorithm (VA). The detected α̂_n assists in estimating the phase error and timing error signals through a phase error detector (PED) and a timing error detector (TED), respectively. Note that the derivative matched filtering in Equation (15) is implemented by the difference between the early- and late-MF banks. Therefore, three MF banks are required: the on-time-, early-, and late-MF banks. Finally, the first-order loop filters update φ̂_n and τ̂_n from Equations (16) and (17), respectively. As mentioned in Section 1, such an EL-based timing synchronizer has been widely used in digital receivers for CPM. Synchronization Algorithm Based on LPA In order to reduce the complexity of the synchronization beyond the phase-state trellis, we use the LPA to rederive the synchronization error detector. LPA is a method to approximate the phase response of the CPM by a linear phase response (or a truncated REC phase response), as given in Equation (18), where L is the length of the linear phase response. Note that L also stands for the truncation length of PT. LPA is similar to the PT method, and we use PT as one of the reduced-complexity methods for MLSD. Therefore, the number of phase states in the trellis is reduced to pM^{L−1}. For a 4-ary h = {4/16, 5/16} CPM, N_state becomes pM^{L−1} = 64, and the number of MFs becomes N_h M^L = 32. The signal s(t, α) using TPT and LPA can be rewritten accordingly. Substituting the phase response of Equation (18) into Equation (22), we obtain the correlative phase with LPA, and then the output of the matched filter and the joint log-likelihood function follow. The maximum-likelihood estimate takes the partial derivatives with respect to the phase error and timing error, and the resulting error signal expressions are Equations (31) and (32). In Equation (32), the term π b̂_{n−D}/(LT) is mainly related to the polarity of the estimated error. To further reduce the complexity of Equation (32), we use the sign of this term instead of the term itself, where sign(x) is the function extracting the sign of x; the resulting Equation (33), e^s_τ(n − D) = sign(π b̂_{n−D}/(LT)) e_ϕ(n − D) = sign(b̂_{n−D}) e_ϕ(n − D), can be implemented on the FPGA platform more efficiently than Equation (32). The iterative update expressions for the timing error and carrier error follow as before. Comparison between EL-Based and LPA-Based Synchronization Algorithms It can be seen from the comparison between the two timing error formulas, Equations (15) and (32), that the proposed timing error detector omits the derivative operation. The modified synchronization algorithm with LPA is shown in Figure 1b. Compared with the EL synchronization algorithm shown in Figure 1a, the EL MF banks are saved, and the number of MFs is reduced by 2/3. Based on the TPT, FPT with L = 2, and SSP with p-value phase state partition (p = 4), the trellis state number N_state is decreased from 512 to pM^{L−1} = 16, and the specific complexity comparison of the above two synchronizers for the three multi-h CPM schemes is
provided in Table 1. Note that the EL-based synchronizer requires 3 MF banks for the on-time-, early-, and late-MF paths, with 3N_h M^L = 96 MFs. Here, L = 2 brings lower performance loss compared with L = 1 [16]. Thus, L = 2 is also set for the LPA-based synchronization. Table 1 shows that, under the same 16-state trellis, the LPA-based estimator from Equation (32) requires no subtractor and only 1/3 of the MFs of the EL-based method, due to the removal of the derivative in Equation (15). Note that the LPA-based estimator of the timing error signal, with MN_state = 64 branches of the trellis, has 64 more multipliers than the EL-based estimator. To avoid the use of those multipliers, we propose the simplified LPA (SLPA) estimator of Equation (33) with the sign of the estimated b̂_n. It is much simpler than the other two estimators shown in Table 1. With a more complex trellis, the SLPA-based estimator can save even more MFs and multipliers. S-Curve of the LPA-Based Timing Error Detector The S-curve is used to identify the stable lock points of the error detector and to determine whether any false lock point exists. It is calculated as the mean of the error signals e(n); for the TED, S(τ) = E(e_τ(n) | δ), where δ = τ − τ̂ is the timing offset and E(·) denotes expectation. The slope of the S-curve at δ = 0 gives k_p. The phase error detector (PED) has no simplification compared with the common method [21]. Thus, we derive the S-curve of the timing error τ based on the LPA, following [26], together with the corresponding expression for Z_n. The modulated sequence α_n is generally selected as a long, random vector [26]. To simplify the analysis of the S-curve, we construct the complete set L of the state-vector group {l_n, l_n}, whose N = 2M^{L+L−1} elements can be enumerated easily. Thus, the S-curve is rewritten as Equation (39). The results of Equation (39) are calculated over the complete set of the state-vector group instead of the long, random sequence used in Equation (37). Note that the results of Equation (39) are more accurate than those of Equation (37). Hence, k_p is obtained as k_p = dS(τ)/dτ|_{τ=0}. We can also derive the S-curve for the SLPA detector, given by Equation (41), with the slope of the SLPA at τ = 0 denoted k_p^s. The k_p^s values for the three quaternary multi-h CPM schemes with modulation indices h = {4/16, 5/16}, h = {5/16, 6/16}, and h = {9/16, 10/16} are calculated by Equation (42), as shown in Table 2.
Figure 2 presents the S-curves for the decision-directed (DD) and data-aided (DA) timing error detectors based on the LPA of Equation (32) and the SLPA of Equation (33). The DD S-curve is simulated by the detector with a random sequence as the transmitted data, and the DA S-curve is calculated by Equation (39). Note that the modulated sequence from the set L is known, and such a curve is therefore called the DA S-curve. Three quaternary multi-h CPM schemes with modulation indices h = {4/16, 5/16}, h = {5/16, 6/16}, and h = {9/16, 10/16} are considered. The data-aided curves for the two TEDs are computed by Equations (39) and (41), respectively, shown as the solid lines in Figure 2. This reveals the correct time at which the timing error detection locks for the three multi-h CPM schemes, i.e., τ = 0. The dotted lines are the decision-directed timing error curves for the three groups of modulation indices, which are the means of the error signal estimates. It can be seen that the decision-directed curves lock onto the integer periodicity, so the multi-h CPM has a stable locking point for the LPA-based method. We also provide the S-curve of the original LPA-based TED for the h = {4/16, 5/16} case. It also has a stable lock point, and the k_p of the SLPA-based method is lower than that of the LPA-based method. From the data-aided SLPA S-curves of the three multi-h CPM schemes, it can be seen that the S-curves become narrower as the modulation indices increase, which also means a higher k_p^s. This is consistent with the results in Table 2.
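The decision-directed S-curve described above is simply the average detector output as a function of a fixed timing offset. Below is a small, generic Monte-Carlo sketch of that procedure; `simulate_ted_output(delta, n_symbols)` is a placeholder for a user-supplied simulation of the timing error detector at offset delta, not a function from the paper.

```python
import numpy as np

def estimate_s_curve(simulate_ted_output, delta_grid, n_symbols=4000, trials=20):
    """Monte-Carlo estimate of S(delta) = E[e_tau(n) | delta] for a TED."""
    S = np.zeros(len(delta_grid))
    for i, delta in enumerate(delta_grid):
        runs = [np.mean(simulate_ted_output(delta, n_symbols)) for _ in range(trials)]
        S[i] = np.mean(runs)
    return S

def detector_gain(S, delta_grid):
    """Slope k_p = dS/d(delta) at delta = 0, via a central difference."""
    i0 = int(np.argmin(np.abs(delta_grid)))
    return (S[i0 + 1] - S[i0 - 1]) / (delta_grid[i0 + 1] - delta_grid[i0 - 1])
```

The gain k_p obtained this way is what sizes the loop step γ = 4BT/k_p in the update equations.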
Multi-h CPM Transmitter Implementation In [33], the authors propose a single-h and multi-h CPM transmitter that can be reconfigured with a negligible increase in memory. It provides a better tradeoff between memory and DSP operations. However, the quantization noise from computing the accumulated phase increases when the modulation indices do not have an exact representation in a given fixed-point format, e.g., h = 1/3. To deal with this, modular arithmetic units are used to obtain accurate signal computation for the CPM transmitter in [34]. Here, the modulation indices can be represented exactly in a given fixed-point format. Thus, we use the method based on read-only memories (ROMs) to calculate the modulated phase (see [33]), which is composed of the correlative phase calculation and the cumulative phase calculation. Compared with the integration-based method, the ROM-based method brings lower quantization error at the cost of higher complexity. The increased implementation complexity of the modulator can be ignored, considering today's software-defined radio platforms. AWGN is generated based on the Box-Muller transform, which provides highly accurate noise samples [35]. The implementation details of the AWGN generator are presented in [36]. The main symbols are the following (Table 3): v_n, the symbol clock phase increment; r(kT_s), the received signal sampled with interval T_s, where k is the sampling index; s(kT_s), the modulated signal sampled with interval T_s; Z_{0,n}, the MF output vector with elements calculated by Equation (27) for the various branches; sign(b), the sign vector for the various branches calculated by Equations (24) and (34); and the timing error signal vector with SLPA from Equation (33). Multi-h CPM Receiver Implementation The received signal is sampled and returned to baseband by the digital down conversion (DDC) module. The baseband signal is fed to the MF banks, and the outputs are used by the Viterbi detector. The Viterbi algorithm detects the transmitted sequence and provides the surviving path index vector M_n and the global winning state index G_n. The proposed synchronization algorithm estimates the phase and timing error signals (e_ϕ(n), e_τ(n)) from the PED and TED utilizing the same matched filter outputs. Second-order loop filters are implemented to update the estimated timing and carrier phase. Finally, the synchronized local carrier signal is fed back to the DDC. Some of the primary blocks of the receiver are considered below, and efficient implementations of these blocks are described. Optimization details are discussed to achieve high throughput, as follows. Digital Down Conversion (DDC) The DDC unit is used to move the received intermediate frequency (IF) signal to baseband and consists of a direct digital synthesizer (DDS), a mixer, and two consecutive filters. The DDS outputs the synchronized carrier at the 70 MHz IF, which is updated by the estimated carrier phase φ̂_n. The mixed complex signal is filtered by two low-pass FIR filters to eliminate noise and interference.
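As a rough illustration of the DDC stage just described (DDS, complex mixing, low-pass FIR filtering), here is a floating-point sketch in Python; the filter length, cutoff, and function name are illustrative assumptions rather than the fixed-point FPGA design of the paper.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def ddc(x_if, fs, f_if=70e6, phase_est=0.0, num_taps=63, cutoff_hz=30e6):
    """Digital down conversion: DDS carrier, complex mixer, low-pass FIR."""
    n = np.arange(len(x_if))
    lo = np.exp(-1j * (2 * np.pi * f_if * n / fs + phase_est))  # DDS output at the IF
    mixed = x_if * lo                                           # shift the IF signal to baseband
    taps = firwin(num_taps, cutoff_hz, fs=fs)                   # low-pass FIR to remove image/noise
    return lfilter(taps, 1.0, mixed)
```

In the actual receiver, the phase_est input would be the loop-filtered carrier estimate φ̂_n fed back from the PED.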
Matched Filter (MF) Banks The MF banks are detailed in Figure 3. It can be seen that a total of 32 MFs are required for the odd- and even-interval modulation indices when using the reduced-complexity methods, i.e., TPT, FPT with L = 2, and SSP with 4-value phase state partition. Each MF unit is built according to Equation (27). The ROM stores the MF coefficients e^{−j(η_0(t,α)+φ_n)} for α = {α_{n−1}, α̂_n}, and the integral is implemented in discrete time by the complex multiplier and the integrate-and-dump (I and D) filter. Compared with an FIR-based MF, this reduces the use of complex multipliers and adders and brings higher throughput. The MF outputs are selected according to the trellis for the odd- and even-symbol intervals and are corrected by the tilted phase. State-Space Partitioning (SSP) Unit The SSP algorithm is a decision feedback scheme that reduces the trellis states according to the partitioning maps. We partition the cumulative phase states of the original trellis from p = 16 into p = 4. Thus, the branch metrics calculated by the MF banks have to be compensated by the estimated surviving phase φ̂_s(nT), which is obtained from the previous surviving phase φ̂_t((n − 1)T) and the modified partitioned phase θ_{α_n} = 2πh_n α̂_n with respect to the estimated α̂_n. Timing Error Detector The TED is implemented by the SLPA method of Equation (33) with the sign of the estimated b̂_n. We use the VA for the TED with traceback length D = 1 to calculate the timing error signal, whose inputs are the imaginary parts of the MF results. Here, the surviving path index vector M_n and the global winning state index G_n are reused from the VA of the sequence detection. The VA for the TED is implemented by two multiplexer banks. Each multiplexer bank is composed of 32 4-to-1 multiplexers. The surviving timing error signal for a phase state is selected through a multiplexer according to the 16-state trellis. The first multiplexer bank calculates the surviving timing error signals using the surviving path index vector M_{n−1} at the (n − 1)th symbol interval. These values are transmitted to the second multiplexer bank, and the surviving timing error signals are selected according to M_n. Finally, the estimated timing error signal e_τ(n − 1) is selected by the global winning state index G_n at the nth symbol interval, due to D = 1. Compared with the LPA-TED of Equation (32), this saves 4 × 16 real multipliers and retains the advantage of not using early- and late-MF banks. Timing Control Unit The symbol clock is generated using the principle of a numerically controlled oscillator without a ROM, which is also similar to a DDS. The clock rate is configured by the sum of the updated timing error τ̂_n and the fixed clock phase increment v_n. This sum is accumulated in a fixed word length (set to 32 in general). The highest bit of the accumulator output is the synchronized symbol clock t_c. Thus, the synchronized timing pulse can be calculated by a simple logic expression on t_c(k) and t_c(k − 1), where t_c(k − 1) is the symbol clock delayed by one sampling interval T_s.
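The following is a behavioral sketch of such a ROM-less NCO symbol clock; the 32-bit accumulator matches the word length mentioned above, while the edge-detection logic used to derive the timing pulse from t_c(k) and t_c(k − 1) is an assumption on my part, since the exact logic expression is not reproduced here.

```python
import numpy as np

def nco_symbol_clock(timing_corrections, base_increment, acc_bits=32):
    """ROM-less NCO: accumulate (v_n + tau_n) each sample; the accumulator MSB is
    the symbol clock t_c, and a change of that bit is taken as the timing pulse."""
    modulus = 1 << acc_bits
    acc, prev_msb = 0, 0
    pulses = np.zeros(len(timing_corrections), dtype=np.int8)
    for k, tau in enumerate(timing_corrections):
        acc = (acc + int(base_increment + tau)) % modulus
        msb = (acc >> (acc_bits - 1)) & 1          # symbol clock t_c(k)
        pulses[k] = 1 if msb != prev_msb else 0    # assumed edge detection on t_c(k), t_c(k-1)
        prev_msb = msb
    return pulses
```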
Simulation and Analysis To evaluate the performance of our proposed algorithm, the three commonly used quaternary multi-h CPM schemes with h = {4/16, 5/16}, h = {5/16, 6/16}, and h = {9/16, 10/16} are considered in the following tests. We first analyze the spectrum of the three CPM schemes. Then, we compare the mean square error (MSE) and BER performances for the CPM with h = {4/16, 5/16} between the proposed LPA-based method and the conventional EL-based method through numerical simulation, denoted as the floating-point simulation. The specifications of the simulation model are based on the implementation requirements, as follows. After performing the floating-point simulation, the receivers with the proposed synchronization algorithm for the three multi-h CPM schemes are implemented on a target platform equipped with a Xilinx FPGA Kintex-7 xc7k325tffg900-2. The design is synthesized by the Xilinx synthesis tool (XST). The implementation details are discussed in Section 3. The tested results are presented as the fixed-point simulation. Power Spectrum Performance Before the BER performance comparison, we first analyze the power spectral density (PSD) of the three promising multi-h CPM schemes to see their characteristics. The PSD is calculated by the method provided in [8], and the results are shown in Figure 4. We see that the lower-modulation-index CPM has substantial savings in spectral occupancy. However, it may bring a lower minimum squared distance, leading to worse BER performance [37]. This is verified in the following test. Note that the three CPM schemes have negligible differences in implementation complexity when using the same reduced-complexity techniques. It appears that the CPM with h = {5/16, 6/16} offers a better tradeoff between spectral efficiency and BER performance than the other two schemes. MSE and BER Performance Comparison We next discuss the MSEs of the proposed LPA-based methods. The timing error is estimated by Equation (36) with BT = 0.001. We use two reduced-complexity detectors. The first detector has a 16-state trellis, as implemented in Section 3, with TPT, FPT (L = 2), and SSP. The second one uses only TPT and FPT (L = 2) to further approach the optimal detector, with a 64-state trellis. The results are plotted in Figure 5 and compared with the modified Cramér-Rao bound (MCRB) [26], where L_0 = 1/(2BT). Our proposed estimators have negligible MSE loss compared with the EL-based estimator used in [26,32], since the same reduced-complexity methods are used to simplify the trellis. When the 64-state trellis detector is used, the proposed SLPA-based estimator achieves results close to the MCRB MSE, except at the larger values of E_b/N_0.
In Figure 6, we compare the BERs for h = {4/16, 5/16} between the proposed methods and the EL-based method used in [26,32] under the same trellis (using TPT, FPT with L = 2, and SSP with 4-value phase state partition). This shows that the LPA-based and EL-based methods have comparable BER performance. However, the LPA-based method has much lower complexity than the conventional EL-based method, as discussed later. The complexity of the SLPA-based method is lower than that of the LPA-based method, and they have almost the same BER performance. The BERs of the SLPA-based method in the floating-point simulation and in the fixed-point hardware tests are also plotted, and both of them are close to the BER curve of ideal synchronization (without timing and phase error). The theoretical BER bound for the multi-h CPM with h = {4/16, 5/16} is also calculated for comparison. Note that the BER of ideal synchronization has about 0.8 dB degradation relative to the theoretical BER bound of MLSD at a BER of 10^−5, which is consistent with the conclusion in [16]. Next, we show the onboard BER performances of the three multi-h CPM schemes. The LPA-based synchronizer is adopted to reduce the implementation complexity. The system is implemented according to Figure 3. The BERs of the three CPM schemes are plotted in Figure 7. It can be seen that all three CPM schemes have BER performances close to ideal detection without phase and timing errors. Note that the multi-h CPM with higher modulation indices has better BER performance. This is because a higher modulation index brings a larger minimum squared distance, which promises better BER performance [37]. However, the CPM with lower modulation indices has higher spectral efficiency, as shown in Figure 4. Thus, among the three tested multi-h CPM schemes, the CPM with intermediate-value modulation indices, h = {5/16, 6/16}, brings a better tradeoff between BER and spectral efficiency than the other two schemes. Implementation Complexity Comparison The overall multi-h CPM systems shown in Figure 3 for the three groups of modulation indices, h = {4/16, 5/16}, h = {5/16, 6/16}, and h = {9/16, 10/16}, are implemented on a platform with the Kintex-7 FPGA. We list the resource usage of the three systems and provide the complexity comparison between the LPA-based receiver and the EL-based receiver using TPT, FPT with L = 2, and SSP with 4-value phase state partition in Table 4. The transmitters for the three multi-h schemes consume almost the same FPGA resources, with data rates of 10 Mbps, 20 Mbps, and 50 Mbps. Under the same 16-state trellis (using the same TPT, FPT, and SSP), the proposed algorithm decreases the usage of MFs by 2/3 compared with the conventional EL-based receiver. This brings a reduction in FPGA resources, including slice lookup tables (LUTs), slice registers, DSPs, and block RAMs. For the h = {4/16, 5/16} CPM, compared with the conventional EL-based method, the usages of slice LUTs, slice registers, DSPs, and block RAMs are reduced by about 29%, 24%, 64%, and 70%, respectively. It consumes slightly more resources than the other two CPM receivers, h = {5/16, 6/16} and h = {9/16, 10/16}, with the proposed synchronizer. Due to its much-simplified structure, it is attractive for resource-constrained tasks and is also friendly to implementation and debugging in practical applications.
Figure 1.Comparison of synchronization algorithm: (a) Schematic diagram of conventional EL-based synchronization algorithm.(b) Schematic diagram of LPA-based synchronization algorithm. ), we use the sign of π bn−D L T instead of itself, and Equation (32) becomes e s τ (n − D) = sign π bn−D L T e ϕ (n − D) = sign bn−D e ϕ (n − D). Table 1 . Complexity comparison of the LPA-based, SLPA-based, and EL-based estimators. Table 3 . Table of symbols.
Solution of Optimal Harvesting Problem by Finite Difference Approximations of Size-Structured Population Model: We solve numerically a forest management optimization problem governed by a nonlinear partial differential equation (PDE), namely a size-structured population model. The formulated problem is supplemented with a natural constraint that the solution be non-negative. The PDE is approximated by an explicit or implicit-in-time finite difference scheme, whereas the cost function is taken from the outset in the finite-dimensional form used in practice. We prove the stability of the constructed nonlinear finite difference schemes on the set of non-negative vectors and the solvability of the formulated discrete optimal control problems. The gradient information is derived by constructing discrete adjoint state equations. The projected gradient method is used for finding the extremal points. The results of numerical testing for several real problems show good agreement with known results and confirm the theoretical statements. Introduction The well-posedness of the continuous size-structured model has been studied in several papers (e.g., [1][2][3][4]). In Ref. [1], the authors proved the local existence and uniqueness of a solution of the continuous model, where the birth and mortality functions depend on the total population. In Ref. [2], the authors established the local existence and uniqueness of a solution of the size-structured nonlinear population model, where the growth rate also depends on the total population. In the papers [3,4], the authors proved global existence and uniqueness of a solution of the continuous nonlinear population model, where all vital rates depend on the total population. The total population can be described by, e.g., the total number of individuals (e.g., [3]), total biomass (e.g., [5]), or basal area. A continuous nonlinear size-structured population model has been used in forest management optimization problems (e.g., [6][7][8]). In a continuous formulation, this nonlinear optimization problem cannot be solved by analytic methods. A natural approach is to solve the problem by approximating the continuous model by a discrete one and then solving the discrete optimization problem by iterative algorithms. In this paper, we focus on the development of finite difference schemes to approximate the solution of a continuous nonlinear population model. Efficient schemes are essential for solving optimal control problems or parameter estimation problems, as such problems require solving the model numerous times before an optimal solution is obtained. When the continuous population model is approximated by a finite difference scheme, it becomes a matrix population model [9]. In matrix models, trees are divided into classes with respect to their size, for instance, diameter. The matrix describes how the class division changes in one time step. Matrix population models have also been used for forest management optimization (e.g., [10,11]).
In optimization, the use of iterative algorithms is inevitable. Higher-order algorithms are usually sensitive to the regularity of the solution and therefore typically yield a convergence rate of only first order as soon as the compatibility conditions are not satisfied. Moreover, in practice, the vital rates are determined on a statistical basis, and the compatibility conditions required for high-order convergence are hardly valid with real-life data. This suggests that, in most cases, a first-order method should be the most adequate. Hence, it is desirable to have a robust scheme that can reproduce many useful qualitative and quantitative properties of the solutions of the differential problem but requires minimum regularity of the solution [12]. Unfortunately, one cannot derive an explicit formula for the optimal strategy, since the strategy, the state, and the costate are coupled into a complex system. The results at this stage may be regarded as an intermediate step toward real-world applications and serve as a starting point for numerical computations [13]. To our knowledge, a comprehensive theoretical investigation of the forest management optimization problem with a continuous nonlinear population model as the state equation is still lacking, and, in that sense, the problem is an open problem. Hence, in this work, the investigation of the problem in its differential form is omitted; instead, we consider the finite-dimensional counterpart of the problem, constructed by finite difference approximations of the state problem and by taking for the cost function a finite-dimensional form used in practice. We prove the stability of the constructed finite difference schemes on the set of non-negative solutions and the solvability of the optimization problem, and we deduce the gradient information necessary for iterative solution methods. We solve several applied problems, where different approximation schemes are used, and compare the computed results. The rest of the article is organized as follows. In Section 2, a mathematical model of the optimal harvesting problem for the size-structured forest is formulated. In Section 3, we construct and investigate two finite difference approximation schemes for a nonlinear boundary value problem that simulates the growth and harvesting of a forest. A gradient method for minimizing the cost function is constructed in Section 4. The theoretical details of this method are set out in Appendix A of the article. Section 5 is devoted to the numerical solution of a real-life problem and a comparative analysis of the computed results. Finally, in Section 6, we present a discussion.
Formulation of the Optimal Control Problem In order to formulate the mathematical model for the optimal harvesting problem for the size-structured forest, we define the following notation. In space, we denote by x ∈ Ω := (L_0, L] the thickness of a tree, where L_0 and L are the lower and upper bounds of the space domain, respectively. Moreover, t ∈ (0, T] is the time, where T is the upper limit. By Q, we denote the product space Ω × (0, T]. We denote by y(x, t) and h(x, t) the number of trees per unit area (state) and the number of removed trees per unit area (control), respectively. Now, the optimal harvesting problem, where the cost functional J(y, h) characterizes the net present value (NPV) of the ongoing rotation and d(x, t) is the discounted price function, is formulated as follows. Above, K = Y_ad × H_ad is the set of constraints for the state and the control, where Y_ad contains the non-negative solutions of the state problem and H_ad = {h | 0 ≤ h(x, t) ≤ h_max for all (x, t) ∈ Q, together with a profitability condition, involving the constant B, for each t ∈ (0, T]}. From the point of view of real-life problems, it is obvious that there exist constants h_max > 0 and B, which denote the upper limit for harvesting and the lower limit for making a thinning of trees profitable at a time event t; otherwise, the thinning is not done. Notice that the harvesting h depends on the state y (via the constraint sets), which is defined by the population model ∂y(x, t)/∂t + ∂(g(x, P(t)) y(x, t))/∂x = −m(x, P(t)) y(x, t) − h(x, t) in Q, with boundary condition g(0, P(t)) y(0, t) = 0 in (0, T] and initial condition y(x, 0) = y_0(x), where g(x, P(t)) is the growth rate, m(x, P(t)) the mortality rate, and y_0(x) ≥ 0 the initial diameter distribution of the trees. The growth and mortality rates depend on the diameter x of a tree and on the basal area P(t) of the forest stand, computed from the current diameter distribution y(·, t). In the case h = 0, the problem (4)-(6) is a particular case of the problem that has been investigated in [1][2][3][4]. In these articles, the existence of a non-negative continuous solution of this problem has been proved under some "natural" assumptions on the input data. They are: 1. g(x, P) is continuous and strictly positive for all x and P and continuously differentiable with respect to x; 2. m(x, P) is non-negative for all x and P and integrable in x; 3. g(x, P) and m(x, P) are Lipschitzian with respect to P; 4. sup_{x,P} m(x, P) < ∞. We also assume that these assumptions are satisfied. We use the growth rate g and mortality m in a bilinear form, with constants g_ij and m_ij such that g(x, P) > 0 and m(x, P) ≥ 0 for all x ∈ Ω and P ≥ 0. Obviously, because of the assumption g(0, P(t)) > 0, the boundary condition (5) reads y(0, t) = 0. The optimal harvesting problem has been investigated in [6][7][8]. The authors of these publications considered the case where the harvesting function has the form h(x, t) = c(x, t)y(x, t), where c(x, t) is the control. Thus, they investigated a coefficient identification problem, while we solve an optimal control problem with a distributed (right-hand side) control. Finite Difference Approximations In this chapter, we derive explicit and (semi-)implicit finite difference approximations of the state problem (4)-(6) and prove their stability estimates on non-negative solutions. The investigation of existence, uniqueness, and convergence of the approximations is beyond the scope of our article. For the size-structured population model with recruitment, the existence, uniqueness, and convergence of explicit approximations is investigated in [14] and of implicit approximations in [5,15]. The following notations are used throughout the paper: ∆t = T/M and ∆x = (L − L_0)/N denote the temporal and spatial mesh sizes, respectively. The non-overlapping mesh intervals are (t_{k−1}, t_k], k = 1, . .
., M, and the size classes are (x_{i−1}, x_i], i = 1, . . ., N. Let us denote by y_i^k and h_i^k the finite difference approximations of y(x_i, t_k) and h(x_i, t_k), respectively. Moreover, we denote by g_i^k := g(x_i, P^k) and m_i^k := m(x_i, P^k) the discrete values of the growth rate and mortality rate, respectively, in size class (x_{i−1}, x_i]. Explicit Approximation of the State Equation For all mesh points i = 1, . . ., N; k = 1, . . ., M, the explicit finite difference approximation of the size-structured population model (4)-(6) reads as scheme (7). Note that we use the so-called upwind approximation for the first-order derivative in space (variable x), using the positivity of the coefficient g(x, P) on the set of non-negative mesh functions y. The explicit scheme (7) can be written in form (8), where we denote by y^k and h^k the vectors of nodal values and by A^k the matrix of coefficients. Note that this scheme is just the forest growth model studied in [11]. Moreover, the numerical calculation of the next temporal state involves only matrix-vector calculations. The drawback of the explicit scheme is that the following stability condition (9) must be satisfied. Lemma 1. Let the condition ∆t sup_{x,t} g(x, P(t)) ≤ ∆x be satisfied. Then, on the set of non-negative mesh functions y, the finite difference scheme (7) is stable and estimate (10) holds. Proof. On non-negative mesh functions y, the coefficients g_i(P) are positive and m_i(P) ≥ 0. For mesh steps satisfying condition (9), the diagonal entries of the matrix A^k satisfy the corresponding inequality. Because of this inequality, we have an estimate for the ‖·‖_1-norm of the matrices, connected with the ‖·‖_1-norm of vectors, with C_m = sup_{x,t} m(x, P(t)). Due to this estimate and condition (9), we obtain from Equation (8) an inequality from which stability estimate (10) follows. Condition (9) means that the length of the time step ∆t and the width of the size class ∆x have to be chosen so that a tree cannot grow over one size class during one time step ∆t (compare with [16]). Using these notations, we rewrite the (semi-)implicit scheme (11) in the form of linear algebraic equations (12) with a matrix of nonlinear coefficients. Lemma 2. The finite difference scheme (11) is unconditionally stable on the set of non-negative mesh functions y: for any ∆t and ∆x the stability estimate (13) holds. Proof. By direct calculations, we obtain an equality from Equation (12), and from this equality we derive an inequality. Because of the positivity of the vectors y^k and y^{k−1}, the last inequality can be rewritten in a form from which stability estimate (13) follows. Notice that, contrary to the explicit scheme, the time step ∆t and class width ∆x have no mutual dependence; hence, the growth of a tree during a time step is not restricted to less than one size class. This characteristic of the implicit scheme is useful in the optimal harvesting problem governed by the models (4)-(6) or in a parameter identification problem, because such problems require solving the model many times before an optimal solution is obtained. Approximation of the Optimal Control Problem We denote by d_i^k := d(x_i, t_k) the discounted price for size class (x_{i−1}, x_i] at time t_k, and, approximating the cost function (1) by the right-hand Riemann sum, we get the discrete approximation (14) of the harvesting problem. Above, we denote K = Y_ad × H_ad, where Y_ad = {(y, h) | y ≥ 0, y is a solution of Equation (7) or (11)}. Moreover, y = (y^1, . . ., y^M) and h = (h^1, . . ., h^M).
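To illustrate the explicit upwind scheme (7) described above, here is a minimal numpy sketch of one time step under stated assumptions: backward (upwind) differencing of the transport term, a zero inflow boundary, and harvesting entering the right-hand side; the function name and array layout are illustrative, not taken from the paper.

```python
import numpy as np

def explicit_upwind_step(y, h, g, m, dx, dt):
    """One explicit time step of the size-structured model
    y_t + (g y)_x = -m y - h  with zero inflow at the smallest size class.
    y, h, g, m are arrays over the size classes; g and m are evaluated at the
    current basal area P^k. Stability requires dt * max(g) <= dx (Lemma 1)."""
    gy = g * y
    y_new = np.empty_like(y)
    y_new[0] = y[0] - dt / dx * gy[0] - dt * m[0] * y[0] - dt * h[0]
    y_new[1:] = (y[1:]
                 - dt / dx * (gy[1:] - gy[:-1])   # upwind difference, g > 0
                 - dt * m[1:] * y[1:]
                 - dt * h[1:])
    return y_new
```

Under the condition dt · sup g ≤ dx of Lemma 1, this update preserves non-negativity for h = 0, which is exactly the property used in the proof of Proposition 1 below.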
The following propositions show that the discrete optimal harvesting problem (14) has at least one solution in both cases, i.e., when the model (4)-(6) is approximated explicitly or implicitly. Proposition 1. Let the mesh steps ∆t and ∆x satisfy inequality (17). Then, Problem (14) has at least one solution if y satisfies Equation (7). Proof. The set K is non-empty. In fact, due to assumption (17), the solution y of the finite difference scheme (7) with y^0 ≥ 0 is non-negative if h = 0. This statement can be easily verified using form (8) of the difference scheme and noting that all entries a_i^k and b_i^k of the matrices A^k are non-negative. Obviously, assumption (9) follows from inequality (17), so stability estimate (10) holds. Since the vector h ∈ K is bounded, then, due to inequality (10), there exists a constant Y such that ‖y‖_1 ≤ Y, i.e., the set K is bounded. It is closed because of the continuity of the functions g(P) and m(P) with respect to P, while P is obviously continuous with respect to y. Thus, K is compact. Finally, the cost function J of Problem (14) is continuous, whence the existence of a solution to Problem (14) follows from Weierstrass's theorem. Proposition 2. Problem (14) has at least one solution if y satisfies Equation (11). Proof. The proof is very similar to the proof of Proposition 1. Namely, the set K is non-empty because, for h = 0, the solution y of the finite difference scheme (11) is non-negative for all ∆x and ∆t. Since h is bounded, then, due to stability estimate (13), y is also bounded, so the set K is bounded. It is closed because of the continuity of the functions g(P) and m(P) with respect to P, and the continuity of P with respect to y. Thus, K is compact. Finally, the cost function J of Problem (14) is continuous, whence the existence of a solution to Problem (14) follows. Remark 1. Since neither the function J is strictly concave nor the set K is strictly convex, the optimization problem can have a non-unique solution.
Realization of the Optimal Strategies In this section, a first-order method to approximate the optimal harvesting problem (14) is constructed. In real-life applications, the growth rate g and mortality rate m are determined on a statistical basis, and the compatibility conditions required for high-order methods can hardly be validated. Hence, a first-order method, which gives a robust scheme while requiring minimum regularity of the solution, should be the most adequate. First-order methods require computing Fréchet derivatives (the Jacobian matrix), which can be computationally expensive. However, when we consider the nonlinear optimization problem, only the gradient of the objective function is needed, and the gradient can be computed without the Fréchet derivatives. In this work, the adjoint approach developed in the 1970s in [17] is applied for the calculation of the functional gradient. The adjoint method has a great advantage over the direct method, because only one linear state problem, the so-called adjoint state, needs to be solved to obtain the gradient information. Today, it is a well-known method for computing the gradient of a functional with respect to model parameters when this functional depends on those model parameters through state variables, which are solutions of the state problem. However, this method is less well understood in the control of population models, and, as far as we know, no applications to distributed optimal control of harvesting are presented in the literature. Duality and adjoint equations are essential tools in studying the existence of the optimal pair (y, h); for a periodic age-dependent harvesting problem and for an age-spatially structured harvesting problem, they are applied for proving the existence of bang-bang controls in [18] and in [19,20], respectively. For the continuous size-structured harvesting problem, duality and adjoint equations are applied for proving the existence of the bang-bang control in [6,8]. In this work, we apply the Lagrange method and give a recipe to systematically define the adjoint state equations and the gradient information. We formulate the Lagrangian of the problem (14) with respect to the state constraint (15) only, and use the projection method for the control constraint (16). In the projection method, if the solution goes outside the constraint set (16), it is projected back onto it. Let us generalize and denote by A(y, h) = 0 the operator Equation (8) (or Equation (12)). Moreover, A^k(y, h) = 0 is the operator equation at time level k, k = 1, . . ., M. Suppose the functional J and the operator A are differentiable in the sense that the corresponding partial derivatives exist, where λ^k ∈ R^N. Now, for every feasible pair (y, h), A(y, h) = 0 holds, and, for any λ, the Lagrangian L(y, h, λ) = J(y, h) + (λ, A(y, h)) coincides with J(y, h); since λ does not depend on h, the derivative of J with respect to h can be written in terms of ∂J/∂h, ∂J/∂y, and ∂y/∂h. Above, one method to approximate ∂y/∂h is to compute N finite differences over the control variable h. However, each computation requires solving the equation A(y, h) = 0, and, for large N, this method is computationally expensive. In the adjoint method, we can avoid computing ∂y/∂h by solving the linear adjoint state equation only once. The theory of constrained optimization (see [21]) says that (y, h) is an optimal pair for problem (14) if (y, h, λ) is a saddle point of L. The derivatives of L with respect to y, h, and λ are ∂L(y, h, λ)/∂y, ∂L(y, h, λ)/∂h, and ∂L(y, h, λ)/∂λ, respectively. Partial derivatives of J(y, h) and A(y, h) are presented in Appendix A.
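As a sketch of the projection-based iteration described above, the following generic projected gradient loop shows the structure; grad_J is assumed to return the gradient obtained from one state solve plus one adjoint solve, project maps a candidate control back onto the admissible set (16), and the step size and stopping rule are illustrative choices rather than the paper's settings.

```python
import numpy as np

def projected_gradient(h0, grad_J, project, step=1.0, max_iter=200, tol=1e-6):
    """Maximize the discrete NPV J(h) over the admissible control set
    (equivalently, minimize -J) by projected gradient iterations."""
    h = project(h0)
    for _ in range(max_iter):
        g = grad_J(h)                      # one state solve + one adjoint solve
        h_next = project(h + step * g)     # gradient step followed by projection
        if np.linalg.norm(h_next - h) <= tol * (1.0 + np.linalg.norm(h)):
            return h_next
        h = h_next
    return h

# Example projection onto simple bound constraints 0 <= h <= h_max:
def make_box_projection(h_max):
    return lambda h: np.clip(h, 0.0, h_max)
```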
The gradient ∂J/∂h is used in the projected gradient method [22], which we applied to iterate toward a solution of the optimal harvesting problem. Numerical Example In this section, we study numerical examples of problem (14). We compared two cases, in which the state constraint (4) was approximated with the explicit approximation (7) and with the implicit approximation (11). As the discounted price for size class (x_{i−1}, x_i] at time t_k, we used the discounted sum of the pulpwood and sawlog revenues, where r is the interest rate, c_p and c_s are the prices of pulpwood and sawlog, respectively, and v_i^p and v_i^s are the volumes of pulpwood and sawlog of a tree in size class (x_{i−1}, x_i], respectively. In the optimizations, we used the following parameter values: price of pulpwood c_p = 16.56 € m⁻³ and of sawlog c_s = 58.44 € m⁻³, interest rate r = 3%, and lower bound for harvested trees B = 50 m³ ha⁻¹. The pulpwood and sawlog volumes v_i^p and v_i^s were taken from [10]. The optimization results of problem (14) are presented in Tables 1 and 2 and in Figures 1 and 2. Figure 2 shows the optimal management of problem (14) with the state equation approximated by the implicit scheme (11); numbers (e.g., 5-8, 11-14) represent diameter in centimetres. Table 1. Maximum net present values (MaxNPVs, i.e., optimal cost function values of problem (14)) and mean annual increments (MAI) associated with the optimal stand-level managements. Initial density is 1000 stems ha⁻¹. The results show that the maximum net present value (NPV) associated with the explicit approximation (7) was higher than the corresponding value for the implicit approximation (Table 1). When the class width or time step decreased, the maximum NPV increased in both cases. The difference in maximum NPVs between the two cases decreased when the class width or time step decreased; only when the time step decreased from three to two years did the difference in maximum NPVs increase. The difference was biggest (99 €) when the time step was ∆t = 5 years and the class width ∆x = 3 cm, and smallest (28 €) when the time step was ∆t = 3 years and the class width ∆x = 2 cm. With both approximations, three or four intermediate thinnings were made (Table 2). The number of thinnings increased when the time step and class width decreased. When the implicit approximation (11) was used, the first thinnings were made 1-2 time steps earlier, while the last few thinnings were made 0-2 time steps later than when the explicit approximation (7) was used. The thinning intensities were almost identical between the two approximations; if there was some difference, the intensity was usually bigger when the explicit approximation (7) was used (Table 2). The thinning pattern was quite similar in all optimal managements: in each thinning, more big trees than small ones were removed, indicating a thinning-from-above method (for different thinning types, see, e.g., [23], pp. 727, 733). Thinning from above has proven to be the best thinning type in stand-level optimizations of even-aged boreal forests (e.g., [24]). When the explicit approximation (7) was used, all trees from two or three of the biggest size classes were removed (Figure 1). On the other hand, when the implicit approximation (11) was applied, only part of the trees from those size classes were removed (Figure 2).
Discussion This study contributes to the existing literature on forest management by providing a theoretically sound framework to solve the nonlinear optimization problem of even-aged stands. We compared the results of forest management optimizations when the explicit and the implicit approximation of the forest growth model were used. The optimization results show that the differences between the approximations are diminutive. This was expected, as the solutions of both approximation schemes have been proved to converge to the solution of the continuous equation [5,14]. In the numerical examples, we used data from Scots pine (Pinus sylvestris L.) stands located in Northern Ostrobothnia, Finland, on a nutrient-poor soil type. The data were the same as in [11]. The difference is that, in [11], the data were fitted directly to the matrix model, whereas, in this study, we first fitted the data to the continuous model and then approximated it with a matrix model. In [11], the time step was five years and the class width 3 cm. The results are in line with each other. Both methods gave four thinnings in the optimal management, and thinning from above dominated as the thinning type. In [11], the optimal net present value was slightly higher and, in the optimal management, the thinnings were made slightly earlier than in this study. The optimal harvesting problem with a continuous size-structured population model was studied in [6][7][8]. In those papers, harvesting was defined as a proportion of removed trees. The maximum principle for the problem was proved in [6,8]. Moreover, in [7], the strong bang-bang principle was proved under some additional (but realistic) conditions. This means that the optimal solution has the structure in which all trees bigger than some certain size are removed. In our results, the solution of the optimization problem in which the state constraint was approximated with the explicit approximation was nearer to that structure, and the optimization results were slightly better in that case. However, when the explicit approximation is used, the time step and class width have to be chosen so that a tree cannot grow over one size class during one time step [16]; we proved that the explicit approximation scheme is stable only under this condition. For the implicit approximation scheme, we proved that it is unconditionally stable; thus, with the implicit approximation, the time step and class width can be chosen freely. In general, the explicit approximation of the population model is more commonly used as a forest growth model [9,16]. Next, we calculate the partial derivatives of the constraint function A(y, h). In both forms of A (constraints (8) or (12)), the partial derivative with respect to h is ∂A(y, h)/∂h = ∆t. Let us calculate the partial derivative of the constraint function (8) (the explicit approximation of the state Equation (4)) with respect to y, and denote by a_{ij}^k and b_{ij}^k the derivatives of the coefficients a_i^k and b_i^k with respect to y_j^k, defined in Equations (A2) and (A3), respectively. Thus, we can define ∂(λ, A(y, h))/∂y = (q_1, q_2, . . ., q_M), where the blocks q_k are built from these derivatives and 1_N is the N × N identity matrix.
5,618.2
2018-04-26T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Short-Term Annoyance Due to Night-Time Road, Railway, and Air Traffic Noise: Role of the Noise Source, the Acoustical Metric, and Non-Acoustical Factors Field studies on traffic noise-induced annoyance have predominantly used estimated outside noise levels. We intended to complement existing knowledge with exposure–response relationships that are based on precise indoor noise measurements. Acoustic recordings of nightly road traffic inside the bedrooms and annoyance ratings on the following morning were obtained from 40 suburban residents (mean age 29.1 years ± 11.7; 26 females). We derived exposure–response functions for the probability to be “annoyed at least a little” (%LA). Further analyses compared data from the current study with those from two earlier studies on railway and aircraft noise. Annoyance increased with the number of traffic events and the equivalent sound pressure level. The inclusion of non-acoustical factors (such as assessment of road transport) improved the prediction considerably. When comparing the different traffic noise sources, %LA was higher for road than for air traffic at a given LAeq,night, but higher for road and railway than for air traffic at a given number of noise events. Acoustical as well as non-acoustical factors impact short-term annoyance induced by road, railway, and air traffic. Annoyance varies across noise sources, which may be due to differences in acoustical characteristics or in the temporal noise distribution throughout the night. Introduction Environmental noise is a widespread and intrusive phenomenon of everyday life. It is often accompanied by perceived displeasure, irritation, and discomfort [1], and is associated with health risk, sleep disturbance, and annoyance [2]. In particular, annoyance over a longer time is regarded as an effect modifier of the relationship between noise and health risks [3]. Annoyance reactions are caused by repeated disturbances by noise combined with a person's emotional and cognitive response to the sound [4]. The assessment depends on the physical characteristics of a sound, e.g., intensity, frequency, and duration. Additionally, it is influenced by the individual's attitudes, judgements, personality traits, and the context in which it occurs [5]. These non-acoustical variables are the reason why the same sound might be perceived by one person as a pleasure and by another person as noise. Numerous earlier studies affirm and highlight, in addition to the noise event itself and its acoustical characteristics, the effect of non-acoustical variables on long-term annoyance [5][6][7]. With a growing transport network and increasing mobility of modern societies, it is not surprising that noise from road traffic, aircraft, and railway causes a large proportion of annoyance in the population. To describe the relationship between traffic noise and long-term annoyance, exposure-response curves as developed by Miedema and Oudshoorn [8] are widely accepted and are used for noise assessment by the European Commission [9]. The majority of studies focusing on the relationship between traffic noise and long-term annoyance are based on large-scale field surveys assessing the effect of noise exposure across a long period, for instance, the past 12 months (e.g., References [10,11]). On the one hand, large-scale field studies on community long-term annoyance enhance the validity and representative nature of the outcome, while on the other hand they lack precise acoustical data collection.
The long-term annoyance responses are usually related to estimated noise levels from noise maps or to measurements outside at the house façade. The lack of precise indoor measurements is a well-known shortcoming of these studies. Laboratory studies offer the opportunity to combine precise acoustical settings with short-term annoyance assessment, but it has been shown that the effect of noise is often overestimated in these settings [12]. Here, we therefore aimed at applying precise acoustical measurement techniques in an ecologically valid field setting. Road traffic is the main source of environmental noise in cities, and overall vehicle use increased by approximately 32% from 1991 to 2017 in Germany [13]. As a result, noise-free periods in urban areas are rare during the day and tend to diminish during night-time [14]. Since previous studies on road traffic have focused mainly on motorways with dense traffic, we intended to explore suburban areas with moderate traffic density at night. Previous studies have already shown that the A-weighted energy equivalent continuous sound pressure level, LAeq in dB, which averages the sound energy of all traffic noise events over a specified time [14], is an important acoustical predictor of traffic noise-induced annoyance [15][16][17]. It is a preferred parameter to describe fluctuating sound levels. This parameter considers the frequency, duration, and intensity of all noise events [18]. Recent studies have already shown that, besides the LAeq, the number of noise events is an important acoustical predictor of short-term annoyance due to railway and aircraft noise [16,19]. Therefore, we hypothesized that, for areas with moderate traffic density and due to its intermittent nature, the number of pass-bys may also be an important predictor of road traffic noise-induced short-term annoyance, in addition to the LAeq. Likewise, we expected non-acoustical variables to influence annoyance. Most of the studies mentioned above refer to long-term annoyance, which describes a general feeling that has evolved over a longer period, at least several weeks. As a primary effect, noise during the night can disrupt sleep, which may lead to noise complaints on the following day as a secondary reaction [20]. Porter et al. [21] describe those next-day effects as short-term annoyance that may be an accumulation of negative feelings due to a disturbed night. Throughout this paper, we refer to annoyance as a short-term reaction which describes the feeling after a night with noise exposure. According to Guski et al. [1] and Bartels et al. [22], it is unclear whether and how respondents integrate their short-term experience over time into a long-term retrospective statement. Porter et al. [21] have elucidated the difference between short-term annoyance as a next-day effect and long-term annoyance as a chronic effect of noise over weeks, months, or years. However, despite the different time dimensions, they claimed that both share the same causes and characteristics. Since the number of studies on short-term annoyance is very small and there is a clear research gap regarding field studies on annoyance induced by road traffic noise, we intended to compare our findings on short-term annoyance with those on long-term annoyance.
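Returning to the acoustical metric introduced above, the energy-equivalent level can be made concrete with a few lines of code. The sketch below (Python with numpy; the function name and the example values are illustrative assumptions) averages short-interval A-weighted levels on an energy basis, which is the standard definition of the LAeq.

```python
import numpy as np

def l_aeq(levels_db):
    """Energy-equivalent continuous level (dB) from short-interval A-weighted
    sound pressure levels, e.g., the 1-second values logged by a sound level
    meter: convert to relative energy, average, convert back to decibels."""
    levels_db = np.asarray(levels_db, dtype=float)
    return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))

# Two brief pass-bys dominate the result despite the otherwise quiet stretch.
print(l_aeq([22, 23, 24, 45, 47, 30, 22]))  # about 41 dB
```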
The current study was conducted within the framework of a large program project ("Transport Development and Environment") by the German Aerospace Center, the purpose of which was to develop an instrument to project the trends of passenger and commercial transport in Germany and of worldwide air traffic until the year 2030. Consequences for the environment and people were also aspects of the project. Within a subproject on noise propagation and effects, one work package examined the impact of acoustical parameters on objective sleep quality (data not shown here) and on short-term annoyance using precise measurements on site, i.e., in participants' homes. Our intention was to derive ecologically valid exposure-response relationships between road traffic noise and short-term annoyance. Furthermore, we explored the effect of the traffic noise source on short-term annoyance. To this end, we pooled datasets from the current study and two previous studies on air and railway traffic noise [16,17] and carried out additional analyses. The similar approach of these three studies allowed for a ranking of annoyance probabilities depending on the noise source. Residential Areas and Participants At the beginning of the study, we pre-selected appropriate residential areas with moderate traffic density at night in the vicinity of Cologne and Bonn in Germany. A moderate traffic density was defined such that noises were attributable to separate events during the night. We chose roads taking the following criteria into account: (1) we selected roads in suburban areas with a speed limit of 30 or 50 km/h, (2) we excluded roads with conditions that might interrupt the pass-bys of road vehicles or encourage the development of vehicle columns (e.g., traffic lights or roundabouts), (3) we chose roads with a surface made of conventional asphalt concrete in generally good condition, and (4) we ensured that road traffic was the dominant noise source (generally higher noise levels and a higher number of events compared to other noise sources such as railway, industry, aircraft, etc.) to avoid acoustical interference with the road traffic recordings and a high noise exposure due to multiple noise sources. We used Google Maps and Google Street View for the initial selection and verified it by local exploration and by acoustic measurements at residents' houses. The residents gave signed informed consent for the acoustic recording at their home. Road traffic load and its composition are provided as Supplementary Data (Table S1). Following this selection process, we called for participation through advertisements on placards that were posted in the respective streets and on approximately 11,000 leaflets that were distributed in the residents' mailboxes. Additionally, information about the study was given online (Facebook, DLR homepage) for initial applications, followed by a check via Google Maps or on site as to whether the respective roads complied with the inclusion criteria described above. After the residents applied for participation, they received a questionnaire that screened for good health, living conditions, and absence of sleep disturbances. We excluded all persons with characteristics that might influence the objective sleep physiology (e.g., small children in the household). As a second step, we recorded the road traffic noise in the candidates' bedroom in a test-night to identify all possible interfering noise (e.g., the sound of an air conditioner, loud snoring of the candidate or their partner).
If such noise would have impaired the evaluability of the acoustical data during the study and could not be prevented, the candidate was excluded. To ensure that they possessed normal hearing ability according to their age, we performed an audiometric screening (AD226b, Interacoustics). Here, too, we obtained signed informed consent for the test-night. From a total of 534 candidates, 42 individuals were appropriate to participate after the multi-stage selection process. The data of two persons were excluded due to the presence of periodic leg movements during sleep (PLMS). All participants had to have lived at their home address for at least 12 months (M = 7.9 years, SD = 6.3) so that insufficient adaptation would not influence the sleep or annoyance responses. Forty healthy adult individuals with normal hearing ability according to their age were enrolled to participate, including 26 women and 14 men (mean age = 29.1 years; SD = 11.7). Their age ranged from 18 to 61 years. The study was approved by the Ethics Committee of the Medical Association North Rhine. Prior to the study, all participants gave signed informed consent according to the Declaration of Helsinki [23]. They received remuneration for every day of participation in the study. Measurements of Noise Exposure Data collection took place during summer/autumn of 2015 and 2016. Since the number of pass-bys differs with respect to the weekday, measurements were carried out at participants' homes on five consecutive nights, including the weekend. The measurements included polysomnographic recordings at night (data not shown here) and assessments of annoyance in the morning. The first night served as adaptation to the unfamiliar setting. Participants had to keep a time in bed of 7 to 8 h duration. They were free to choose a bedtime between 22:00 and 23:30 h and a rising time between 6:30 and 8:00 h, fixed throughout the examination nights. They were also free to choose the position of the bedroom window (open, tilted, closed) they preferred but were requested not to change it during the night. Road traffic noise exposure was measured in 45.5% of the study nights with closed windows, in 46.8% with tilted windows, and in 7.7% with open windows (detailed information is given in Supplementary Table S1). The inside measurement of nocturnal road traffic noise was undertaken with Class-1 sound level meters (XL2 from NTi Audio). A microphone was placed inside the participants' bedroom close to the bed's headboard. The sound pressure level was recorded with A-weighting and fast time weighting at a one-second interval. The acoustical recordings started automatically 15 min before the fixed bedtime and stopped 15 min after the rising time in the morning. In addition, a traffic counter (viacount II, via traffic controlling) was mounted at the street lamppost closest to the respective bedroom window. The radar detector technology provided information about the number of road traffic events, velocity, length, height, vehicle class (i.e., truck, transporter, motorbike, and car), and time. All participants lived in houses that directly faced a street (maximum distance to the road approximately 22 m). We used the information from the traffic counter to pre-classify the recorded road traffic noise events with a specially developed acoustic program. The program marks the start and end of a road traffic noise event and determines the vehicle class. To ensure a valid and reliable scoring, the data were double-checked by human scorers.
The scorers listened to the acoustical recordings via headphones, checking whether the automatic pre-classification and markings were set correctly. If necessary, they revised or added the markings manually. To limit the analysis of the recording to the participants' time in bed, their exact bedtime and rising time were also marked manually. In addition to the LAeq,night, we calculated 13 further acoustical variables per participant and night. The number of nocturnal road traffic noise events was derived from the traffic counter data. Measurements of Subjective Responses On each of the five consecutive mornings immediately after getting up, the participants completed a paper and pencil questionnaire which included a question on annoyance referring to the previous night: "Thinking about the last night, when you are at home, how much did noise from road traffic bother, disturb, or annoy you?" We used a five-point standardized equidistant response scale of Likert type ("1 = not at all", "2 = slightly", "3 = moderately", "4 = very", and "5 = extremely"). This scale was developed by the International Commission on Biological Effects of Noise (ICBEN) to provide internationally comparable measurements of annoyance [24]. To assess the subjectively perceived noise load due to nocturnal road traffic using the same response scale, the questionnaire also asked, "Thinking about the last night, when you are at home, how intensely did you hear the road traffic noise?" Furthermore, the participants estimated their subjective sleep quality with six different 100 mm long visual analogue scales [25] that referred to difficulties in falling asleep (easy–difficult), calmness of sleep (calm–restless), sleep duration (too short–too long, inversely coded for analysis), restoration (high–low), sleep depth (sound–shallow), and body movements (little–much). The values were added up and subtracted from 60 (the maximum overall score). Low values represent a decreased and high values an increased sleep quality. To obtain further non-acoustical variables that might contribute to the development of annoyance responses, we used additional questionnaires at the beginning and at the end of the study. In this way, we gathered demographic data as well as psychological data that referred, among other things, to the participants' assessment of road transport (necessity, acceptability, eco-friendliness, economic importance: e.g., "Do you consider road transport in general as economically important?") and their general adaptation to chronic road traffic noise exposure ("How well can you adapt to road traffic noise in general?"). For the psychological parameters that were measured by Likert scales with several items (range 3 to 13), we calculated Cronbach's alpha. The internal consistency of the scale "assessment of road transport" improved appreciably after exclusion of the item "eco-friendliness". The values of the remaining three five-point answering scales (from "1 = not at all" to "5 = extremely") were added up, so that the final 'assessment of road transport' parameter ranged from 3 = negative to 15 = positive assessment. Cronbach's alphas of all scales ranged between 0.75 and 0.83 and were classified as acceptable to good. Statistical Analyses We excluded all adaptation nights as well as four additional nights due to the absence of annoyance data. Whereas the final dataset of the acoustical recordings consisted of 156 nights, the traffic counter data were reduced by 17 nights.
As a result, 41,872 road traffic noise events that occurred throughout participants' time in bed over 139 nights were taken into account. We analyzed the data with the statistical program IBM SPSS Statistics version 21 (IBM Corp., Armonk, New York). The distribution of the annoyance ratings across all nights was skewed toward the lower response categories. Only 1% (N = 2) of the ratings indicated extreme annoyance, 5% (N = 8) indicated strong annoyance, 12% (N = 18) indicated moderate annoyance, and 31% (N = 48) corresponded to the response option "slightly annoyed", while 51% (N = 80) corresponded to "not at all annoyed". Due to the small proportion of moderately, very, and extremely annoyed responses, the widely used annoyance descriptors "highly annoyed" (%HA) and "annoyed" (%A) recommended by the European Commission [26] could not be applied. These descriptors define the percentage of persons with responses exceeding a value of 72 (for %HA) and 50 (for %A) on a 0 to 100 scale, respectively. Therefore, a binary variable was generated differentiating between subjects who were not at all annoyed (N = 80, value 0) and subjects who were slightly to extremely annoyed (N = 76, value 1). This corresponded to a statistical approach established by Miedema and Oudshoorn [8], who used a general rule to translate scales with different numbers of response categories (e.g., a 5-point scale) into the 0 to 100 scale and suggested a cutoff at the value 28 to account for the percentage of people who were annoyed at least a little (%LA). We used Generalized Estimating Equations (GEE) with logistic regression to model the annoyance data. The GEE is an extension of Generalized Linear Models (GLM) and was established by Zeger and Liang [27]. It describes associations as well as interactions between the variables, determines the strengths of the effects, controls effects of confounding variables, and gives predictive estimates of the response variable at certain values of the explanatory variable [28]. "The estimated coefficients for the independent variables represent the slope (i.e., rate of change) of a function of the dependent variable per unit of change in the independent variable." [29] (p. 47). Another parameter that describes the strength of the association is the odds ratio (OR). This parameter reflects the odds that a response occurs due to a specific exposure compared to the odds that it occurs in the absence of the exposure. The OR takes the value 1 when there is no effect of the independent variable on the odds of the outcome variable. An odds ratio less than 1 is related to lower odds, and conversely, an odds ratio of more than 1 indicates higher odds of response [30]. Repeated measurements of a subject are usually correlated and can be handled as a cluster. The GEE provides the opportunity to deal with such data [31]. The correlation within a cluster should be taken into account to avoid biased standard error estimators [28]. Based on the assumption that the relationship between the acoustical and non-acoustical parameters and the binary annoyance variable is nonlinear, we used logistic regression. Logistic regression estimates the probability for binary outcomes. Pan [32] introduced a criterion valid for GEE, the quasi-likelihood under the independence model criterion (QIC), to choose the working correlation structure and to select the predictor variables. The smaller the value of this information criterion, the better the correlation structure and the model fit.
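To make the modelling pipeline described above concrete, the following sketch (Python with statsmodels; data, variable names, and coefficient values are purely hypothetical) derives the binary %LA outcome from 5-point ratings and fits a logistic GEE with nights clustered within subjects. An exchangeable working correlation is shown for simplicity; recent statsmodels releases also expose a qic() method on the fitted results, which mirrors the QIC-based selection used here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant-night.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "subject":    np.repeat(np.arange(10), 4),   # repeated nights per person
    "rating":     rng.integers(1, 6, 40),        # 5-point ICBEN annoyance rating
    "laeq_night": rng.normal(28, 5, 40),         # indoor level, dB
    "noise_load": rng.integers(1, 6, 40),        # subjectively perceived noise load
})
# Binary %LA outcome: 'not at all' (1) -> 0, 'slightly'...'extremely' (2-5) -> 1.
df["annoyed"] = (df["rating"] >= 2).astype(int)

# Logistic GEE with nights clustered within subjects.
model = smf.gee("annoyed ~ laeq_night + noise_load", groups="subject", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())
print("odds ratios:", np.exp(result.params))  # OR per unit change in a predictor
```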
Since the true correlation structure is unknown, we decided to choose the unstructured working correlation matrix according to the QIC. In a first step, a univariable analysis was performed for the binary annoyance variable and each acoustical and non-acoustical variable. Every non-acoustical variable with a significant correlation with the binary annoyance variable (p < 0.05) was carried forward to the multivariable analyses. This approach revealed the following non-acoustical candidates: (1) residential satisfaction, (2) general perception of loudness in the residential area, (3) subjective sleep quality, (4) general adaptation to chronic road traffic noise exposure, (5) assessment of road transport, (6) concerns about harmful effects of road transport, (7) subjectively perceived noise load in the previous night, (8) long-term annoyance due to road traffic noise in general, (9) long-term annoyance due to noise by passenger car transport, (10) long-term annoyance due to noise by heavy vehicles, (11) activity disturbances due to road traffic noise, and (12) sleep disturbances. In a second step, we calculated multivariable logistic regressions for the LAeq,night and the number of nocturnal road traffic noise events, each in a separate model, to analyze the impact of road traffic noise on short-term annoyance. We incrementally added the appropriate non-acoustical covariates in the multivariable models. We used the stepwise forward selection process and observed the QIC to select the appropriate predictors. Those covariates which still had a significant effect (p < 0.05) and decreased the QIC the most remained in the model. The inclusion process was completed only when any further variable missed significance and did not contribute to the model fit. All possible two-way interactions between the selected variables and non-linear transformations were tested. None improved the model fit. As a final step, a multiple regression analysis was used to test for multicollinearity. The resulting variance inflation factors (VIF) were < 5, which precludes any critical intercorrelation among the independent variables in the model [33]. The DLR Institute of Aerospace Medicine had previously conducted two field studies to investigate the effect of nocturnal railway [16] and aircraft noise [17] on the short-term annoyance of residents at their homes in the vicinity of Cologne and Bonn. The railway noise study [16] and the aircraft noise study [17] followed a comparable design. In both studies, the measurements were carried out on nine consecutive nights, including one adaptation night. We excluded the adaptation nights, as well as nights without acoustical recordings and questionnaire data. Although the psychological variables (i.e., traffic noise annoyance, subjective perception of noise load, general adaptation to chronic noise exposure of the traffic source, as well as perceived necessity, health hazard, and avoidability of the traffic source) of both studies were assessed on a response scale from "not" to "very", we regarded them as comparable with the response scale of the current study on road traffic ("not at all" to "extremely") due to their semantic relationship. The acoustical recordings were also measured inside the bedroom with the appropriate time weighting depending on the noise source: railway noise events with fast time weighting and aircraft noise with slow time weighting. The calculation of the LAeq,night in the air traffic study considered only those traffic noise events exceeding a maximum sound pressure level of 35 dB.
We combined the data of these two studies with those from the present study on road traffic (descriptive statistics are given in Table 1) and used the same statistical approach as described above to account for the probability to be annoyed at least a little (%LA). According to the QIC, we chose the exchangeable working correlation matrix for the GEE analyses. To derive exposure-response curves for every traffic noise source, the models contained a variable indicating the respective traffic noise source. Road traffic noise and railway noise were compared with aircraft noise, which served as a reference. Out of 11 congruent non-acoustical variables between the 3 studies, the following were significantly correlated (p < 0.05) with the binary annoyance variable and were considered in the stepwise forward selection process: subjective perception of noise load, general adaptation to chronic noise exposure of the traffic source, and health hazard. Descriptive Results Participants were exposed to an average of 297 passing vehicles per night, with an LAeq,night of approximately 28 dB, as measured inside the bedroom (Table 1). Figure 1 displays the hourly distribution of road traffic noise events during participants' time in bed. The number of recorded pass-bys within the morning hour (6:00-7:00 h) was reduced since participants were free to choose an individual time in bed (between 22:00 and 8:00 h and with a duration of 7 to 8 h), such that not all participants were still in bed after 7:00 h. Prediction of Short-Term Annoyance by Road Traffic Noise The univariable analyses showed a significant effect of the LAeq,night (p = 0.037, OR = 1.058, 95% CI (1.003, 1.116)), whereas all other acoustical metrics had no effect on the binary annoyance variable alone. According to the QIC, the best model fit was yielded by the number of nocturnal road traffic noise events (QIC = 192.414). The long-term annoyance by road traffic noise referring to the past 12 months correlated positively with the binary short-term annoyance variable (p = 0.004, OR = 2.502, 95% CI (1.337, 4.682)). Results for all 15 acoustical variables as well as the 12 non-acoustical candidates and their associations with the binary annoyance variable are shown as Supplementary Data (Tables S2 and S3). The probability to be annoyed at least a little increased with rising LAeq,night (see Table 2 for results of the GEE 1 model). Additionally, %LA rose with an increase in subjectively perceived noise load during the previous night, whereas a high subjective sleep quality was associated with reduced annoyance. The average LAeq,night inside the bedroom of approximately 28 dB led to 53 %LA, while the maximum recorded value of approximately 45 dB was associated with a 93% probability to be annoyed at least a little by road traffic (Figure 2). Figure 2. Exposure-response curve for the probability to be annoyed at least a little (%LA) due to the A-weighted energy equivalent sound level, LAeq,night, of nocturnal road traffic noise measured inside the bedroom. Other covariates were set at their respective median (subjectively perceived noise load = 2; subjective sleep quality = 38.8). The grey area represents the 95% confidence interval. As shown in GEE 2 (Table 2), an increase in the number of nocturnal noise events increased the probability to be annoyed at least a little (Figure 3). The model yielded the best data fit when participants' subjective assessment of road transport was added as a non-acoustical variable.
A positive assessment of road transport decreased the probability for subjects to be annoyed at least a little. Approximately 297 pass-bys per night resulted in 47 %LA. The selection process did not include subjective sleep quality and subjectively perceived noise load for model 2, possibly due to the fact that they each correlated with the number of noise events (subjective sleep quality: r = −0.307, p < 0.001; perceived noise load: r = 0.252, p = 0.003). The lower QIC value of GEE 1 indicated a better model fit due to the inclusion of the non-acoustical factors compared to GEE 2. Figure 3. Exposure-response curve for the probability to be annoyed at least a little (%LA) due to the number of nocturnal road traffic noise events, adjusted for the participants' subjective assessment of road transport (median = 9). The grey area represents the 95% confidence interval. Prediction of Short-Term Annoyance by Road, Railway, and Aircraft Noise The pooled dataset included 137 participants and 76,611 noise events. Participants were exposed to an average of approximately 67 trains per night, with an indoor LAeq,night of approximately 37 dB in the railway study (Table 1). In the aircraft noise study, an average of approximately 36 airplanes per night caused an LAeq,night of approximately 25 dB measured inside the bedroom. We visualized the stepwise forward selection process in Table 3 (GEE 3, GEE 4, GEE 5) and Table 4 (GEE 6, GEE 7) to explore the effect of the specific noise sources on the binary annoyance variable. In a simple model (GEE 3) including only the three traffic noise sources, the probability to be annoyed at least a little was highest for railway noise. The latter did not differ from road traffic noise (p = 0.925, OR = 1.034, 95% CI (0.519, 2.056)), but differed from aircraft noise, which was least annoying. In model GEE 4, the LAeq,night measured indoors was added as a factor (Table 3). The higher the LAeq,night, the higher the %LA for all traffic modes (non-significant interaction between LAeq,night and traffic noise sources). When controlling for the general adaptation to noise, the model resulted in road traffic, but not railway traffic, being more annoying than aircraft traffic (GEE 5, Figure 4). The number of nocturnal noise events proved to be a significant acoustical predictor (GEE 6, Table 4), and its inclusion in the intermediate model explained the differences in %LA between the traffic modes. The best model fit was achieved (GEE 7) when participants' general adaptation to noise was included as a factor (lower %LA with increasing degree of adaptation, irrespective of the traffic mode). Since an interaction between general adaptation and traffic noise source was not significant and impaired the model fit, it was not included. When controlling for the number of nocturnal noise events, the probability to be annoyed at least a little was higher for railway than for air traffic and higher for road than for air traffic, while road traffic did not differ from railway (p = 0.924, OR = 1.074, 95% CI (0.248, 4.647)). The percentage of at least a little annoyed residents as predicted by the number of nocturnal railway, road traffic, and aircraft noise events (model GEE 7) is shown in Figure 5. There was an apparent interaction between road traffic and the number of nocturnal noise events. As long as the number of nocturnal traffic noise events was below 79, road traffic noise caused more annoyance reactions than air traffic.
However, with an increasing number of nocturnal noise events, air traffic became more annoying than road traffic. The interaction between railway traffic and the number of events did not differ significantly from road traffic (p = 0.299, OR = 0.994, 95% CI (0.982, 1.006)) or air traffic. Effect of Road Traffic Noise on Short-Term Annoyance For better traceability, the main characteristics of our study and those used to back up the discussion and conclusions are summarized within the Supplementary Data (Table S4). The present study provides evidence that an increase in the LAeq,night from road traffic measured inside the home environment results in a higher percentage of residents who are annoyed at least a little. This corresponds to previous findings on long-term annoyance based on estimated noise levels outside at the house façade (i.e., Reference [8]). In addition, our results showed that the short-term annoyance probability (%LA) increased with the number of night-time vehicle pass-bys. Similarly, Jakovljevic et al. [15] have emphasized the number of noise events as an important characteristic of road traffic noise. They also highlighted the role of night-time road traffic noise exposure measured outside as a better predictor of high long-term annoyance levels in comparison to daytime exposure. Measurements inside the participants' bedroom enabled us to explore the effect of the noise levels reaching the ear. Locher et al. [34] emphasized the importance of indoor sound levels in health studies. They criticized that the application of a constant correction factor for indoor levels "… is a very coarse estimate and does not take into account specific conditions of the dwelling situation, window opening behavior, and building characteristics" (p. 2). They pointed out that health effects can be biased in every direction by misclassification of the noise exposure. Bartels et al. [22] derived individualized noise metrics from the outside recordings that considered the participant's whereabouts and window position to account for outdoor-to-indoor attenuation. They concluded that the individualized indoor metrics predict short-term annoyance ratings more precisely than the outdoor measurements. By measuring the actual indoor level in the present study, we were able to overcome these limitations and to minimize the probability of biased outcomes. Since the participants were free to choose the preferred window position (closed, tilted, open), the sound level inside may have been a consequence of their response to the traffic noise outside, which may have skewed the relationship between inside noise and annoyance. However, the window position did not significantly influence the binary annoyance variable in our study (data not shown). Based on the results by Müller [35], we assume that the outdoor noise level was attenuated by 13 dB(A) for open windows, 14 dB(A) for tilted windows, and 27 dB(A) for closed windows. Given the mean indoor LAeq of approximately 28 dB and that the windows were mainly closed or tilted in our study, the mean outdoor level might have been between 42 and 55 dB(A). The German night-time immission limits (LAeq) outside at the façade during construction or major changes to public roads are 49 dB(A) for pure residential areas and 54 dB(A) for residential areas mixed with commercial buildings [36]. Pennig et al. [16] and Quehl et al.
[19] showed a significant effect of the number of railway and aircraft noise events on short-term annoyance and argued that the effect of traffic noise should not be evaluated exclusively on the basis of the LAeq. Although the LAeq considers the number of events, it does not give detailed information about the traffic composition and the temporal pattern. Since a few noisy events can result in the same LAeq value as many less noisy events, it is possible that the different temporal patterns may result in different annoyance reactions. Like many other authors, Gjestland and Gelderblom [37] emphasized that noise-induced annoyance is not unambiguously described by a cumulative metric such as the LAeq. Their meta-analysis also showed a significant role of the number of aircraft movements, derived from the survey reports and airport data, in predicting long-term annoyance. At equal noise levels estimated for outside, annoyance increased with the number of events. The authors assumed that, in comparison to many noise events at a lower level, few but louder events might result in longer quiet inter-event intervals and thus lower annoyance [38]. Thus, the number of pass-bys as a separate acoustical predictor should also be taken into account for noise protection concepts and noise insulation. In our study, the unadjusted models that included only acoustical metrics and were calculated by univariable analysis revealed that the number of pass-bys yielded a better model fit than the LAeq,night did. Suburban road traffic is typically composed of short, isolated noise events during the night. In contrast, urban traffic is denser and is characterized by few noise-free intervals. This difference might explain why the number of events proved to be a more meaningful metric to predict annoyance in suburban residents. The consideration of non-acoustical predictors is important for the explanation of variance in annoyance among the participants. They might explain why some individuals felt annoyed even when the exposure was low, whereas others were not annoyed even though the noise load was high [22]. The current study corroborates the importance of non-acoustical factors. The predictors in our study, selected by the best model fit, differed depending on the acoustical parameter (LAeq,night vs. number of nocturnal events). Similar observations have been reported for aircraft noise-induced short-term annoyance [17]. The current study demonstrates an important association between sleep quality and short-term annoyance and supports the assumption of Porter et al. [21] that the accumulation of sleep disturbances leads to annoyance responses the next morning. Griefahn [39] has suggested that the subjective assessment of sleep quality is predominantly determined by the consciously perceived noise exposure and does not refer to physiological reactions while asleep. A study by Öhrström [40], which investigated the impact of road traffic noise measured outside on long-term annoyance, reported that sleep disturbances during the time before sleep onset and the time before getting up are more annoying than during other hours. In our study, most pass-bys of road traffic occurred at the beginning (23:00 h to 24:00 h) and near the end of participants' time in bed (5:00 h to 7:00 h). The morning hours exhibited a higher number of events without or with only short noise-free phases.
The reduced sleep pressure in the morning hours might result in an increased awakening probability and therefore lead to a conscious perception of the high noise exposure. Our data are not suitable to test this hypothesis. This could be examined systematically in a laboratory setting. In accordance with Öhrström [41] and Pennig et al. [16], who found a correlation between the number of noise events and self-reported sleep disturbances, we found a significant relationship between the number of noise events and subjective sleep quality. When road transport was assessed as important, necessary, and acceptable, %LA decreased in our study. This effect has also been reported in the context of railway noise-induced long-term annoyance [42]. Importantly, this non-acoustical factor holds the potential to be improved by the respective authorities and transport operators. In the case of railway and air transport, authorities and institutions could foster transparency and involve residents in their decisions, which might change residents' attitude towards the noise source and therefore the degree of annoyance. In contrast, given that many individuals who are exposed to road traffic noise are also car drivers (in particular in suburban areas as studied here), it may be more difficult to change an individual's attitude towards road transport. The subjective perception of the noise load turned out to have a large impact on short-term annoyance (OR = 44.6) in the current study. This supports the assumption of Lam et al. [43] that an individual's perception of loudness is a better predictor of long-term annoyance than an acoustical metric. Since the processing of sound depends on the subjective perception, this might explain why we and some other authors did not find significant effects on annoyance for the acoustical metrics alone, with the exception of the LAeq (i.e., Reference [16]). Nevertheless, it should be kept in mind that we measured a verbal expression of the residents' perception. Schreckenberg and Schuemer [44] reported two non-acoustical variables that contributed more to the prediction of long-term annoyance than the LAeq. Hence, they concluded that non-acoustical variables together with annoyance may reflect a general form of annoyance response to traffic noise. Comparison of Road, Railway, and Aircraft Noise Effects on Short-Term Annoyance Comparing the impact of the three major traffic noise sources (road, railway, and air traffic) on short-term annoyance, the probability to be annoyed at least a little varied not only depending on the respective noise source, but also depending on the acoustical metric. One might argue that the comparability to other traffic noise studies using "%HA" or "%A" might be limited. However, an analysis by Miedema and Oudshoorn [8], exploring different cut-off points for long-term annoyance scales (%LA, %A, and %HA), resulted in the same ranking of transportation modes irrespective of the cut-off point. Based on these findings, we assume that our results on %LA are comparable with those of studies on %HA or %A. While several studies have already shown that long-term annoyance responses differ by source [11,45], such source differences in short-term annoyance have only been reported by a few studies. The ranking of traffic modes regarding their impact on short-term annoyance in laboratory studies changed when the maximum sound pressure level was taken into account instead of the LAeq [46,47].
However, to our knowledge, the current study is the first to compare the effect of road traffic with that of railway and air traffic on short-term annoyance with respect to the number of noise events. Interestingly, when the number of events was low (< 79), %LA was higher for road traffic than for air traffic, whereas the reverse was observed with a higher number of events. The result that railway traffic did not differ significantly from air traffic when the effect of the number of nocturnal events on short-term annoyance was taken into account is in accordance with findings by Elmenhorst et al. [12]. The traffic modes showed a near-linear increase in %LA with increasing LAeq,night in the order road traffic > air traffic. Annoyance due to railway traffic was not significantly different from air traffic. These findings on short-term annoyance differ from observations on long-term annoyance by Brink et al. [11], who ranked road traffic < railway traffic < aircraft noise. Our findings also differ from the European standard curves, which ranked railway < road traffic < aircraft noise [8]. However, the acoustical parameters of these studies refer to the exposure measured or calculated by propagation models outside at the house façade. Furthermore, unlike these studies, whose exposure-response curves were based on separate models for each primary noise source, we derived the curves based on a pooled model, thus allowing for more direct comparisons. In a large German survey, road traffic was rated as the most annoying traffic noise source, followed by air and railway traffic [48]. However, again, comparisons with our study are difficult since the survey data related to a random sample of the general German population, irrespective of the respondents' noise exposure. Noise researchers have argued that the different acoustical properties of noise sources, such as temporal pattern, predictability, and frequency composition of noise events [49,50], might be responsible for differences in annoyance reactions. Versfeld and Vos [51] have demonstrated in a laboratory study that different spectral characteristics of vehicles caused by engine type and operating behavior affected short-term annoyance. While road traffic exhibited more variability (i.e., mopeds, cars, lorries), aircraft or railway noise varied less and occurred according to a more regular time pattern [50]. In our study, the temporal pattern of air traffic at Cologne-Bonn differed from that of road traffic, with most overflights occurring between 23:00 and 1:00 h and between 3:00 and 5:00 h [12]. In addition, air and railway traffic were characterized predominantly by single noise events with noise-free intervals in between, as opposed to the dense road traffic during the morning rush hour. The higher participants rated their general adaptation to noise, irrespective of the noise source, the lower their annoyance level was. This finding is in line with studies on railway noise by Pennig et al. [16]. The authors noted that the adaptation to railway noise did not reflect an incremental process over time, but rather the individual ability to cope with traffic noise. Limitations The burden of extensive noise exposure measurements in the present study limited the number of participants that we were able to include. Moreover, the road traffic in the chosen suburban areas induced only low levels of short-term annoyance in affected residents.
Nevertheless, we considered it relevant to address the question of noise effects in suburban areas (i) since a large proportion of the population lives in these areas and (ii) since research has predominantly focused on regions with high noise exposure. We confirmed previous studies that found a direct relation between short-term and long-term annoyance [22,44]. However, since studies on long-term annoyance generally use an averaged exposure that is calculated for day and night with a 10 dB penalty (Ldn) for the night-time, a direct comparison to our findings may be limited. Even though data collection procedures were largely identical for the studies on road, railway, and aircraft noise, some differences in the questionnaires limited the number of non-acoustical variables which were available for comparison. Slight differences also existed in calculating the LAeq,night: in the present study on road traffic and the earlier study on railway traffic, all events from the respective noise source contributed to the LAeq,night, whereas in the study on aircraft noise, only events exceeding the threshold of 35 dB were taken into account. This might have resulted in slightly higher LAeq,night values in the study on road traffic. While established acoustical parameters like the LAeq are quite stable in their correlation with annoyance, the impact of non-acoustical variables on annoyance can differ depending on the traffic noise source and on the examined population [52]. Therefore, our results should only cautiously be generalized to areas outside the Cologne and Bonn region. Conclusions The present study fills a gap by providing exposure-response relationships between nocturnal road traffic noise and short-term annoyance in suburban areas, taking into account the LAeq,night and the number of nocturnal events assessed by precise measurements inside the residents' homes. Furthermore, it provides, for the first time to our knowledge, a direct comparison of the road, railway, and air traffic noise effects on short-term annoyance. The results support the assumption that both the LAeq,night measured inside and the number of nocturnal events contribute to the development of annoyance reactions induced by different traffic noise sources. The data emphasize the significant role of non-acoustical variables for the prediction of short-term annoyance. Surprisingly, whether road, aircraft, or railway traffic was perceived as more annoying depended on the respective acoustical metric. Thus, caution is warranted when ranking traffic modes with respect to the degree of noise annoyance they cause. Since the examined sample is small and the moderate nocturnal road traffic of the investigated areas led to rather low annoyance levels, prospective investigations should be extended to a larger sample as well as include residential areas with dense road traffic.
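As a purely illustrative footnote to the exposure-response curves reported above, the short sketch below shows how a fitted logistic GEE model is turned into a %LA curve: the linear predictor is evaluated with the non-acoustical covariate fixed at its median and passed through the inverse logit. The coefficient values are hypothetical placeholders, not the values reported in the study's tables.

```python
import numpy as np

# Hypothetical coefficients (intercept, per noise event, per unit of the
# 'assessment of road transport' score); not the paper's fitted values.
b0, b_events, b_assess = -1.5, 0.004, -0.08
assess_median = 9  # covariate held at its median, as in the reported figures

def percent_la(n_events):
    """Predicted probability (in %) to be annoyed at least a little."""
    eta = b0 + b_events * n_events + b_assess * assess_median
    return 100.0 / (1.0 + np.exp(-eta))

for n in (50, 150, 297, 500):
    print(n, round(percent_la(n), 1))
```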
10,458
2021-04-27T00:00:00.000
[ "Physics" ]
Direct visualization of critical hydrogen atoms in a pyridoxal 5′-phosphate enzyme Enzymes dependent on pyridoxal 5′-phosphate (PLP, the active form of vitamin B6) perform a myriad of diverse chemical transformations. They promote various reactions by modulating the electronic states of PLP through weak interactions in the active site. Neutron crystallography has the unique ability of visualizing the nuclear positions of hydrogen atoms in macromolecules. Here we present a room-temperature neutron structure of a homodimeric PLP-dependent enzyme, aspartate aminotransferase, which was reacted in situ with α-methylaspartate. In one monomer, the PLP remained as an internal aldimine with a deprotonated Schiff base. In the second monomer, the external aldimine formed with the substrate analog. We observe a deuterium equidistant between the Schiff base and the C-terminal carboxylate of the substrate, a position indicative of a low-barrier hydrogen bond. Quantum chemical calculations and a low-pH room-temperature X-ray structure provide insight into the physical phenomena that control the electronic modulation in aspartate aminotransferase. Reviewer #2 (Remarks to the Author): This manuscript describes a detailed structure of an enzyme that is very complicated. Since all of the chemistry resides in the cofactor, the protonation states of the various ionizable parts of the cofactor are presumably very important for the determination of the type of reaction catalyzed. Consequently, this is an important structure to begin to make sense of these protonation states. However, the presentation of the results needs better interpretation taking into consideration what is known about this protein (pKa's, other structures of which there are many from X-ray data). For instance, I think a decision has to be made about what is presented here: models based on crystallographic and spectroscopic data, or interpretation based on biochemical intuition. They are not the same and may be inconsistent. Toward a better understanding of the active site a number of issues might be addressed. Figures should be made to illustrate the point being made. Figure 1: it is impossible to see anything. Is it possible to get the same view for both, and possibly a better one than that given? Figure S1: needs a figure legend. Figure 2: what is shown are not coordinates, they are models. 2B: there is no evidence for the proton from the amino acid being the one transferred to the Schiff base (see comment above). Figure S2: how were the directions of the water molecules determined? Are these waters observed in the X-ray derived model? Figure 3: what is shown are not coordinates, they are models. A and C should be in stereo. Figure 4: the result from the neutron structure is that the Schiff base is unprotonated, but in this figure it is shown as protonated. What is known about the protonation state of lysine (shown here as neutral)? Perhaps the figure could be labeled as a "suggested mechanism". Figure 5: what is shown are not coordinates, they are models. A great interpretation is given, but it needs clarification. There are several structures of AspATs from different organisms, in both the external and internal aldimine forms. Are there torsional differences seen in them also for the O3' to Schiff base configuration? Reviewer #3 (Remarks to the Author): This paper represents a substantial accomplishment: the first neutron diffraction crystal structure of a PLP-dependent enzyme.
PLP-dependent enzymes make up 4% of catalogued enzymes and catalyze a vast array of amino acid transformations; to truly understand this rich chemistry and the acid-base mechanism of these enzymes requires that hydrogen atoms be located. This paper is the first example of such an accomplishment, demonstrating direct visualization of critical hydrogen bonds in the active site(s) of aspartate aminotransferase using neutron diffraction. Such structures hold great promise for insight into the mechanism of action, and this work delivers on this promise. While a few of the results merely confirmed the protonation states inferred by earlier X-ray structures (still an accomplishment), several of the results were surprising and offer unique perspectives on the mechanism. These include the change in protonation states of H189, which is found to be neutral in the internal aldimine but becomes protonated in the external aldimine. The resultant extra positive charge is an additional counterbalance to the negative charge on Cα in the ensuing carbanion. As well, in the external aldimine, a deuterium was observed midway between the Schiff base nitrogen and the substrate carboxylate. This unique structure is similar to the equilibrium proposed in reference 8, and my only substantial scientific question on this work is how well the authors can differentiate between the proposed low-barrier hydrogen bond and tautomeric exchange (this seems like a question that solid-state NMR might help address at a later point). Otherwise, this paper is ready and should be accepted for publication. Reviewer #4 (Remarks to the Author): In the manuscript entitled 'Direct visualization of critical hydrogen atoms in a pyridoxal 5′-phosphate enzyme', Dajnowicz and co-workers report for the first time a neutron crystal structure of a PLP-dependent enzyme, an aspartate aminotransferase (AAT). Taking advantage of a particularity of the crystal, each of the two monomers in the asymmetric unit presents a different state of the AAT reaction: the internal and external aldimines. Hence, the neutron crystal structure reveals the protonation states of key residues in the active site, as well as of the Schiff base and other atoms of the aldimine. Unexpectedly, in the external aldimine, protonation of Nsb is observed, and not of O3' as previously supposed. The authors also report a low-pH (~4.0) X-ray structure, to assess the structural changes of the internal aldimine upon protonation of Nsb. Finally, DFT calculations have been used to understand the origin of the out-of-plane conformation of the Schiff base in the internal aldimine. Results presented suggest that such geometry originates from intramolecular electronic forces, and not from strain caused by the side chain of K258. While all these results are of capital importance to decipher the exact chemistry of such an important class of enzymes, the manuscript in its current form cannot be accepted without important comments being addressed. Major comments: In the Results part, the neutron structure is well described and there is no major comment on this section (see below for details). 1) Regarding the low-pH structure section, there is a striking difference between the angles you report for the external aldimine at pH 7.5 (-28 deg) and pH 4 (26 deg). The external aldimine is not the focus of the paper, but such a striking difference should be explained (even in Supplemental Information if space is limited). Would that come from the protonation of the substrate carboxylate?
2) For the DFT calculations, the justification for a truncated version of the PLP is the computational cost (in the Methods section). While legitimate for the phosphate group, adding the main chain atoms for the lysine does not seem that costly (N-Ca-C for example). Could you please comment on this point? 3) Also, the PLP pyridinium ring is Pi-stacked to a Trp residue. How would this interaction affect the orbitals of the conjugated PLP, and the conclusions of your calculations? 4) While you give numerical values for the hyperconjugative interactions (5.2 and 12.3 kcal/mol), none is given for lone-pair repulsion. Is it possible to calculate one? If so, please provide it to strengthen your statement that "favorable conjugative and hyperconjugative interactions offset the disruption of conjugation caused by lone pair repulsion". Please also provide numerical values when you state that "hyperconjugative interactions are significantly decreased in the protonated SB model". If increases/decreases of energies are discussed, please provide all values. 5) You also mention a rather small deviation between the (C3-C4-C4′-Nsb) angle in the pH 7.5 neutron structure (46 deg) and in the unconstrained optimized geometry of the internal aldimine (42 deg); how good is the agreement between the Lys258 chi angles in the neutron structure and the computed model? 6) Finally, at pH 4.0, with a protonated Nsb, the angle is 22 deg, compared to 0 deg in the computed model. The agreement is not as good as before, could you comment on this please? Is it possible that the out-of-plane configuration is maintained by the intramolecular electronic forces, but a completely planar configuration is impossible because of the lysine/active site constraints? While the first part of the discussion is well written, it should not end, in my opinion, with a summary of the study findings. Indeed, some of these findings have already been discussed earlier in the manuscript, and it is not necessary to mention them again. You could still follow how your findings apply to the transamination reaction, but with more in-depth discussion at each point. Comments regarding the discussion: -The low-barrier hydrogen bond seems to be expected in the PLP mechanism between Nsb and O3' of PLP; a reference should point to studies mentioning it at the end of the paragraph in which the authors state: "the LBHB would be expected to have formed between the Nsb and O3' of PLP" -The authors mention QM/MM calculations which investigated the tautomeric equilibrium between the Nsb and O3' in AAT, but do not discuss them. If this study is mentioned, some results or conclusions should be discussed in the light of the new findings of this manuscript. In the paragraph mentioning the NMR study on Trp synthase, the comparison is interesting as they suggest that the protonation of N1-PLP can modulate the type of chemistry. 2 comments on this paragraph: -Why mention the different exchange rates of the D atom between the AAT internal and external amides? It does not seem to add anything to the comparison. -Are there any clear differences between the respective active sites of the two enzymes that could account for the difference in protonation state? Minor comments: -To ease reading of this manuscript, please provide an additional Figure of the atom labeling and torsional angle convention for the PLP. Also, reporting O3' and Nsb in all chemical representations of PLP could be useful for the reader.
- You could mention that only a fraction of the deuterium atoms have been omitted for the Fo-Fc map calculation. Otherwise it can be confusing for the reader to have peaks only on some of the deuterium atoms represented in the figure.
- You mention PLP throughout the manuscript for both the internal and external aldimines, while using PLA for the external aldimine in the figures (without any information on the meaning of PLA).
- In the second Results paragraph, when you specify that pKa estimations of Nsb vary, please give a range of values.
- Figure 2B is not cited.
- At the end of the fourth paragraph in the Results section: you mention a stronger and less mobile H-bond of PLP but you do not specify the partner.
- Please rephrase the first sentence of the following paragraph: "The H bond network … changes protonation state".
- At the end of the Results section, the last sentence referring to the software used should be removed. A similar sentence is already in the Methods section. Different references have been used though; please correct.
- In the Low-pH X-ray structure paragraph - Figure

Reviewer #1 (Remarks to the Author):

The authors present new impressive insights into the aspartate aminotransferase system through detailed neutron and X-ray structural analysis, supported by quantum chemical calculations. All of the findings are well supported and of high interest to the biochemistry of the pyridoxal phosphate cofactor. I find particularly relevant the dismissal of the ground-state destabilisation hypothesis. Many works try to highlight strain effects in enzymes, pinning reactivity to destabilisation of reactants/cofactors without a thorough verification of such a hypothesis. Quantum chemical calculations are often needed to distinguish between steric hindrance and electronic effects. This is successfully done in this manuscript. Overall, I find the procedures and the discussion sound. The application of NBO for the analysis of the electronic structure, although I am often skeptical about the method, is well warranted in this case. It is a very significant work, which I believe will have a strong impact in the community. I support publication as is.

Reply: We greatly appreciate the Reviewer's enthusiasm toward our manuscript and the time and effort of the Reviewer.

Reviewer #2 (Remarks to the Author):

This manuscript describes a detailed structure of an enzyme that is very complicated. Since all of the chemistry resides in the cofactor, the protonation states of the various ionizable parts of the cofactor are presumably very important for the determination of the type of reaction catalyzed. Consequently, this is an important structure to begin to make sense of these protonation states. However, the presentation of the results needs better interpretation taking into consideration what is known about this protein (pKa's, other structures, of which there are many from X-ray data). For instance, I think a decision has to be made about what is presented here: models based on crystallographic and spectroscopic data, or interpretation based on biochemical intuition. They are not the same and may be inconsistent. Toward a better understanding of the active site a number of issues might be addressed.

Reply: Yes, we agree that models based on crystallographic and spectroscopic data, or interpretation based on biochemical intuition, are not the same. This comment is addressed in the manuscript. Specifically, we removed the biochemical intuition comments.

Figures should be made to illustrate the point being made. Figure 1: it is impossible to see anything.
Is it possible to get the same view for both, and possibly a better one than that given?

Reply: We have improved Figure 1 as suggested by the reviewer. Specifically, the blow-up images of the active sites (in sticks) are made larger to clearly show the important active site residues. In the external aldimine state the small domain is closed, and the view that we present is the best we can make. We have also updated the figure caption to refer to residues Arg292 and Arg386, which move into the active site of the external aldimine due to the small domain motion and are now in contact with the cofactor/substrate, but these two residues are farther away in the active site of the internal aldimine.

Reply: Yes, we agree with the reviewer. What is shown are the refined structural models based on the experimental diffraction data. We corrected the figure legend in the manuscript.

2B: there is no evidence for the proton from the amino acid being the one transferred to the Schiff base (see comment above).

Reply: Yes, there is no evidence for the proton from the amino acid being the one transferred to the Schiff base. However, we propose this pathway based on the neutron structure. This pathway is chemically reasonable and feasible. The figure caption now states, "(B) Michaelis complex and proposed substrate activation with the proton transferred during substrate binding shown in red. A direct or indirect proton transfer mechanism is plausible."

Figure S2: how were the directions of the water molecules determined? Are these waters observed in the X-ray derived model?

Reply: Yes, the waters were observed in the X-ray data, but the positions of the hydrogen atoms were not determined. The orientations of the D2O molecules were determined based on the nuclear scattering length density maps. Importantly, the reported neutron structure is refined using a joint X-ray/neutron refinement approach, in which one model containing all the atoms is refined concurrently against both datasets, as described in the Methods section. The Methods section also states: "All water molecules were refined as D2O. Initially, water oxygens were positioned using their electron density peaks, and then were shifted slightly in accordance with the nuclear scattering length density."

Figure 3: what is shown are not coordinates, they are models. A and C should be in stereo.

Reply: As is the case for Figure 2, we corrected the caption in the manuscript. The stereo view does not help the quality of the figure, and it would also take considerably more space in the manuscript. Also, our experience is that many readers are not able to visualize stereo images. Thus, we believe that stereo figures would not provide a significant improvement of the structure presentation.

Figure 4: the result from the neutron structure is that the Schiff base is unprotonated, but in this figure it is shown as protonated. What is known about the protonation state of lysine (shown here as neutral)? Perhaps the figure could be labeled as a "suggested mechanism".

Reply: The reviewer, perhaps, refers to the internal aldimine, where indeed the Schiff base is not protonated. To make Figure 4 not redundant with Figure 2, we started the reaction scheme from the external aldimine. In the external aldimine we observe protonation of the Schiff base in our neutron structure. We changed the caption to "Proposed mechanism".

Figure 5: what is shown are not coordinates, they are models. A great interpretation is given, but it needs clarification.
Reply: The caption has been changed to, "Figure 5. Extended hydrogen bond network near N1-PLP in the internal aldimine (A) and external aldimine (B). (C) Proposed Grotthüss proton hopping mechanism, leading to the protonation of H189 during the formation of the external aldimine."

There are several structures of AspATs from different organisms, in both the external and internal aldimine forms. Are there torsional differences seen in them also for the O3' to Schiff base configuration?

Reply: Yes, the manuscript now states, "The out-of-plane geometry for the SB observed in our neutron structure was also identified in other aminotransferase enzymes, with the torsion angle range of 43-96° (ref. 13)."

Reviewer #3 (Remarks to the Author):

This paper represents a substantial accomplishment - the first neutron diffraction crystal structure of a PLP-dependent enzyme. PLP-dependent enzymes make up 4% of catalogued enzymes and catalyze a vast array of amino acid transformations; to truly understand this rich chemistry and the acid-base mechanism of these enzymes requires that hydrogen atoms be located. This paper is the first example of such an accomplishment - demonstrating direct visualization of critical hydrogen bonds in the active site(s) of aspartate aminotransferase using neutron diffraction. Such structures hold great promise for insight into the mechanism of action and this work delivers on this promise. While a few of the results merely confirmed the protonation states inferred by earlier X-ray structures (still an accomplishment), several of the results were surprising and offer unique perspectives on the mechanism. These include the change in protonation states on H189, which is found to be neutral in the internal aldimine, but becomes protonated in the external aldimine. The resultant extra positive charge is an additional counterbalance to the negative charge on Cα in the ensuing carbanion. As well, in the external aldimine, a deuterium was observed midway between the Schiff base nitrogen and the substrate carboxylate. This unique structure is similar to the equilibrium proposed in reference 8, and my only substantial scientific question on this work is how well the authors can differentiate between the proposed low-barrier hydrogen bond and tautomeric exchange (this seems like a question that solid-state NMR might help address at a later point). Otherwise, this paper is ready and should be accepted for publication.

Reply: We greatly appreciate the Reviewer's interest and passion toward neutron crystallography. To answer the question concerning LBHBs and tautomerism, we emphasize that, indeed, neutron crystallography can differentiate between the two phenomena. If there was tautomeric exchange, let's say 50/50, then we would observe two peaks in the nuclear density map. One peak would correspond to the N-D tautomer and the second peak would correspond to the O-D tautomer. Thus, this situation essentially corresponds to a hydrogen with a small but significant energy barrier. In an LBHB, that energy barrier is about the same height as the vibrational energy levels for the hydrogen. Thus, the proton can move almost freely between the N and O atoms. Quantum mechanics then requires that the highest probability for the proton position is halfway between N and O. That is exactly what we observed in the neutron structure.
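To make the distinction drawn in this reply concrete, the short numerical sketch below (not part of the authors' analysis; the N…O separation, deuteron positions and positional spread are assumed, illustrative values) contrasts the one-dimensional nuclear density expected for a 50/50 tautomeric mixture, which shows two peaks, with that expected for an LBHB-like deuteron whose most probable position is midway between the donor and acceptor atoms.

```python
import numpy as np

# Illustrative sketch: expected one-dimensional nuclear density of a deuteron along an
# N...O axis taken here as 2.6 A long; all distances and the spread sigma are assumptions.
x = np.linspace(0.0, 2.6, 261)   # position along the N...O axis (Angstrom)
sigma = 0.15                     # assumed positional spread of the deuteron (Angstrom)

def gaussian(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# Case 1: 50/50 tautomeric exchange -> two distinct peaks near the N-D and O-D positions.
rho_tautomer = 0.5 * gaussian(x, 1.0, sigma) + 0.5 * gaussian(x, 1.6, sigma)

# Case 2: low-barrier hydrogen bond -> a single maximum roughly midway between N and O.
rho_lbhb = gaussian(x, 1.3, sigma)

print("tautomer mixture: density maximum at %.2f A" % x[np.argmax(rho_tautomer)])
print("LBHB-like case:   density maximum at %.2f A" % x[np.argmax(rho_lbhb)])
```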
Reviewer #4 (Remarks to the Author):

In the manuscript entitled 'Direct visualization of critical hydrogen atoms in a pyridoxal 5'-phosphate enzyme', Dajnowicz and co-workers report for the first time a neutron crystal structure of a PLP-dependent enzyme, an aspartate aminotransferase (AAT). Taking advantage of a particularity of the crystal, each of the two monomers in the asymmetric unit presents a different state of the AAT reaction: the internal and external aldimines. Hence, the neutron crystal structure reveals the protonation states of key residues in the active site, as well as of the Schiff base and other atoms of the aldimine. Unexpectedly, in the external aldimine, protonation of Nsb is observed, and not of O3' as previously supposed. The authors also report a low-pH (~4.0) X-ray structure, to assess the structural changes of the internal aldimine upon protonation of Nsb. Finally, DFT calculations have been used to understand the origin of the out-of-plane conformation of the Schiff base in the internal aldimine. The results presented suggest that such geometry originates from intramolecular electronic forces, and not from strain caused by the side chain of K258. While all these results are of capital importance for deciphering the exact chemistry of such an important class of enzymes, the manuscript in its current form cannot be accepted without important comments being addressed.

Major comments:

In the Results part, the neutron structure is well described and there is no major comment on this section (see below for details).

1) Regarding the low-pH structure section, there is a striking difference between the angles you report for the external aldimine at pH 7.5 (-28 deg) and pH 4 (26 deg). The external aldimine is not the focus of the paper, but such a striking difference should be explained (even in Supplemental Information if space is limited). Would that come from the protonation of the substrate carboxylate?

Reply: We have found and corrected two typographical errors in the Methods section for the low-pH X-ray structure. The pH 4 structure is that of the internal aldimine state only. To make this point clearer, the manuscript now states, "To probe structural changes that occur upon internal aldimine SB protonation, we obtained a low-pH X-ray structure of AAT in the internal aldimine state. In the other monomer of the pH 4.0 structure (chain A with restricted motion), the corresponding torsion angle and the (Y225)O…O(O3') distance are 26° and 3.0 Å, respectively."

2) For the DFT calculations, the justification for a truncated version of the PLP is the computational cost (in the Methods section). While legitimate for the phosphate group, adding the main chain atoms for the lysine does not seem that costly (N-Ca-C for example). Could you please comment on this point?

Reply: Yes, we agree with the reviewer that adding the main chain atoms will not significantly increase the computational cost. If the main chain atoms were added, we would need to fix their positions during the geometry optimizations to their corresponding crystallographic positions. Our intention, and the goal, was to run unconstrained geometry optimizations on the internal aldimine models to ensure that only electronic effects were probed; thus, we truncated the models at Cβ. We have indeed shown that intramolecular electronic effects have a very significant influence on the co-planarity of the PLP and SB.
In addition, adding the main chain atoms would not significantly influence the intramolecular conjugation, hyperconjugation, and lone pair repulsion effects in the PLP and SB, as the main chain atoms are separated from the SB by several CH2 groups. Still, when we computed these models with the main chain atoms added, only small (< 2.5 kcal/mol) deviations are observed for the orbital donor-acceptor interactions.

3) Also, the PLP pyridinium ring is pi-stacked to a Trp residue. How would this interaction affect the orbitals of the conjugated PLP, and the conclusions of your calculations?

Reply: Yes, it is possible that π-π stacking of PLP with Trp could attenuate the degree of conjugation to some extent. However, the main focus of the present study is on the role of cofactor protonation in directly influencing reactivity. We view protonation and intramolecular orbital interactions as first-order effects, whereas noncovalent interactions with nearby residues are important second-order effects. Evidence for this viewpoint is the reasonable reproduction of the crystallographic geometries in minimal models that lack π-π stacking from Trp140. The DFT calculations show that intramolecular orbital interactions between the Schiff base and pyridine ring contribute significantly to conjugation, which influences non-coplanarity.

4) While you give numerical values for the hyperconjugative interactions (5.2 and 12.3 kcal/mol), none is given for lone-pair repulsion. Is it possible to calculate one? If so, please provide it to strengthen your statement that "favorable conjugative and hyperconjugative interactions offsets the disruption of conjugation caused by lone pair repulsion". Please also provide numerical values when you state that "hyperconjugative interactions are significantly decreased in the protonated SB model". If increases/decreases of energies are discussed, please provide all values.

Reply: Unfortunately, the NBO program cannot compute the numerical values for lone-pair repulsion. The NBO program has been designed to compute numerical values for electron donor-acceptor interactions. We have now added in the text, "In the protonated SB model, hyperconjugative interactions between NSB-C4' and C3-C4 significantly decrease, to < 0.5 kcal/mol."

5) You also mention a rather small deviation between the (C3-C4-C4'-Nsb) angle in the pH 7.5 neutron structure (46 deg) and in the unconstrained optimized geometry of the internal aldimine (42 deg); how good is the agreement between the Lys258 chi angles in the neutron structure and the computed model?

Reply: We observed only minor deviations (~5 deg or less) for the chi angles in the DFT models. For instance, the experimental χ3 angle of Lys258 is 177 deg in the neutron structure and 179 deg in the DFT model. Thus, the agreement between computation and experiment is very good.

6) Finally, at pH 4.0, with a protonated Nsb, the angle is 22 deg, compared to 0 deg in the computed model. The agreement is not as good as before; could you comment on this please? Is it possible that the out-of-plane configuration is maintained by the intramolecular electronic forces, but a completely planar configuration is impossible because of the lysine / active site constraints?

Reply: Yes, it is possible that geometric restraints in the form of intermolecular interactions contribute to the out-of-plane geometry. However, the DFT calculations show that intramolecular orbital interactions between the Schiff base and phenolic oxygen also contribute.
There are parallels with the reviewer's question in comment 3 above. To make sure that this point is clear in the manuscript we added, "This finding is consistent with the reduced torsion angle of 22° in the pH 4.0 X-ray structure, in which we expect NSB to be protonated. The deviation in the torsion angle between the DFT model and the low-pH structure can be attributed to geometric restraints imposed by active site residues that are not present in the DFT models. For example, the π-π stacking interaction between the pyridine ring of PLP and W140 was excluded in the simplified model. Nevertheless, the intramolecular orbital interactions can be considered primary (first-order) effects, whereas noncovalent interactions with nearby residues are second-order."

While the first part of the discussion is well written, it should not end, in my opinion, with a summary of the study findings. Indeed, some of these findings have already been discussed earlier in the manuscript, and it is not necessary to mention them again. You could still follow how your findings apply to the transamination reaction, but with more in-depth discussion at each point.
Rational design of asymmetric red fluorescent probes for live cell imaging with high AIE effects and large two-photon absorption cross sections using tunable terminal groups

Donor-acceptor π-conjugated aggregation-induced red emission materials for live cell imaging.

Introduction

In view of their excellent photophysical properties and their thermal and chemical stability, fluorescent materials have been applied to bioimaging,1 chemosensing2 and organic light-emitting diodes (OLEDs).3 However, compared to the bright emission in the solution state, the luminescence of most fluorescent materials declines markedly in the solid state due to notorious aggregation-caused quenching (ACQ).4 In 2001, the novel phenomenon of aggregation-induced emission (AIE) was first found by Tang's group, in which the emission was very weak in solution but became intense in the aggregated state.5 This important finding has become a new method to tackle the ACQ of conventional chromophores and has shown significant academic value and promising applications in cell imaging,6 fluorescent sensors7 and bioprobe materials.8 However, traditional AIE luminophores, such as silole,9 tetraphenylethene (TPE)10 and tetra(naphthalen-2-yl)ethene (TNE),11 generally emit at shorter wavelengths. Among reported AIE materials, organic fluorescent probe materials with excellent red emission and excellent specific staining for cell imaging are still rather limited.12 Moreover, most one-photon absorption fluorescent probes are generally excited via UV or visible light, which limits their applications due to cell damage and the photobleaching phenomenon. Compared with traditional one-photon fluorescence imaging, two-photon fluorescence imaging has the advantage of generating high-energy visible fluorescence from low-energy irradiation in the near-infrared region, which draws great interest in the field of materials science due to various applications including biomedicine, biology and clinical detection.13 Therefore, a good two-photon fluorescent probe with a high fluorescence quantum yield in the solid state and a large two-photon absorption (2PA) cross-section is highly desired.14 Although intramolecular charge transfer (ICT) could afford large 2PA cross-sections, most 2PA chromophores exhibit weak fluorescence emission in the solid state, suffering from the ACQ effect and strong donor-acceptor interactions.15 Recently, some novel symmetric 2PA chromophores were reported which exhibited both large two-photon activities and bright emission in the solid state by incorporating AIE units.16 However, whether the spatial symmetry of 2PA chromophores could affect the application of two-photon fluorescence imaging still lacks systematic and detailed study. Therefore, to enrich the pool of excellent two-photon fluorescent probe materials, it is of great significance to synthesize novel red emissive AIE materials with an in-depth understanding of how to combine large 2PA cross-sections and high fluorescence quantum yields in the solid state at the same time. In this work, we design and synthesize a family of donor-acceptor (D-A) π-conjugated aggregation-induced red emission materials (TPABT, DTPABT, TPEBT and DTPEBT) with different spatial symmetries and different strengths of electron-donating terminal moieties (Chart 1).
They have the same diphenylamine terminal group and the same four-branched core, 2,2-(2,2-diphenylethene-1,1-diyl)dithiophene (DPDT), which shows stronger electron-donating ability and easier modification than the TPE unit.17 To modulate the donor-acceptor interaction and obtain red emission, TPE- or triphenylamine-monofunctionalized benzo[c][1,2,5]thiadiazole (BT) was employed as the other terminal group. Interestingly, TPABT and TPEBT, which have asymmetric structures, give obviously higher solid-state fluorescence quantum efficiencies in comparison with those of the corresponding symmetric structures, DTPABT and DTPEBT, respectively. Moreover, TPEBT and DTPEBT with TPE groups showed very large two-photon absorption cross-sections (δ) of 1.75 × 10^3 GM and 1.94 × 10^3 GM at 780 nm, respectively, which are obviously higher than those of the other two red fluorescent materials with triphenylamine groups. One-photon and two-photon fluorescence imaging of MCF-7 breast cancer cells and HeLa cells, and cytotoxicity experiments, were carried out with these red fluorescent probes. Intense intracellular red fluorescence was observed for all the molecules by one-photon excitation and for TPABT by two-photon excitation in the cell cytoplasm. Finally, in vivo imaging experiments with TPEBT are demonstrated for blood vascular imaging in the brain of a living mouse.

Results and discussion

The synthetic routes to the red AIE materials TPABT, DTPABT, TPEBT and DTPEBT are presented in Scheme 1. The dibromide intermediate 1 was prepared through the Corey-Fuchs reaction. Then 2 was obtained through the Suzuki coupling reaction between the dibromide 1 and (4-(diphenylamino)phenyl)boronic acid as a yellow solid in 65% isolated yield. Then 2 was lithiated by n-BuLi followed by quenching with trimethyltin chloride to afford a mixture of monotin 3 and ditin reagent 4 that was used directly in the next step without any further purification. 5 was prepared through the Suzuki reaction between (4-(diphenylamino)phenyl)boronic acid and 4,7-dibromobenzo[c][1,2,5]thiadiazole as an orange solid.18 Similarly, 6 was synthesized from (4-(1,2,2-triphenylvinyl)phenyl)boronic acid and 4,7-dibromobenzo[c][1,2,5]thiadiazole as a yellow solid.19 Finally, TPABT and DTPABT were obtained through a Stille coupling reaction between 5 and the mixture of 3 and 4 as red solids in 45% and 42% isolated yields, respectively. Similarly, TPEBT and DTPEBT were obtained from 6 and the mixture of 3 and 4 as red solids in 28% and 59% isolated yields, respectively. All compounds were purified by silica gel column chromatography, and their structures and purity were verified by 1H NMR, 13C NMR and HR-ESI-MS. These materials all exhibit good solubility in common organic solvents such as CHCl3, THF and toluene owing to their branched scaffold. The thermal properties of the four molecules were investigated by thermogravimetric analysis (TGA) (Fig. 1). Under a N2 atmosphere, the onset temperature for 5% weight loss by TGA was over 440 °C for all the red AIE materials, which demonstrates that the thermal stability of these molecules is sufficient for photoelectric device applications. The absorption and emission spectra of these red AIE molecules in dilute THF solution were recorded, and the absorption spectra are shown in Fig. 2A.
All the molecules show two distinct absorption bands (band I: 270-430 nm; band II: 440-600 nm) in solution, due to the π-π* transition of the conjugated backbone for band I and the intramolecular charge transfer (ICT) between the molecular donor and acceptor units for band II. In other words, these red AIE molecules are typical donor-acceptor (D-A) systems. More interestingly, TPABT displayed λmax at 502 nm along with an obvious increase in absorbance intensity, which showed a red-shift of 12 nm in comparison with TPEBT due to the attachment of the stronger donor terminal group (TPA). The same situation also appears for DTPABT and DTPEBT. Then, as shown in Fig. 2B and Table 1, the emission maxima for TPABT and DTPABT are located at 643 nm and 626 nm, respectively. Compared with that of TPABT, the blue-shifted maximum emission peak of DTPABT is possibly due to steric effects. In addition, the emission peaks of TPABT and TPEBT in solution are located at 643 and 600 nm, respectively. These results indicate that most of the molecules have great potential as high-performance red fluorescent materials. The respective frontier orbital distributions for these red AIE structures based on the DFT B3LYP/6-31G(d) method in the gas phase are presented in Fig. 3 and S1.† The LUMOs of all the AIE structures show a strong contribution at the electron-accepting aromatic moieties (benzothiadiazole). The HOMOs are well delocalized along the whole backbone for TPABT and DTPABT. In contrast, for TPEBT and DTPEBT, a slightly stronger localization of the HOMOs around the central core and the diphenylamine terminal group is observed. Moreover, DTPABT presents higher-lying calculated HOMO and LUMO levels and slightly lower bandgaps compared to those of DTPEBT due to the relatively stronger ICT effect of DTPABT (see Table S1†). These results are consistent with the obvious red-shift of the emission peak of DTPABT in comparison to that of DTPEBT. To quantitatively evaluate the AIE effect of the fluorescent materials, the emission spectra and fluorescence quantum efficiencies (ΦF) of the thin films and of THF solutions were measured using an integrating sphere method. The ΦF,f values were determined to be 10.1%, 4.2%, 38.2% and 2.3% for the TPABT, DTPABT, TPEBT and DTPEBT thin films. We also investigated the ΦF,s of TPABT, DTPABT, TPEBT and DTPEBT in THF; the values were determined to be 9.3%, 4.0%, 1.8% and 1.3%, respectively. It is interesting to note that TPEBT, with an asymmetric structure and TPE as a terminal group, exhibited the highest ΦF,f of 38.2%, which might be related to the introduction of TPE hindering the molecular internal rotation. However, a sharply lower ΦF,f (2.3%) was measured for the film of DTPEBT. Compared with the similar fluorescence quantum efficiencies of TPEBT and DTPEBT in solution, the much lower fluorescence quantum efficiency for the DTPEBT thin film is probably associated with the symmetric conformation of DTPEBT, which leads to much more efficient interchain interactions in the solid state. The AIE effect is defined as αAIE = ΦF,f/ΦF,s and can be used to evaluate the emission contrast ratio between the solid state and the solution state. We should note that TPEBT showed the biggest αAIE of 21.

Scheme 1. Synthesis of the red AIE molecules TPABT, DTPABT, TPEBT, and DTPEBT.

Meanwhile, TPABT and DTPABT with TPA
terminal groups exhibited higher fluorescence quantum efficiencies than the other two materials in solution, but they showed a negligible AIE effect from solution to the solid state. These results indicate that spatial symmetry and the introduction of TPE or TPA groups have a significant influence on the fluorescence properties in the solution and solid states. For an in-depth understanding of the AIE effect, we measured the emission spectra of the four molecules in THF and in THF-water mixtures with different water fractions (fw) to study the emission change (Fig. 4 and S2†). To our surprise, the emission changes for these four molecules were similar. For example, TPEBT showed a nearly sustained decrease until fw reached 60%, accompanied by a red-shifting of the emission peaks by about 60 nm. This phenomenon could be reasonably explained by an intramolecular charge-transfer (ICT) mechanism. Once fw was increased beyond 60%, there was obviously enhanced emission at about 630 nm, demonstrating AIE activity. In this stage, TPEBT began to aggregate because the solvating power of the aqueous mixture decreased, enhancing the emission due to the restriction of intramolecular rotation (RIR) effect. Meanwhile, the ICT effect is efficiently weakened. TPABT showed similar emission spectra to those of TPEBT. However, the emission band of TPABT in the solvent mixture with fw = 60% was red-shifted by only about 20 nm compared with that in pure THF solution. That may be because TPABT has weaker ICT. The AIE effect is easy to see in Fig. 4C. The AIE behavior of TPEBT is the best, due to the introduction of the strong AIE luminophore (TPE). Interestingly, the AIE effects of TPABT and DTPABT seem to be weak with the introduction of TPA moieties, because more TPA groups might generate a larger steric hindrance, in which the rotation of the molecules could not be restricted efficiently after aggregation. To our surprise, DTPEBT has the worst AIE property. That may be the joint effect of its symmetric conformation and stronger intramolecular charge transfer. This trend is the same as in the absolute quantum yield (ΦF) measurements. Owing to the strong red fluorescence of these materials, they were used as one-photon fluorescent probes for live cell imaging. Human breast cancer cells (MCF-7 cells) and HeLa cells were chosen as the model cell lines for the fluorescence imaging study by confocal laser scanning microscopy (CLSM). The cells were incubated for 3 h in a culture medium containing 20 μM of these molecules, and then were used for one-photon confocal imaging. Fig. 5, 6, S3 and S4† show the images of the cells, taken under 405 nm excitation with a 560-660 nm band pass filter. Intense intracellular red fluorescence was observed from all the molecules in the cell cytoplasm. This indicates that these molecules can be used as a specific stain for cell imaging.20 The cytotoxicity of the four materials was evaluated through the investigation of the metabolic viability of human breast cancer cells (MCF-7 cells) and HeLa cells in Fig. 7 and S5.† The result turns out to be satisfying, showing that all the materials are nontoxic, as the stained cells are in good health. The cell viability remains above 85% within 36 h and at 40 μM under the experimental conditions, indicating the low cytotoxicity of the molecules. Therefore, these AIE molecules are expected to be promising candidates for cell imaging, with advantages such as AIE characteristics, intense red emission and excellent biocompatibility.
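As a quick check of the AIE contrast discussed above, the ratio αAIE = ΦF,f/ΦF,s can be computed directly from the film and solution quantum yields reported in the text; the short Python sketch below reproduces, for example, the value of about 21 quoted for TPEBT.

```python
# Minimal sketch: alpha_AIE = Phi_F,film / Phi_F,solution, using the quantum yields
# (in percent) reported above for the four compounds.
phi_film = {"TPABT": 10.1, "DTPABT": 4.2, "TPEBT": 38.2, "DTPEBT": 2.3}
phi_soln = {"TPABT": 9.3, "DTPABT": 4.0, "TPEBT": 1.8, "DTPEBT": 1.3}

for name in phi_film:
    alpha_aie = phi_film[name] / phi_soln[name]
    print(f"{name}: alpha_AIE = {alpha_aie:.1f}")   # TPEBT gives ~21, the largest value
```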
The two-photon absorption (2PA) spectra of the molecules were studied using a two-photon-excited fluorescence (TPEF) technique with a femtosecond pulsed laser source. According to the laser availability, the spectra were collected in the 740 nm to 840 nm range at 20 nm intervals. The relative TPEF intensities of the molecules were measured using 2',7'-dichlorofluorescein in THF as the standard. The 2PA cross-sections (δ) of the molecules were measured in the wavelength range from 740-840 nm, as shown in Fig. 8. The maximum δ is 0.39 × 10^3 GM at 800 nm for TPABT, 0.70 × 10^3 GM at 800 nm for DTPABT, 1.75 × 10^3 GM at 780 nm for TPEBT and 1.94 × 10^3 GM at 780 nm for DTPEBT. Notably, DTPABT and DTPEBT with their symmetric conformations showed obviously larger 2PA cross-sections than those of TPABT and TPEBT. Such results indicate that an extended π-system could enhance the 2PA cross-section of the whole molecule. Moreover, very interestingly, the 2PA cross-section of DTPEBT is much enhanced in comparison with that of DTPABT due to the extended π-conjugated length from the double bonds. Such results indicate that incorporating TPE as a terminal group is an effective approach for enhancing the 2PA cross-section of a molecule.21 Fig. S6† shows a comparison of the one-photon excited fluorescence (OPEF) spectra with the TPEF spectra of TPABT, DTPABT, TPEBT and DTPEBT excited at 430 and 780 nm in THF solution at 10^-5 M. The TPEF spectra resemble the OPEF spectra except for a red-shift, suggesting a volume reabsorption effect of the fluorescence within the solution, and different spectral selection rules.22 Overall, the 2PA cross-sections of our red emission AIE materials are impressive and can be modulated by different spatial symmetries and the strength of the electron-donating terminal moieties, which encourages us to further explore their potential application in two-photon fluorescence imaging. To evaluate the effect of these molecules in living cells, two-photon fluorescence microscopy images were obtained from MCF-7 cells incubated with TPABT and TPEBT, owing to their lower cytotoxicity toward live cells and two-photon behavior, together with their higher fluorescence quantum efficiencies. To minimize the side effects of the organic solvent toward live cells, each molecule was dissolved in DMSO. To our surprise, the two-photon excited fluorescence images (Fig. 9 and S7†) of the TPABT live cells were successfully taken and clearly displayed the cytoplasm structure. Nevertheless, the TPEBT live cell images were unsuccessful and seldom displayed the live cells, owing to the weak 2PA of TPEBT at such long wavelengths.23 In view of this, the red-fluorescent TPABT molecule is more advantageous than TPEBT for two-photon excited fluorescence imaging. In order to study the effect of these molecules in an animal model, two-photon fluorescence imaging microscopy was further used for real-time in vivo two-photon fluorescence imaging with TPEBT nanoparticles (NPs), owing to its best two-photon behavior. In vivo imaging of the blood vasculature of a mouse ear was conducted using the NPs as the blood vessel visualizing agent. The TPEBT NPs were excited at 780 and 980 nm and emitted signals that were collected at 542 ± 27 nm. Fig. 10 and S8† show the 3D reconstructions and the distinct blood vasculature at different depths.
Aer injection of the NPs, it is obvious that the blood vascular network including the major blood vasculature, small capillaries, and even arteries located deeply over 80 mm could be clearly observed in blood vessels with red uorescence. The high-resolution, 3D reconstructed image further illustrates the applicability of the TPEBT NPs for TPEF in vivo imaging. This indicates the excellent stability and biocompatibility of these molecules in a living biological system. Conclusions In summary, a series of donor-acceptor (D-A) p-conjugated red uorescent materials (TPABT, DTPABT, TPEBT and DTPEBT) with the same branched core and different spatial symmetries and strengths of electron-donating terminal moieties (TPE or TPA) was synthesized and characterized. The thin lm of TPEBT with an asymmetric structure and TPE as a terminal group exhibited the highest uorescence quantum efficiency of 38.2% with the highest a AIE . Moreover, our molecule design strategy means that the two compounds with TPE as a terminal group have very large two-photon absorption cross sections in comparison with the two red emission materials with triphenylamine groups. What's more, our investigation demonstrates that these molecules act as one-photon and two-photon uorescent probes and can be successfully applied for the uorescence imaging of MCF-7 breast cancer cells and Hela cells, and in the corresponding cytotoxicity experiments of live cells imaging. Notably, when TPABT was used as one-photon and two-photon uorescent probes, intense intracellular red uorescence was observed in the cell cytoplasm. Red emissive biocompatible TPEBT was applied in blood vascular imaging of a mouse ear, demonstrating its great potential as a two-photon excited contrast agent in biological systems. It is indicated that such asymmetric and TPA terminal group materials can be used as a specic stain uorescent probe for live cell imaging. This study provides fundamental structure design guidelines for further novel red AIE molecules with large 2PA crosssections as promising candidates for live-cell imaging in clinical trials. General procedures All air and water sensitive reactions were performed under a nitrogen atmosphere. Tetrahydrofuran and toluene were dried over Na/benzophenone ketyl and were freshly distilled prior to use. The other materials were of the common commercial level and were used as received. Thin layer chromatography (TLC) was conducted on exible sheets precoated with SiO 2 and the separated products were visualized by UV light. Column chromatography was conducted using SiO 2 (300 mesh) from Fisher Scientic. 1 H and 13 C NMR spectra were recorded on a Bruker ARX-400 (400 MHz) or ARX-500 (500 MHz) spectrometer, using CDCl 3 . All chemical shis were reported in parts per million (ppm). The 1 H NMR chemical shis were referenced to TMS (0 ppm), and the 13 C NMR chemical shis were referenced to CDCl 3 (77.23 ppm). HR-ESI-MS data were recorded on a Bruker APEX IV mass spectrometer. Thermal gravimetric analysis (TGA) was carried out on a TA Instrument Q500 analyzer. Absorption spectra were recorded on a PerkinElmer Lambda 750 UV-vis spectrometer. Photoluminescence was recorded on a Perkin-Elmer LS 55 spectrouorometer and a HORIBA JobinYvon Nanolog FL3-2iHR spectrometer. Cell culture MCF-7 (human breast cancer cells) cells were obtained from the Institute of Basic Medical Sciences (IBMS) of the Chinese Academy of Medical Sciences (CAMS). 
All cell lines were maintained under standard culture conditions (atmosphere of 5% CO2 and 95% air at 37 °C) in RPMI 1640 medium, supplemented with 10% FBS (fetal calf serum).

Cell fluorescence imaging

MCF-7 cells were grown in the exponential phase of growth on 35 mm glass-bottom culture dishes (Φ 20 mm) for 1-2 days to reach 70-90% confluency. These cells were used in the co-localization experiments. The cells were washed three times with RPMI 1640, and then incubated with 1 mL RPMI 1640 containing the red AIE molecules (20 μM) in an atmosphere of 5% CO2 and 95% air for 3 h at 37 °C. The cells were washed three times with 1 mL PBS at room temperature, and then 1 mL PBS was added to the culture medium for observation under a confocal microscope (Olympus FV1000). Channel 1: excitation: 405 nm, emission collected: 560-660 nm.

Cytotoxicity

The metabolic activities of the MCF-7 breast cancer cells and HeLa cells were evaluated using methylthiazolyldiphenyl-tetrazolium (MTT) assays. The cells were seeded in 96-well plates (Costar, IL, USA) at a density of 4 × 10^4 cells mL^-1. After 24 h incubation, the medium was replaced by the TPABT, DTPABT, TPEBT and DTPEBT suspensions at different concentrations in DMEM containing 10% FBS and 1% penicillin-streptomycin, and the cells were then incubated for 12, 24 and 36 h. After the designated time intervals, the wells were washed three times with 1× PBS buffer, and 100 μL of freshly prepared MTT (0.5 mg mL^-1) solution in culture medium was added into each well. The MTT medium solution was carefully removed after 3 h incubation in the incubator. Dimethyl sulfoxide (DMSO, 150 μL) was then added into each well and the plate was gently shaken for 10 min at room temperature to dissolve all the precipitates formed. The absorbance of MTT at 490 nm was monitored with a microplate reader (Genios Tecan).

Fabrication of TPEBT NPs

The TPEBT-loaded DSPE-PEG2000 NPs were prepared through a modified nanoprecipitation method. Briefly, 1 mL of THF solution containing 1 mg of TPEBT and 2 mg of DSPE-PEG2000 was poured into 9 mL of water. This was followed by sonicating the mixture for 60 s at 10 W output using a microtip probe sonicator (XL2000, Misonix Incorporated, NY). The mixture was then stirred at room temperature overnight to evaporate the THF. The obtained solution was filtered using a 0.20 μm syringe-driven filter to collect the product.

Brain blood vascular imaging

The experimental setup for brain imaging is described elsewhere.24 A small 2 mm circular piece of parietal bone was excised using a dental drill, exposing the meninges and the brain of the immobilized mouse. For the TPEF experiments, the mice were anesthetized (150 mg kg^-1 ketamine and 10 mg kg^-1 xylazine) and placed on a heating pad to maintain a core body temperature of 37 °C throughout each imaging procedure. 200 μL of TPEBT NPs at 50 × 10^-6 M TPEBT was administered via retro-orbital injection prior to imaging. All procedures were performed under the institution's IACUC (Institutional Animal Care and Use Committee) guidelines. A TriM Scope II single-beam two-photon microscope (LaVision BioTec) with a tunable 680-1080 nm laser (Coherent) was used to acquire the images. The TPEBT NPs and second harmonic generation were excited at 780 and 980 nm, and the emitted light was split by 520 and 640 nm long-pass mirrors and detected through 542/27 nm filters.
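For reference, a hedged sketch of how percent viability is typically derived from the MTT absorbances described above is given below; normalizing the 490 nm absorbance of treated wells to untreated controls (and, optionally, cell-free blanks) is standard practice and is an assumption here rather than a detail stated in the protocol. All absorbance values shown are hypothetical.

```python
import numpy as np

def mtt_viability(a_treated, a_control, a_blank=0.0):
    """Percent viability from 490 nm MTT absorbances of treated wells, untreated
    control wells and (optionally) a cell-free blank reading."""
    a_treated = np.asarray(a_treated, dtype=float)
    return 100.0 * (a_treated - a_blank) / (np.mean(a_control) - a_blank)

# Hypothetical triplicate readings for one compound concentration.
print(mtt_viability([0.82, 0.85, 0.80], a_control=[0.90, 0.93, 0.91], a_blank=0.05))
```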
Search for boosted diphoton resonances in the 10 to 70 GeV mass range using 138 fb$^{-1}$ of 13 TeV $pp$ collisions with the ATLAS detector

A search for diphoton resonances in the mass range between 10 and 70 GeV with the ATLAS experiment at the Large Hadron Collider (LHC) is presented. The analysis is based on $pp$ collision data corresponding to an integrated luminosity of 138 fb$^{-1}$ at a centre-of-mass energy of 13 TeV recorded from 2015 to 2018. Previous searches for diphoton resonances at the LHC have explored masses down to 65 GeV, finding no evidence of new particles. This search exploits the particular kinematics of events with pairs of closely spaced photons reconstructed in the detector, allowing examination of invariant masses down to 10 GeV. The presented strategy covers a region previously unexplored at hadron colliders because of the experimental challenges of recording low-energy photons and estimating the backgrounds. No significant excess is observed and the reported limits provide the strongest bound on promptly decaying axion-like particles coupling to gluons and photons for masses between 10 and 70 GeV.

Introduction

1 ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upwards. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the z-axis. The pseudorapidity is defined in terms of the polar angle θ as η = −ln tan(θ/2). Angular distance is measured in units of ΔR ≡ √((Δη)² + (Δφ)²).

Signal event samples were generated for a hypothetical resonance produced in gluon-gluon fusion in association with up to two additional jets, for resonance masses between 10 and 80 GeV. The decay width Γ was set to 4 MeV, negligible compared to the experimental resolution, to describe a hypothetical resonance in the narrow-width approximation (NWA). The samples were generated using the effective-field-theory approach [36] implemented in MadGraph5_aMC@NLO [37] with the NNPDF2.3lo PDF set [38], and using the A14 set [39] of tuned parameters and Pythia 8.240 [40] to simulate parton showering, hadronization and the decay of the resonance into a pair of photons. Background events with two prompt photons and associated jets were simulated using the Sherpa 2.2.4 [41, 42] event generator. Matrix elements were calculated in perturbative QCD (pQCD) at next-to-leading order (NLO) for up to one additional parton, and at LO for two or three partons, and merged with the Sherpa parton-shower simulation using the MEPS@NLO prescription [43-46]. The NNPDF3.0nnlo PDF set was used in conjunction with a dedicated parton-shower tune in the Sherpa generator. Interference effects between the resonant signal and all background processes are expected to be small for narrow-width signals and are neglected in this analysis. The effects of multiple interactions in the same bunch crossing as the hard scatter and in neighbouring ones (defined as pile-up) are included using simulated events generated with Pythia 8. Simulated events were weighted to reproduce the distribution of the average number of interactions per bunch crossing observed in data. All simulated signal events were processed using a full simulation of the ATLAS detector [47] based on Geant4 [48].
The background events were processed using a fast simulation of the ATLAS detector [49], where the full simulation of the calorimeter is replaced with a parameterization of the calorimeter response. All simulated events were reconstructed with the same reconstruction algorithms as those used for data.

Object and event selection

Photon candidates are reconstructed from topological clusters of energy deposited in the EM calorimeter, as well as from charged-particle tracks and conversion vertices reconstructed in the inner detector, and they are calibrated as described in Ref. [30]. The event selection requires at least two photon candidates with transverse energies larger than 22 GeV and |η| < 2.37, excluding the barrel-to-endcap transition regions of the calorimeter, 1.37 < |η| < 1.52. The transverse energy requirement is chosen to mitigate the effect of the trigger efficiency turn-on from the trigger thresholds discussed in Section 3. The properties of the EM clusters associated with the two highest-E_T photons and additional information from the tracking systems are used to identify the diphoton production vertex [50], which is used to correct the photon direction, resulting in improved resolution. To reduce the background from jets, photon candidates are required to satisfy tight identification criteria based on the shape of EM showers in the LAr calorimeter and energy leakage into the hadronic calorimeter [30]. Events with one or both photon candidates passing a looser identification are kept for background estimations. The tight identification is optimized in ranges of photon E_T and |η|, and has an identification efficiency that increases with E_T from 70% at 22 GeV to 90% above 50 GeV. To further improve the rejection of jets misidentified as photons, the candidates are required to be isolated using information from both the calorimeter and tracking subsystems. The calorimeter isolation transverse energy E_T^iso,calo is required to be smaller than 0.065 E_T, where E_T^iso,calo is defined as the sum of the transverse energies of positive-energy topological clusters [51] within a cone of size ΔR = 0.2 around the photon candidate, excluding the photon transverse energy E_T and correcting for pile-up and underlying-event contributions [52-54]. The track isolation transverse energy E_T^iso,trk is required to be less than 0.05 E_T, where E_T^iso,trk is defined as the scalar sum of the transverse momenta of tracks with p_T > 1 GeV in a ΔR = 0.2 cone around the photon candidate, and which satisfy some loose track-quality criteria, are not associated with a photon conversion, and originate from the diphoton production vertex. The combined isolation efficiency for pairs of photons fulfilling the identification requirement in simulated signal samples increases with m_γγ from 80% at 10 GeV to 90% at 90 GeV. The diphoton invariant mass m_γγ is computed using the transverse energies of the leading and subleading photon candidates and their angular separation in both azimuth φ and pseudorapidity η, determined from their positions in the calorimeter and the diphoton production vertex. An additional kinematic selection is placed on the transverse momentum of the diphoton system, p_T^γγ, requiring events to have a diphoton pair with p_T^γγ > 50 GeV. This requirement is motivated by the fact that the analysis targets diphoton pairs with low masses, down to about half the trigger energy thresholds, and such pairs are typically highly boosted with respect to the ATLAS detector rest frame.
The p_T^γγ requirement is chosen in order to reach the best compromise between the statistical uncertainty in the lowest part of the spectrum and sculpting effects on the background shape from the trigger efficiency turn-on, the modelling of which would result in large systematic uncertainties. In total, 1 166 636 data events with m_γγ < 80 GeV are selected. Following the detector-level selection, the measurement of the signal production cross-section is performed in a fiducial volume defined from the simulated samples by requiring two photons at particle level with E_T > 22 GeV, |η| < 2.37 and p_T^γγ > 50 GeV. The particle isolation, defined as the scalar sum of the E_T of all the stable particles (except muons and neutrinos) found within a ΔR = 0.2 cone around the photon direction, is required to be less than 0.05 E_T. This isolation requirement is chosen to reproduce the detector-level selection.

Signal modelling

The shape of a possible signal in the diphoton invariant mass distribution is modelled by a double-sided Crystal Ball (DSCB) function, composed of a Gaussian core with power-law tails [1, 55], whose parameter values evolve linearly with respect to the resonance mass. The parameters of the DSCB function are extracted from fits to the MadGraph simulated signal samples. The width of the Gaussian core is entirely determined by the resolution of the detector and ranges from 0.2 to 1.2 GeV, as shown in Figure 1(a). Good agreement between the signal parameterization and the simulated signal samples is found, with differences below 1% of the fitted signal yield. An example of the simulated resonance overlaid with the signal model parameterization is shown in Figure 1.

Background estimates

The dominant background components consist of continuum γγ production, and of photon-jet (γj and jγ) and jet pair (jj) events where one or more jets are misidentified as photons. Other backgrounds arising from electrons faking photons in Z boson decays are found to be negligible in the mass range of this search and are not considered. The analysis makes use of a data-driven background estimate in which the continuum background shape is parameterized by an analytic function. The chosen analytic functional form is described in Section 6.1. The uncertainty arising from the choice of background model is based on signal-plus-background fits to background-only template histograms with a binning of 100 MeV, following the methodology described in Ref. [56] and further described in Section 6.1. The background modelling uncertainty is found to be dominated by the limited size of available simulated event samples. In order to reduce the impact of the background modelling uncertainty on the analysis, the background templates are smoothed using a Gaussian Processes fit. This technique is described further in Section 6.2, and its impact on the analysis is detailed in Section 6.3.

Background template modelling

The background-only template has two components. The γγ component is built from the simulated samples described in Section 3, and the γj and jj components are built from control samples obtained from data, in which one or both photons must fail the tight identification requirements while passing a looser set of identification cuts. The two components are combined according to their relative fractions. The relative contribution of each of these processes is shown in Figure 2 and is estimated using the two-dimensional sideband method described in Ref. [57].
The purity of the diphoton sample, defined as the fraction of γγ events, increases with m_γγ from 50% at 10 GeV to 70% at 80 GeV, with an overall uncertainty of 3% dominated by its statistical component arising from the limited size of the data sample collected with the prescaled triggers described in Section 4. No significant difference in the diphoton purity is observed between the various LHC data-taking periods of Run 2. The goal of this analysis is to reach the lowest invariant mass possible, including the 'turn-on' region for masses below 20 GeV. The resulting background shape needs to be described by a more complex analytic form than in previous diphoton resonance searches [2, 26] and is constructed as the combination of two pieces: one capable of describing the turn-on shape and a second used to describe the smoothly decreasing part. The two-part analytic function described below was found to adequately model the background shape across the full invariant mass range of the search. The turn-on region (TO) is described by a function h_TO with two parameters: one corresponds to the value of the function at m_γγ = 0 and the other is the length scale of the turn-on. The smoothly falling region beyond 30 GeV is described by a power-law function multiplied by an 'activation' function to increase its flexibility in the high mass region (above 50 GeV). For this activation term, an exponential function times a 'Fermi-Dirac-like' function is chosen; the resulting high-mass function h_High has three parameters describing the power-law term and four parameters (two 'tail' and two 'thresh' parameters) describing the activation function. The power-law part is qualitatively described by its endpoints, its value at m_γγ = 0 and its value at the endpoint mass m_1, and a fixed value of m_1 = 115 GeV can be set with no impact on the flexibility of the complete model. The activation function only plays a role above the threshold mass, below which its value is practically 1. The complete functional form is obtained by adding the two components and has ten parameters in total: one parameter describes the relative contribution of the turn-on component, and the others are the sets of parameters belonging to h_High and h_TO. To reduce the potentially large correlations between the ten parameters, a subset of them are fixed to ensure the convergence of the fit. The choice of free parameters is based on the results of stability tests using pseudo-datasets generated from the best fit to the background template. The chosen configuration has the largest number of floating parameters, with only three fixed parameters (m_1 and the two 'thresh' parameters) while the remaining seven parameters are free to vary. Variations of the nominal background template are built to validate the flexibility of the chosen functional form and subsequently to estimate background modelling systematic uncertainties in Section 6.3. They are constructed by i) modifying the fractions of the background components, referred to as variations of the γγ fraction; ii) varying the identification criteria used to define the γj and jj control regions, referred to as variations of the control region; and iii) altering the templates by varying the p_T^γγ cut by 10%, referred to as p_T^γγ variations. These variations change the steepness of the turn-on by up to 20%, and the slope in the high mass region by ±5%, with respect to the nominal background template.
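The exact analytic form of Eq. (1) is given in the paper and is only described qualitatively above, so the following Python sketch is purely schematic: it illustrates the two-component structure (a turn-on term plus a power law damped by a Fermi-Dirac-like activation factor) with invented parameter names and placeholder values, not the analysis function itself.

```python
import numpy as np

# Schematic stand-in for the two-component background shape described above.
# All parameter names and numerical values are hypothetical, for illustration only.
def turn_on(m, n0, ell_to):
    # rises from ~0 and saturates with a characteristic length scale ell_to
    return n0 * (1.0 - np.exp(-m / ell_to))

def falling(m, a, k, m_thresh, delta):
    # power law damped at high mass by a Fermi-Dirac-like activation factor
    power_law = a * np.power(m, -k)
    activation = 1.0 / (1.0 + np.exp((m - m_thresh) / delta))
    return power_law * activation

def background_model(m, f_to, n0, ell_to, a, k, m_thresh, delta):
    return f_to * turn_on(m, n0, ell_to) + (1.0 - f_to) * falling(m, a, k, m_thresh, delta)

m = np.linspace(9.0, 77.0, 200)   # diphoton invariant mass in GeV
y = background_model(m, f_to=0.3, n0=1.0, ell_to=4.0, a=50.0, k=1.5, m_thresh=55.0, delta=5.0)
```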
Gaussian Processes to mitigate statistical fluctuations in background templates

The bias arising from the choice of background model is evaluated from signal-plus-background fits to background-only templates: any fitted signal yield N_SS is referred to as a 'spurious signal' and it is considered as a systematic uncertainty in the modelling of the background shape. The functional form of Eq. (1) provides an acceptable maximal spurious signal that is below 30% of the statistical uncertainty in the mass range between 10 and 75 GeV, and therefore it is chosen as the background model. The estimation of this systematic uncertainty requires the shape of the background template to be as close as possible to the shape of the data distribution, but with an event count large enough for the statistical fluctuations of the template to be negligible. When evaluating the uncertainty, if the background model perfectly describes the representative background sample, then the number of signal events fitted by the signal-plus-background model will be zero. However, the representative background-only sample for this analysis is constructed using a limited number of simulated diphoton events, and the presence of statistical fluctuations in the sample introduces large statistical fluctuations in the number of fitted signal events, regardless of the quality of the background model. This issue was addressed previously [26] by using simulated datasets with much larger event counts than in data, which leads to smaller statistical fluctuations in the background shape but is computationally expensive. In order to meet the aforementioned requirements, an alternative approach of easing statistical effects on the background modelling uncertainty by using Gaussian Processes is followed instead. A Gaussian Process (GP) is a flexible Bayesian machine-learning technique which may be used to obtain a non-parametric fit to an input dataset [58]. The analysis uses the scikit-learn GP implementation [59] to fit a GP to the representative background sample histogram; the posterior mean of the GP fit is used as a smoothed background template. The combined signal and background model fit is then performed on the smoothed template instead of the original representative background sample. The degree of smoothing applied is controlled through the choice of kernel and its hyperparameters. A radial basis function (RBF) kernel [58] with an additional constant noise component is utilized here. The RBF kernel includes a length-scale hyperparameter that encodes the correlation between the event counts for different bins in invariant mass. The contents of bins which are less than the length scale apart in invariant mass are highly correlated, while the contents of those which are much further apart than the length scale are essentially uncorrelated. Physically, the length scale encodes a minimum feature size expected in the background shape, which in this analysis corresponds to the 1-2 GeV width of the trigger efficiency turn-on region and thus is also greater than the 100 MeV bin width of the original background histogram. The kernel hyperparameter values are determined in the fit to the representative background sample. Notably, because GPs are non-parametric in nature, the GP smoothing technique is not expected to significantly bias the shape of the resulting smoothed template towards a specific choice of analytic background model.
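A hedged illustration of this smoothing step is sketched below using the scikit-learn Gaussian Process implementation with an RBF kernel plus a white-noise term; the 100 MeV binning matches the templates mentioned in the text, but the template contents, the multiplicative amplitude kernel and the hyperparameter bounds are placeholder assumptions rather than the values used in the analysis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

# Placeholder background template: 100 MeV bins over an extended 7-80 GeV range,
# filled here with a toy falling spectrum instead of the real simulated template.
bin_centres = np.arange(7.05, 80.0, 0.1)
counts = np.random.poisson(1000.0 * np.exp(-bin_centres / 20.0)).astype(float)

# RBF kernel (length scale in GeV) with an additive white-noise component;
# the amplitude term and all bounds are illustrative choices.
kernel = (ConstantKernel(1.0, (1e-3, 1e6))
          * RBF(length_scale=2.0, length_scale_bounds=(0.5, 20.0))
          + WhiteKernel(noise_level=1.0, noise_level_bounds=(1e-3, 1e6)))

gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(bin_centres.reshape(-1, 1), counts)

# The posterior mean per bin serves as the smoothed background template.
smoothed_template = gp.predict(bin_centres.reshape(-1, 1))
```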
GPs may introduce mis-modelling at the edges of the diphoton invariant mass distribution, since edge points are only constrained by their correlation with other data points on one side. To mitigate these edge effects, the GP fit is performed using an extended invariant mass range of 7-80 GeV, while the combined signal and background model fit is performed using the nominal analysis invariant mass range of 9-77 GeV. Figure 3 shows an example of a pseudo-dataset generated from the nominal background modelling function, as well as the GP-smoothed pseudo-dataset. The smoothed pseudo-dataset is observed to reproduce the nominal background modelling function shape with relatively high accuracy, and without the bin-scale statistical fluctuations of the original pseudo-dataset. The smoothed pseudo-dataset shows some oscillatory behaviour beyond the turn-on region; these features are an artefact of the steep slope in the turn-on region pulling the fitted length scale to a smaller value than would otherwise be needed to model the remaining mass range. Similarly, some mis-modelling of the smoothed pseudo-dataset is observed in the turn-on region because of the length scale being pulled to a larger value by the higher mass region. The magnitude of the fluctuations in the smoothed pseudo-dataset is significantly smaller than that of statistical fluctuations in the unsmoothed pseudo-dataset. Impact of smoothing on the background modelling uncertainty In order to verify that the GP smoothing technique does not introduce any significant bias into the background histogram shape, its effect is checked on an ensemble of pseudo-datasets generated from a known background shape. Ensembles are generated using both the nominal background modelling function (provided in Eq. (1)) and a set of analytic forms capable of describing the turn-on feature in the background shape. This set is composed of functional forms built similarly to the nominal functional form in which either the turn-on component or the smoothly falling component is replaced by other analytic forms, such as different Fermi-Dirac-like functions for the turn-on or sums of exponential functions for the smoothly falling component. The parameters of the analytic forms are determined by fitting them to the original simulated background sample, and each histogram in the ensemble is generated with the same effective event count as in the original simulated background sample. The background modelling uncertainty is then evaluated for each pseudo-dataset, with and without smoothing. The aforementioned functional forms are used to probe for potential smoothing bias in both the cases where the analytic background model did or did not properly describe the pseudo-dataset. The bias that arises from the GP smoothing technique is defined as the difference between the observed spurious signals in the smoothed template and the unsmoothed template. The uncertainty associated with the GP smoothing technique is observed to be roughly 20% of the background modelling uncertainty for masses below 20 GeV, stabilizing at 5% for larger masses. This bias is added in quadrature to the background modelling uncertainty. The final background modelling uncertainty is computed as the envelope of the maximal fitted signal yields over all the background template variations defined previously in this section after smoothing. 
Figure 4(a) shows the number of spurious-signal events N_SS, taken as the background modelling uncertainty, relative to its statistical uncertainty for the unsmoothed and smoothed templates. Applying the GP smoothing procedure to the background template leads to a reduction of at least 50% in this background modelling uncertainty relative to the unsmoothed case. The uncertainty arising from the GP smoothing technique is found to be small compared to the decrease in background modelling uncertainty due to the reduction of statistical noise. The magnitude of the smoothing uncertainty, as well as the remaining background modelling uncertainty, is presented as a function of the diphoton invariant mass in Figure 4(b). Statistical analysis The data are interpreted by following the statistical procedure described in Ref. [60]. A binned likelihood function is built from the observed diphoton invariant mass distribution and the analytic functions discussed in Sections 5 and 6, describing the signal and background components in the 9 to 77 GeV mass range. The search is performed in the 10 to 70 GeV mass range to avoid edge effects, based on the different diphoton invariant mass resolutions at these values, as illustrated in Figure 1(a). The parameter of interest to be extracted from the likelihood fit is the fiducial production cross-section times branching ratio σ_fid · B(X → γγ). Since the measurement is performed in a fiducial volume (defined in Section 4) to allow easier reinterpretation of the results, the fiducial cross-section includes a correction factor to account for the signal detection efficiency: σ_fid · B(X → γγ) = N_S / (C · L), with C = N_det^MC / N_fid^MC, where N_S is the number of signal events fitted in data, L is the integrated luminosity, N_det^MC is the number of reconstructed and selected signal events in the simulation and N_fid^MC is the number of simulated signal events present within the fiducial volume. The C values are computed from the simulated signal samples described in Section 3 and range from 0.2 to 0.5 as a function of the resonance mass. The theoretical uncertainties affecting the measurement of σ_fid · B(X → γγ) arise from variations of the renormalization and factorization scales affecting the signal efficiencies evaluated in simulated samples. The experimental uncertainties directly impacting the signal yield include those involved in the luminosity determination, the modelling of pile-up interactions in simulation, the trigger efficiency, and photon identification and isolation. An additional systematic uncertainty in the trigger is included to account for the capability of the trigger system to identify two closely spaced electromagnetic showers. Events containing a Z → eeγ decay, recorded only with electron triggers, in which the photon is close to one of the two electrons, are used to evaluate the photon trigger efficiency in data and simulated radiative Z samples [35]. The observed difference is added in quadrature to the nominal trigger systematic uncertainty. Uncertainties in the signal shape parameterization from the modelling and the determination of the photon energy resolution and scale are also accounted for, with mild impact on the signal yield. The systematic uncertainties are implemented in the likelihood function as nuisance parameters constrained by Gaussian penalty terms, except for the background modelling systematic uncertainty, which is implemented as an additional signal component. All sources of systematic uncertainties are summarized in Table 1. 
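The cross-section extraction described above reduces to simple arithmetic once the fitted yield, luminosity and correction factor are known; the sketch below spells it out with placeholder numbers rather than analysis values.

```python
# Hedged sketch of the fiducial cross-section extraction:
# sigma_fid * B = N_S / (C * L), with C = N_det_MC / N_fid_MC.
def correction_factor(n_det_mc: float, n_fid_mc: float) -> float:
    """C: reconstructed-and-selected signal events / signal events in the fiducial volume."""
    return n_det_mc / n_fid_mc

def fiducial_xsec_times_br(n_signal_fitted: float, lumi_fb: float, c_factor: float) -> float:
    """sigma_fid * B(X -> gamma gamma) in fb, for an integrated luminosity in fb^-1."""
    return n_signal_fitted / (c_factor * lumi_fb)

# Placeholder numbers for illustration (C lies between 0.2 and 0.5, as quoted in the text)
C = correction_factor(n_det_mc=3_000, n_fid_mc=10_000)                       # -> 0.3
print(fiducial_xsec_times_br(n_signal_fitted=400, lumi_fb=138.0, c_factor=C))  # ~9.7 fb
```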
The compatibility of the observed data and the background-only hypothesis for a given signal hypothesis is tested by estimating a local p-value based on a profile-likelihood-ratio test statistic, detailed in Ref. [60]. The global significance of a given event excess is computed using background-only generated pseudo-datasets to account for the look-elsewhere effect [61]. In the absence of a signal, the expected and observed 95% confidence level (CL) exclusion limits on the cross-section times branching ratio are evaluated using the modified frequentist approach CLs [62,63] with the asymptotic approximation to the test-statistic distribution. Results The diphoton invariant mass distribution of events passing the analysis selection is shown in Figure 5, along with the background-only fit performed in the 9 to 77 GeV mass range. The result of the p-value scan as a function of the hypothesized resonance mass is shown in Figure 6. The most significant deviation from the background-only hypothesis is observed for a mass of 19.4 GeV, corresponding to a local significance of 3.1σ; the global significance of such an excess is 1.5σ, computed with background-only pseudo-datasets. Phenomenological interpretation In this section the observed limits on the fiducial cross-section for a hypothetical resonance are recast in the parameter space of an ALP a. The KSVZ-ALP model, inspired by the simplest QCD-axion model [64-66], is chosen as a benchmark because it allows for couplings between the ALP field and gauge bosons, including a non-zero coupling to gluons and photons. It is described by an effective Lagrangian (Eq. (2)) in which a and m_a are the ALP field and its mass, and f_a is the decay constant that governs its coupling with the SM fields. The QCD and EW field strengths enter this Lagrangian with a common normalization, and the coupling constants g_3, g_2 and g_1 = sqrt(5/3) g' (where g' stands for the weak SM hypercharge coupling constant) set the strength of the strong and EW interactions in the SM. The coefficients c_i encode the anomalies of the global symmetry non-linearly realized by the ALP with the SM gauge group. These anomalies are generated by integrating out heavy fermions which are charged under the SM gauge group at the scale f_a. This Lagrangian is equivalent to the one used to generate the simulated signal samples described in Section 3. The ALP under consideration, being the pNGB of an approximate global symmetry, remains naturally light well below the scale of new physics. Considering m_a, the mass of the ALP, to be much smaller than f_a, the relevant two-body decays of the ALP are to photons and to jets, with widths which can be found, e.g., in Ref. [67]. In the 10 to 70 GeV mass range and for the choice of anomalies c_1 = c_2 = c_3 = 10, the branching ratio B(a → γγ) varies from 0.6·10⁻³ to 1.6·10⁻³. Choosing to set the magnitude of the c_i parameters to be the same is motivated by gauge coupling unification in a Grand Unified Theory scenario; while the specific value of 10 is arbitrary, the rescaling of the results to a different anomaly parameter choice would be trivial. Figure 8: The observed and expected lower bounds on the ALP decay constant derived from this analysis are shown as black solid and dashed lines, respectively. BABAR bounds derived in Ref. [72] are shown in purple; in green, the LHC bounds on boosted dijet resonances [73]; and in blue, the LHC searches for diphoton resonances taken from Ref. [67]. The red bounds are derived from Tevatron [74] and LHC [57,75,76] diphoton cross-section measurements, following the method described in Ref. [67]. 
Weaker constraints covering lower invariant masses are obtained from LHCb diphoton measurements [77] and from LEP searches [78], shown in cyan and yellow respectively. On the right, the y-axis shows the ALP-photon coupling g_aγγ, proportional to α_em c_γγ / f_a with c_γγ = c_2 + (5/3) c_1, a standard QCD-axion notation. The ALP Lagrangian in Eq. (2) is implemented in FeynRules [68], and the production cross-section at the LHC for the process pp → a + g is computed at leading order with MadGraph [69], where the gluon is explicitly required to boost the ALP. A constant K-factor K = 2 is applied to this cross-section to account for NLO corrections, which were computed for a similar signal topology in Ref. [70]. ALPs that couple to gluons decay promptly over the entire mass range of interest for this study (recent studies of displaced ALP decays can be found in Refs. [70,71]). Because of the large hierarchy between f_a and m_a, and the loop suppression of the coupling, the ALP total width is dominated by its coupling to gluons and is always small compared to its mass. As a consequence, the narrow-width approximation always applies and finite-width effects can be safely neglected. The recasting is done by comparing the theoretical signal yield obtained from the ALP model of Eq. (2), after applying the particle-level selection described in Section 4, with the bounds on the fiducial cross-section in Figure 7. The signal cross-section times branching ratio can be written as 1/f_a² times a weakly varying function of the ALP mass. The upper limit on the cross-section then results in a lower limit on f_a, which is shown in Figure 8 for a specific choice of the c_i coefficients. Figure 8 shows how the sensitivity of the search presented here covers a large portion of the unexplored ALP parameter space where the heavy colour states generating the ALP coupling to gauge bosons are in the multi-TeV range and therefore inaccessible at the LHC. Any production mechanism other than gluon-gluon fusion suffers from a smaller production cross-section, and the decoupling of the heavy states inducing the ALP coupling to SM states would require further study. Constraints from Υ decays [79], constraints from Z-boson width measurements [80], and ALP production in light-by-light scattering in heavy-ion collisions [81,82] are too weak to appear in the plot. Conclusion A search for new narrow-width boosted resonances is performed in the diphoton invariant mass spectrum ranging from 10 to 70 GeV, using 138 fb⁻¹ of proton-proton collision data collected at a centre-of-mass energy of 13 TeV with the ATLAS detector at the Large Hadron Collider. The data are consistent with the SM background expectation. Limits are set on the fiducial cross-section times branching ratio in a fiducial region defined to mimic the detector-level selection. The observed limits on σ_fid · B(X → γγ) range from 4 to 17 fb, with variations mainly due to statistical fluctuations of the data. The dominant uncertainties arise from the limited number of collisions collected and the background modelling uncertainty. The impact of the latter is reduced by smoothing the simulated background-only sample with Gaussian Processes in order to reduce the statistical fluctuations in the sample. Furthermore, the observed limits are recast in the parameter space of an axion-like particle, covering a longstanding gap in diphoton resonance searches. The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN, the ATLAS Tier-1 facilities at TRIUMF (Canada)
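Because the signal cross-section times branching ratio scales as 1/f_a² for fixed anomaly coefficients, the recasting described above reduces to a rescaling of a reference theory prediction; the sketch below illustrates that arithmetic with placeholder reference values.

```python
# Hedged sketch of the ALP recast: for fixed c_i, sigma*B(m_a) ~ 1/f_a^2,
# so an upper limit on the fiducial cross-section maps onto a lower limit on f_a.
import math

def fa_lower_limit(sigma_limit_fb: float, sigma_theory_fb: float, fa_ref_tev: float) -> float:
    """Lower limit on f_a [TeV], from the observed cross-section limit and the theory
    prediction evaluated at a reference decay constant fa_ref_tev (placeholder inputs)."""
    return fa_ref_tev * math.sqrt(sigma_theory_fb / sigma_limit_fb)

# Illustrative numbers only: theory yield of 400 fb at f_a = 1 TeV, observed limit of 10 fb
print(fa_lower_limit(sigma_limit_fb=10.0, sigma_theory_fb=400.0, fa_ref_tev=1.0))  # ~6.3 TeV
```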
6,932.8
2022-11-08T00:00:00.000
[ "Physics" ]
A Simplified Hard-Switching Loss Model for Fast-Switching Three-Level T-Type SiC Bridge-Legs : Hard-switching losses in three-level T-type (3LTT) bridge-legs cannot be directly estimated from datasheet energy loss curves, which are given for symmetric two-level half-bridge configurations only. The commutations in a 3LTT bridge-leg occur between semiconductors with different blocking voltages and/or current ratings, and involve a third semiconductor device in the switching transition, which contributes additional capacitive losses. This paper, therefore, describes a simplified approach to estimate a lower bound for the hard-switching losses of 3LTT bridge-legs (note that the approach is applicable to other three-level topologies as well). In view of the very fast switching speeds of wide-bandgap semiconductors, the model neglects voltage/current overlap losses and considers only the dominating charge-related loss contributions (semiconductor output capacitances, body diode reverse-recovery charge), thus requiring minimal information from datasheets. A direct experimental verification with an 800 V DC-link 3LTT bridge-leg (1200 V and 650 V SiC MOSFETs) operating with output currents up to 25 A confirms the good accuracy of the simplified switching-loss model. Introduction Three-level converter topologies, especially in combination with wide-bandgap (WBG) power semiconductors such as SiC MOSFETs, are enabling ever more compact and more efficient power electronic converter systems [1-4], and are therefore of key importance to next-generation PFC rectifiers for battery charging, datacenter power supply modules, and inverter systems for variable-speed drives used in industry automation and electrified transport. In particular, the three-level T-type (3LTT) converter (cf., Figure 1), originally proposed in the 1970s [5], achieves very promising performance for 800 V DC-link applications, especially if modern WBG power semiconductors are employed [3,6]. Essentially, a two-level bridge-leg (with 1200 V SiC MOSFETs) is extended by a four-quadrant switch that allows the connection of the AC output terminal to the DC-link midpoint, i.e., enables three output voltage levels. The four-quadrant midpoint switch can advantageously be realized with two 650 V SiC MOSFETs connected in anti-series. Compared to other three-level topologies, such as the neutral-point-clamped (NPC) converter [7,8], or its sibling with active switches instead of clamping diodes (active NPC, ANPC) [9], the 3LTT requires fewer power semiconductors and, especially, fewer gate-drive power supplies (if the midpoint switch employs a common-source configuration); i.e., the 3LTT shows a favorable trade-off between functionality and complexity [10]. To perform a first-step comparative evaluation of different power semiconductors for a given application, there is a need to quickly estimate switching losses. Whereas datasheets usually directly provide turn-on and turn-off energy losses as a function of voltage and current for symmetric two-level bridge-legs, the situation is more complicated for 3LTT bridge-legs [6,11]. First, any commutation between two semiconductor devices also changes the blocking voltage across a third device connected to the common switching node, causing additional capacitive losses. Second, the commutations occur between semiconductors with different blocking voltage and/or current ratings (i.e., for the example mentioned above, between a 1200 V SiC MOSFET and a 650 V SiC MOSFET used as the midpoint switch). 
For these reasons, it is not possible to use the switching loss data available in typical semiconductor datasheets directly for estimating the switching losses of a certain device combination in a 3LTT bridge-leg. Therefore, this paper provides a simplified approach to estimate the minimum hard-switching losses of SiC-MOSFET-based 3LTT bridge-legs by considering only charge-related losses (i.e., capacitive and reverse-recovery losses), which effectively dominate the hard-switching energy loss for fast-switching power semiconductors [12]. This paper consolidates existing contributions to semiconductor output capacitance charge/discharge loss modeling for three-level bridge-legs [6,11,13], as well as to the simplified estimation of diode reverse-recovery losses [6,11], into a compact, straightforward loss modeling approach. In contrast to the switching loss estimation method developed in [13], which requires double-pulse test loss measurement results, the loss model proposed herein can be parametrized with datasheet information on the devices' output capacitances, C oss , and reverse-recovery charge, Q rr , only. Additionally, the modeling approach outlined in [13], which targets GaN devices, does not account for reverse-recovery losses, i.e., it is not directly applicable to 3LTT bridge-legs with SiC MOSFETs. The switching-loss model in [13] has only been verified indirectly at the converter level by measuring the total converter losses of a 3LTT unidirectional rectifier adopting 650 V GaN HEMTs and 1200 V SiC Schottky diodes. In this paper, a dedicated experimental verification of the proposed loss model is performed on an 800 V DC-link 3LTT bridge-leg prototype (using 1200 V and 650 V SiC MOSFETs) through accurate calorimetric loss measurements. The experimental results confirm an almost perfect prediction of capacitive charge/discharge losses and show that the proposed model, including diode reverse-recovery, achieves a maximum hard-switching loss underestimation error of 18% (i.e., due to neglecting overlap losses [12]). It is worth highlighting that even though this work focuses on the 3LTT converter topology (for reasons of clarity and conciseness), the proposed hard-switching-loss model can be applied to arbitrary three-level bridge-legs (e.g., NPC, ANPC, etc.), as explained in [6]. The paper is organized as follows. First, Section 2 discusses the modeling of capacitive losses in a 3LTT bridge-leg, also clarifying the impact of the third device connected to the switching node but not actively involved in the commutation. In Section 3, a simplified model for the estimation of the bridge-leg losses in hard-switching operation is described, taking into account both capacitive and reverse-recovery loss contributions. Section 4 provides a direct experimental verification of the proposed models, using highly accurate calorimetric measurements of the semiconductor losses. Finally, Section 5 concludes the paper and gives an outlook on future developments, highlighting the importance of comprehensive reverse-recovery information in device datasheets. Figure 1. Three-level T-type (3LTT) bridge-leg switching transitions involving T 1 and T 2 . Four different events are identified, depending on the switching sequence T 1 ↔ T 2 and the direction of the bridge-leg output current I sw . (a) T 1 ← T 2 , I sw > 0 (hard-switching event), (b) T 1 → T 2 , I sw > 0 (soft-switching event), (c) T 1 → T 2 , I sw < 0 (hard-switching event), (d) T 1 ← T 2 , I sw < 0 (soft-switching event). 
Blue lines represent the charge/discharge current paths of the semiconductor output capacitances, whereas pink lines indicate the diode reverse-recovery current path. The gate signals of T 1 , T 2 , T 3 , and T 4 are qualitatively shown as s 1 , s 2 , s 3 , and s 4 , respectively, and the steady-state, dead time, and transition intervals are indicated. Three-Level T-Type Capacitive Loss Analysis This section provides a detailed analysis of the losses related to the charging/discharging of the semiconductor output capacitances in a 3LTT bridge-leg, as shown in Figure 1. In particular, this analysis focuses only on the upper half of the bridge-leg (i.e., T 1 ↔ T 2 switching transitions, cf., Figure 1), since all results can be extended directly to the other bridge-leg half for reasons of symmetry. According to Figure 1, four different switching events can occur, depending on the commutation sequence (i.e., T 1 → T 2 or T 2 → T 1 ) and the direction of the bridge-leg output current I sw . For each situation, the respective figure shows the steady state before the transition or the dead time interval, the transition interval (only for hard-switching events), and the steady state after the transition. For example, Figure 1a shows a transition from T 2 to T 1 (T 1 ← T 2 ) with I sw > 0. In the initial steady state (not shown), the switching node is connected to the DC-link midpoint via T 2 and T 3 . The dead time interval starts when T 2 turns off. During this interval, the switching node voltage does not change, as the load current flows in T 2 's body diode until T 1 turns on. This turn-on process dissipates the energy E oss,T1 stored in the output capacitance of T 1 , and causes the indicated current flows to charge the output capacitance of T 2 to V dc /2 and to charge T 4 's output capacitance from V dc /2 to V dc . The turn-on process of T 1 also initiates the reverse-recovery process of T 2 's body diode, which is discussed in Section 3. Finally, in the new steady state, the switching node is connected to the positive DC-link rail via T 1 . Since a certain amount of energy is always dissipated during the switching transition (e.g., E oss,T1 ), this is considered a hard-switching event. Similarly, Figure 1b shows the transition in the opposite direction, i.e., from T 1 to T 2 (T 1 → T 2 ) with I sw > 0. Once T 1 turns off, the load current charges/discharges the involved output capacitances until the switching node is finally connected to the DC-link midpoint via T 3 and T 2 's body diode. The turn-on of T 2 at the end of the dead time interval is, thus, lossless, and accordingly, this transition is a soft-switching event. The transitions in Figure 1c,d follow analogous steps. The indicated charging/discharging currents give rise to losses. To quantify these capacitive switching losses, all four switching events shown in Figure 1 are analyzed using the method reported in [14], which is based on the energy balance expression where E initial and E final are the total stored energies in all device capacitances before and after the commutation, respectively, while E source and E load are the energies provided by the DC-link and absorbed by the output load during the transition, respectively. 
Note that instantaneous switching transitions are assumed (i.e., no V-I overlap across the MOSFET channel); hence, no energy is transferred to the load during the hard-switching events (a) and (c) (i.e., E load = 0), whereas no loss is generated during the soft-switching events (b) and (d) (i.e., E loss,cap = 0, assuming a sufficient dead time to complete the voltage transition [14]). Therefore, the energy balance terms for the two hard-switching events (a) and (c) are derived as Equations (2) and (3), for I sw > 0 and I sw < 0 respectively, where Q oss and E oss refer to the charge and the energy stored in the semiconductor output capacitance C oss , obtained by integrating C oss (v) and v·C oss (v), respectively, over the blocking voltage. For reasons of clarity and compactness, we define the energy terms in Equations (5)-(8), which are graphically illustrated in Figure 2 for a Wolfspeed 1200 V 32 mΩ SiC MOSFET and V dc = 800 V. By inserting Equations (2) and (3) in Equation (1) and leveraging Equations (5)-(8), straightforward capacitive loss expressions (9) and (10) are obtained. Note that different expressions result for I sw > 0 and I sw < 0, as the power semiconductors involved in the respective commutations are different (i.e., T 1 ≠ T 2 ; for example, in a bridge-leg with 800 V DC-link voltage, T 1 is typically a 1200 V MOSFET, whereas T 2 is typically a 650 V MOSFET) and the charging/discharging of C oss,T 4 (i.e., the output capacitance of the third power semiconductor not actively involved in the commutation) is affected by the current direction, as it is charged from V dc /2 to V dc in the hard-switching transition with I sw > 0, shown in Figure 1a, but discharged from V dc to V dc /2 in the hard-switching transition with I sw < 0, shown in Figure 1c. Simplified Hard-Switching Loss Model The total losses generated by a hard-switching commutation in an arbitrary SiC MOSFET bridge-leg can be expressed as in Equation (11) [12], where V sw and I sw are the switched voltage and current, respectively, E loss,cap is the capacitive loss contribution (depending on the switched voltage and the direction of the switched current, cf. Equations (9) and (10) in Section 2), and Q rr is the reverse-recovery charge of the MOSFET body diode involved in the commutation process (e.g., the body diode of T 2 in Figure 1a and the body diode of T 1 in Figure 1c). The last two terms of Equation (11) represent the V-I overlap losses and depend on the voltage and current time derivatives during the overlap time. It is worth noting that Equation (11) only represents turn-on losses, as the switching losses during the turn-off transition can typically be neglected if the MOSFET is assumed to be turned off fast enough [12]. A simplified switching-loss model, accounting only for the unavoidable charge-related losses [6,11], can be obtained by assuming infinitely fast transitions, such that the V-I overlap loss contributions in Equation (11) can be neglected. This assumption also makes it possible to express the diode reverse-recovery charge as a linear function of the switched current [15]: infinitely fast current transitions force the complete diode forward-bias injected charge (which is proportional to the conducted current) to be swept away as Q rr , because no time is left for charge recombination to take place. Thus, we obtain Q rr = τ·I sw (Equation (12)), where τ is the charge carrier recombination lifetime. Therefore, a simplified switching-loss model that is linear in the switched current is obtained from Equation (11) as E loss ≈ E loss,cap + V sw ·τ·I sw (Equation (13)), which represents a theoretical lower limit (as overlap losses are neglected) for hard-switching losses in arbitrary SiC MOSFET bridge-legs. 
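As a rough illustration of how these charge-related quantities can be evaluated from a datasheet C oss (v) curve, the sketch below integrates Q oss and E oss numerically and combines a lumped capacitive loss term with the linearized reverse-recovery term of Equation (13). The capacitive term is kept as a single E loss,cap input because the exact per-device expressions (9) and (10) depend on the specific commutation; the C oss (v) curve and all numbers are placeholders, not data for a real device.

```python
# Hedged sketch: charge-related hard-switching loss terms for a SiC bridge-leg.
# Q_oss(V) and E_oss(V) are obtained by integrating a datasheet C_oss(v) curve;
# the total is the lower-bound model of Eq. (13): E_hard ~ E_loss_cap + V_sw * tau * I_sw.
import numpy as np

def q_oss(v_grid, c_oss, v_sw):
    """Charge stored in C_oss up to v_sw: integral of C_oss(v) dv."""
    mask = v_grid <= v_sw
    return np.trapz(c_oss[mask], v_grid[mask])

def e_oss(v_grid, c_oss, v_sw):
    """Energy stored in C_oss up to v_sw: integral of v * C_oss(v) dv."""
    mask = v_grid <= v_sw
    return np.trapz(v_grid[mask] * c_oss[mask], v_grid[mask])

def e_hard_min(e_loss_cap, v_sw, tau, i_sw):
    """Simplified lower bound for the hard-switching energy (overlap losses neglected)."""
    return e_loss_cap + v_sw * tau * i_sw

# Placeholder C_oss(v) curve (a crude voltage-dependent capacitance, not a real device)
v = np.linspace(0.0, 800.0, 801)
c = 2e-9 / np.sqrt(1.0 + v / 10.0)          # F

print("Q_oss(400 V) =", q_oss(v, c, 400.0), "C")
print("E_oss(400 V) =", e_oss(v, c, 400.0), "J")
print("E_hard,min   =", e_hard_min(e_loss_cap=150e-6, v_sw=400.0, tau=15e-9, i_sw=20.0), "J")
```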
Remarkably, Equation (13) depends solely on typically available manufacturer datasheet information, since E loss,cap can be extracted from the C oss (v) curve (cf., Section 2), and τ can be obtained from the reverse-recovery charge data (i.e., Q rr , I sw ) by inverting Equation (12). In particular, with τ being approximately linearly dependent on the semiconductor junction temperature T j [16], two Q rr values at different temperatures are sufficient to roughly estimate the reverse-recovery losses for an arbitrary T j . It is worth noting that datasheet values for Q rr typically include the semiconductor's Q oss (V sw ), as the bipolar and capacitive charge components are indistinguishable during reverse-recovery charge measurements [17]. Therefore, Q oss must first be subtracted from datasheet Q rr values before using them in Equation (12). Experimental Validation This section aims to validate and assess the accuracy of, first, the capacitive loss analysis described in Section 2 and, then, its combination with the simplified hard-switching-loss model proposed in Section 3. We use the 3LTT bridge-leg prototype shown in Figure 3, which employs third-generation 1200 V 32 mΩ (for T 1 , T 4 ) and 650 V 25 mΩ (for T 2 , T 3 ) SiC MOSFETs from Wolfspeed in four-pin TO-247-4 packages (i.e., featuring a Kelvin source pin for faster switching). To obtain accurate switching loss results, we employ a transient calorimetric measurement method [18], specifically, the variant presented and validated in [19]. With this approach, the semiconductor devices are mechanically connected and thermally coupled to a brass block acting as a heat sink. By measuring the time required for the brass block temperature to increase by a defined amount (i.e., by 10°C in the present case), and by subtracting the estimated conduction losses (the on-state resistance of the devices under test is measured at different temperatures during the calibration phase of the calorimetric measurement setup), the semiconductor switching losses can be extracted. Figure 3. Overview of the 3LTT bridge-leg test board and brass heat sink used for calorimetric loss measurements. No-Load Operation (I sw = 0) To verify the capacitive loss model described in Section 2, loss measurements at zero output current (i.e., no load) are performed for different switched voltages (i.e., different DC-link voltages). The no-load operation makes it possible to avoid all current-dependent terms in Equation (11) and, thus, to accurately determine the 3LTT bridge-leg capacitive losses, which are defined by the sum of Equations (9) and (10). Figure 4 compares the calorimetrically measured no-load losses and the datasheet-based estimations using the proposed capacitive switching-loss model. To quantify the additional capacitive energy contribution E c,T 4 + E d,T 4 coming from the presence of a third switch that is not actively involved in the commutation (i.e., T 4 for the case at hand), two sets of measurements are performed, with T 4 electrically connected to or disconnected from the circuit. The results show excellent correspondence between measurements and estimations, supporting the validity of the described capacitive loss model. Figure 4. Comparison between estimated and measured zero-output-current losses in the 3LTT bridge-leg (T 1 = T 4 : C3M0032120K, T 2 = T 3 : C3M0025065K) as a function of the DC-link voltage V dc . 
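The sketch below illustrates the datasheet-based extraction of τ described earlier in this passage: the capacitive charge Q oss (V sw ) is subtracted from the quoted Q rr , τ is obtained by inverting Q rr = τ·I sw , and two temperature points are interpolated linearly in T j . The device numbers are placeholders, not values from the Wolfspeed datasheets.

```python
# Hedged sketch: extract the recombination lifetime tau from datasheet Q_rr data
# (Q_oss subtracted first), then interpolate tau linearly in junction temperature.
def tau_from_datasheet(q_rr_datasheet: float, q_oss_at_vsw: float, i_sw: float) -> float:
    """tau = (Q_rr - Q_oss) / I_sw, inverting Eq. (12) after removing the capacitive part."""
    return (q_rr_datasheet - q_oss_at_vsw) / i_sw

def tau_at_temperature(tj: float, tj1: float, tau1: float, tj2: float, tau2: float) -> float:
    """Linear interpolation/extrapolation of tau(Tj) between two datasheet points."""
    return tau1 + (tau2 - tau1) * (tj - tj1) / (tj2 - tj1)

# Placeholder inputs: Q_rr quoted at 25 degC and 175 degC for the same test current
tau_25 = tau_from_datasheet(q_rr_datasheet=180e-9, q_oss_at_vsw=90e-9, i_sw=20.0)
tau_175 = tau_from_datasheet(q_rr_datasheet=420e-9, q_oss_at_vsw=90e-9, i_sw=20.0)
print("tau(125 degC) =", tau_at_temperature(125.0, 25.0, tau_25, 175.0, tau_175), "s")
```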
The results are obtained by switching T 1 ↔ T 2 : the additional energy loss related to the charging/discharging of C oss,T 4 (i.e., E c,T 4 + E d,T 4 ) is indicated in black. The estimated energy losses take into account the measured parasitic capacitance C σ ≈ 35 pF between the switching node and the DC-link as E σ = 2 · ½ · C σ · V sw ². Operation under Load To assess the accuracy of the simplified hard-switching-loss model proposed in Section 3, loss measurements for positive and negative bridge-leg output currents (I sw , cf., Figure 1) are performed. Due to the temperature dependency of the reverse-recovery time constant τ, the bridge-leg duty cycle and switching frequency are adjusted to always achieve an estimated semiconductor junction temperature of around 125°C (±10°C). The switching losses are estimated according to the simplified loss model in Equation (13). As the Q rr information for both the 1200 V and 650 V MOSFETs is only provided at T j = 175°C, the datasheets belonging to the same semiconductor devices in a different, surface-mount TO-263-7L package are used to extract τ at T j = 25°C, enabling a linear interpolation between the τ(T j ) values. Figure 5 and Table 1 compare the experimental results and the estimations obtained with the proposed datasheet-based switching-loss model for the 3LTT bridge-leg, whereby Figure 5 also provides a breakdown of the capacitive loss contributions from Equations (9) or (10), respectively, and of the reverse-recovery losses. Considering its simplicity, the model predicts the measured hard-switching losses well, achieving a maximum underestimation error of 18% in the considered load current range. The model accuracy decreases with increasing |I sw |, as the unaccounted-for V-I overlap (caused by the finite dv/dt and di/dt values) increasingly affects the overall losses. The deviation between the measured and estimated losses is, in fact, well reflected by an approximate evaluation of the V-I overlap losses with Equation (11), assuming reasonable values of dv/dt ≈ 100 V/ns and di/dt ≈ 10 A/ns (i.e., according to measurements and datasheet information). As the switching speeds of next-generation WBG power transistors are expected to increase further, driven by the trend towards further integration of power electronic converters and, especially, by integrated gate-drive circuits [20], the accuracy of the proposed loss model is expected to improve accordingly. Figure 5. Comparison between estimated and measured hard-switching losses in the 3LTT bridge-leg (T 1 = T 4 : C3M0032120K, T 2 = T 3 : C3M0025065K) as a function of the switched current I sw at V dc = 800 V and T j ≈ 125°C. The estimated energy losses take into account the measured parasitic capacitance C σ ≈ 35 pF + 50 pF (between the switching node and the DC-link, and the winding capacitance of the load inductor), as E σ = ½ · C σ · V sw ². The estimated losses for T j = 25°C and T j = 175°C are indicated with dashed lines. Table 1. Comparison between estimated and measured hard-switching losses in the 3LTT bridge-leg as a function of the switched current I sw at V dc = 800 V and T j ≈ 125°C. The losses are estimated with (15) for I sw > 0 and with (16) for I sw < 0. Conclusions This paper describes a simplified method to estimate the hard-switching losses in SiC-based three-level T-type (3LTT) bridge-legs. Remarkably, the proposed method is applicable to other three-level topologies as well. 
Neglecting voltage/current overlap losses and considering only charge-related loss components, the proposed approach requires minimal information from datasheets. It not only accounts for the capacitive charge/discharge loss caused by the third semiconductor device that is subject to the voltage transient during hard-switching events, but is also applicable to commutations between semiconductor devices with different blocking voltage and/or current ratings. The proposed loss model is verified experimentally by calorimetrically measuring the switching losses of an 800 V DC-link 3LTT bridge-leg prototype employing 1200 V and 650 V SiC MOSFETs. The results show that, with the proposed model, the basic semiconductor information provided in the datasheet is sufficient to predict the switching losses of the 3LTT with reasonable accuracy, resulting in a maximum underestimation error of 18% with respect to calorimetric measurements. Whereas a certain loss underestimation must be expected due to not considering the overlap losses (whose importance diminishes with the increasing switching speeds enabled by future integrated gate drivers), it is worth highlighting that the estimation accuracy strongly depends on the quality of the Q rr information provided by the semiconductor device manufacturers in their datasheets. Unfortunately, this information is often unreliable (e.g., possibly including unwanted high-frequency ringing effects [17]) and/or is only provided for one operating point (i.e., a single combination of I sw , T j , di/dt). Therefore, better Q rr data quality, considering, for instance, the measurement procedure proposed in [17], and the availability of more data points in datasheets could further improve the accuracy of the proposed straightforward modeling approach for hard-switching losses in 3LTT (and other three-level) bridge-legs. Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflict of interest.
5,093.2
2022-05-25T00:00:00.000
[ "Engineering" ]
Vibration protection of sensitive components of infrared equipment in harsh environments This article addresses the principles of optimal vibration protection of the internal sensitive components of infrared equipment from harsh environmental vibration. The authors have developed an approach to the design of external vibration isolators with properties to minimise the vibration-induced line-of-sight jitter which is caused by the relative deflection of the infrared sensor and the optic system, subject to strict constraints on the allowable sway space of the entire electro-optic package. In this approach, the package itself is used as the first-level vibration isolation stage relative to the internal highly responsive components. It was predicted analytically, and confirmed experimentally, that the proposed vibration isolation system would be capable of a sixfold reduction of the dynamic response of the infrared sensor as compared to the case of rigid mounting of the entire package. Introduction Infrared (IR) imagers tremendously enhance the ability to detect and track ground, sea and air targets, and also to navigate at nighttime [2,6]. Their operating principle is based on the simple fact that warmer objects radiate more and cooler objects radiate less. Since their noise figure strongly depends on the operating temperature of the IR detector, a high-resolution imager requires cryogenic cooling down to 80 K and a high level of optic stabilisation. Modern sophisticated airborne thermal imagers, which require compact design, low input power and long life-times, often rely on closed-cycle cryogenic coolers. Stirling coolers are especially suitable for such applications. Compared to Gifford-McMahon and Joule-Thomson cycles, Stirling offers more than twice the cooling performance in the cooling power range 1-100 W. The application of new technologies allows the life-time figures for Stirling coolers to be well beyond 40,000 hours [5]. Stirling coolers, which may be of both split and integral types [6,11], typically comprise two major components: a compressor and an expander. In a split cooler these are interconnected by a flexible gas transfer line (a thin-walled stainless steel tube of small diameter) to provide for maximum flexibility in the system design and to isolate the IR detector from the vibration interference which is produced by the compressor. In the integral cooler these components are integrated in a common casing. The reciprocating motion of a compressor piston provides the required pressure pulses and the volumetric reciprocal change of a working agent (helium, typically) in the expansion space of an expander. A displacer, which is located inside a cold finger, shuttles the working agent back and forth from the cold side to the warm side of the cooler. During the expansion stage of the thermodynamic cycle, heat is absorbed from the cold finger tip (cold side of the cycle), and during the compression stage, heat is rejected to the ambient from the cold finger base (warm side of the cycle) [6,11]. It is a modern tendency to mount the IR sensor directly upon the cold finger tip. Such a concept, which is known as Integrated Dewar Cooler Assembly (IDCA), allows for practical elimination of the integration losses and better temperature uniformity across the IR sensor, as compared, for example, with the old-fashioned slip-on design [6]. The typical layout of an IDCA, which relies on the integral Stirling cryocooler RICOR model K508A, is shown in Fig. 1. 
Figure 2 shows the schematics of the integrated electro-optic package containing the integral cryogenic cooler 1 which carries the IR sensor 2 upon the cold finger tip 3. Both the cold finger and the IR sensor are located inside the vacuum dewar envelope 4. The cryogenic cooler, along with the appropriate optics 5, is mounted upon a single rigid structure (optic bench) which is required for proper optic alignment and stabilisation and also for placement of the accompanying electronics. Figure 3 shows a typical electro-optic device which relies on the integral RICOR model K508A cryogenic cooler. The well-known drawback of electro-optic devices which rely on the IDCA concept is their high sensitivity to external broadband random vibration. This is because, since it is desirable to decrease the heat conductivity of the cold fingers, they are typically thin-walled and manufactured of low-conductivity alloys such as stainless steel or titanium. A cold finger which carries an IR sensor may be treated as a cantilever beam with a lumped end mass. As a result of the low stiffness and damping intrinsic to such a structure, the IR sensor behaves as a lightly damped dynamic system whose principal natural frequency falls into the frequency range of external disturbances. Wideband random excitation, therefore, may give rise to a quasi-resonant dynamic response of the sensor relative to the rest of the optic system. As the level of line-of-sight jitter becomes comparable with the spatial resolution of a particular sensor, the vibration components contaminate the video signal, causing drastic degradation of the performance of the IR imager. Figure 4 shows the experimentally measured universal relative transmissibility of the IR sensor in a typical IDCA design. From curve-fitting-based identification, the natural frequency is 816 Hz and the loss factor is 3.2%. Such a low loss factor and natural frequency explain the high vulnerability of the IDCA design to external random wideband vibration, the excitation spectrum of which usually contains significant frequency components up to 2000 Hz. The known approaches to ruggedizing vibration-sensitive components involve different combinations of stiffening and damping treatments. These methods aim either to reduce the resonant amplitudes by damping, or to avoid them altogether by increasing the relevant resonant frequencies to above the excitation frequency. However, the traditional methods of ruggedizing become inadequate for the vibration control of the cold finger of a cryogenic cooler. An increase in the wall thickness of a cold finger and the application of additional supports for extra stiffness lead to an excessive growth in heat loading through the increase in conductivity and shuttle losses. Since outgassing inside the high-vacuum envelope is a concern, the application of typical polymer damping materials becomes practically impossible. 
To combat the problem of excessive response of the IR sensor, designers are now looking at using vibration isolators. As the static and dynamic alignment of the IR sensor relative to the rest of the optical system is a concern, vibration isolation of the entire electro-optic package is the only option. It is important to note that the large dynamic response of the entire isolated package involves "solid body" modes of motion and typically takes place at low frequencies. Since the deflection of the IR sensor relative to the optic system is not involved in such motion, the quality of the IR imaging of remote targets is not affected. Figure 5 shows the schematics of such an isolated IR device. Vibration isolation is the simplest, most widespread and best-studied method of vibration protection [4,7]. As is known, the best isolation in the typical high-frequency span may be achieved when the natural frequency and loss factor of the vibration isolator are low. Unfortunately, such an isolator is only feasible in applications where intense shock, random excitation and constant accelerations (g-loads) are not typical. When equipment containing such a low-frequency, lightly damped vibration isolator is exposed to the aforementioned harsh environmental conditions, which are typical of airborne applications, the problem of excessive deflections becomes a serious concern. These conditions include wideband random excitation (e.g. flight through turbulent flow) and the high g-loads experienced by the airborne vehicle at take-off, climb, high-speed turns, acceleration, etc. It was recently revealed that newly designed jet fighters (e.g. the Eurofighter) might develop accelerations up to 12 g. Vibration isolators As a result, plenty of free "rattle space" must be allowed around the equipment, and the vibration isolators, thermal and electrical interfaces need to be of special design. Such an approach complicates the entire system and makes it both unreliable and cost-ineffective. An increase in the natural frequency and loss factor of an isolator allows for close control of the above deflections. However, even where analysis of the isolation system is carried out, it is traditionally the response of the entire package that is optimised [1,4,7]. Such a design eventually requires the application of a heavily (critically) damped vibration isolator. It is a widespread opinion, supported by the leading manufacturers of vibration isolators, that highly damped isolation materials are the only choice for adequate protection of electronic equipment [1]. Such a concept completely misses the purpose of using isolators, namely to protect the sensitive internal components, and eventually calls for the application of inadequate, highly damped vibration isolators with poor vibration isolation in the high-frequency range which typically contains the natural frequencies of the above critical components. The novel approach developed in this paper is the use of vibration isolators with properties chosen to minimise the relative dynamic response of the internal sensitive components (the IR sensor relative to the optic system, in this instance), subject to the constraints imposed on the peak deflections of the entire IR package. Such a design approach uses the existing equipment as the first-level vibration isolation stage with respect to the internal sensitive components, and is based on the authors' ideas [8-10] as applied to the vibration protection of critical components in electronic equipment. 
In this article, the authors develop the mathematical model of the two-degree-of-freedom (TDOF) vibration protection system and the procedure of its optimisation. The results of the numerical analysis are backed up by experiment. Under the "white noise" harsh random vibration test (12 g rms; 10 to 2000 Hz) the dynamic response of the IR sensor was reduced sixfold (from 14.4 µm rms to 2.3 µm rms) as compared with the case of rigid mounting of the entire package. Such a level of line-of-sight jitter meets the customer specification of 3 µm rms. The use of such an isolation system is very attractive across the industry, as no redesign of the sensitive internal components of existing equipment is required. An additional benefit is that considerable vibration protection of the accompanying sensitive electronics and optics may be attained. Experimental study of dynamic properties of cold finger The experimental rig shown in Fig. 6 was created to study the dynamic properties of the system. The cryogenic cooler 1 (RICOR, model K508A), which carries the IR sensor dummy upon the cold finger tip, is mounted over the vibration exciter 2 (Ling Dynamic Systems, model V550). The control accelerometer 3 (Bruel & Kjaer, Type 4393) is mounted on the fixture. Its signal is fed through the charge amplifier 4 (Bruel & Kjaer, Type 2635) to the dual-channel vibration analyser 5 (Signal Calc Ace, Data Physics Corporation) and simultaneously to the vibration controller/power amplifier 6 (Ling Dynamic Systems, model DVC 48/PA550L). The dual-beam fibre laser vibrometer 7 (Polytec, model OFV 502) measures the velocity of the IR dummy relative to the cold finger base. In the experiments, the vacuum envelope covered the cold finger, and measurements were taken through the transparent quartz window. Figure 7 shows the layout of the experimental rig (a) with and (b) without the vacuum envelope. The cryogenic cooler was subjected to the "white noise" harsh random vibration test (12.4 g rms, 10 to 2000 Hz). It is important to note that prior to the experiment the cryocooler was pre-cooled to simulate the actual damping properties intrinsic to the cold finger. The experimentally measured relative transfer function (transmissibility) of the cold finger is shown in Fig. 4. This curve indicates that the dynamic system under investigation behaves as a single-degree-of-freedom (SDOF) system in the frequency range encountered. From curve-fitting, the modal natural frequency Ω and loss factor ζ were estimated to be Ω/2π = 816 Hz and ζ = 0.032, respectively. Figure 8 shows the actual PSD of the dynamic deflection of the IR sensor relative to the base; it indicates an overall level of 14.4 µm rms. From the spatial resolution of the typical IR sensor, only 3 µm rms of overall relative deflection of the cold finger tip is allowed for smooth IR imaging. Mathematical model of cold finger As the cold finger with the mounted IR sensor behaves as an SDOF system in the frequency range encountered, the mathematical description of its dynamic behaviour under wideband random excitation may be carried out using the complex universal absolute transmissibility [3,4] and the universal relative complex transmissibility, where ω is the angular frequency and j = √−1 is the imaginary unit. The PSD of the relative deflection of the IR sensor, which is described by the function Z(t), may be calculated [3,4] in terms of this transmissibility, where S Ÿ (ω) is the single-sided PSD of the base acceleration, given by the function Ÿ(t). 
The RMS of relative deflection may be derived by integrating this PSD over the excitation band [3,4]. By making use of the values of the natural frequency and loss factor obtained from curve-fitting and the numerical value of the excitation PSD S Ÿ (ω) = 0.08 g²/Hz over 10-2000 Hz, we calculate the PSD of relative deflection and obtain an RMS value of relative deflection of 14.9 µm, which is fairly close to the experimentally measured value. Model of vibration protection system and general relationships Figure 9 shows the model of a TDOF vibration protection system, where the primary sub-system has the modal natural frequency Ω 1 and loss factor ζ 1 and represents the vibration-isolated IR package. The secondary sub-system has the modal natural frequency Ω and loss factor ζ and represents the cold finger with the mounted IR sensor. The base vibration is given by the function Y(t). The absolute deflections of the primary and secondary subsystems are X 1 (t) and X(t). The relative deflection of the primary subsystem to the base is Z 1 (t) = X 1 (t) − Y(t), and the relative deflection of the secondary subsystem to the primary one is Z(t) = X(t) − X 1 (t). It is important to note that the mass of the secondary subsystem is negligibly small compared to that of the primary subsystem. In this particular case the mass of the sensor and the effective mass of the cold finger were approximately 200 times smaller than that of the entire system. Therefore, the dynamic response of the primary system may be considered to be independent of the secondary subsystem. It may also be thought of as the vibration input to the secondary subsystem. The absolute and relative complex transmissibilities of the primary subsystem are derived similarly to (1) and (2). The PSD and RMS of the absolute acceleration of the primary subsystem are calculated accordingly [3,4], and similarly the PSD and RMS of the relative deflection of the primary subsystem are obtained. In applying the 3σ rule [7], which provides for the instantaneous level of a normally distributed deflection to be less than 3σ with a probability of 99.73%, and accounting for the additional quasi-static relative deflection due to the g-loading, the total peak relative deflection may be derived, where G is the specified level of g-loading. The vibration of the primary subsystem may be thought of as the excitation to the secondary subsystem; therefore, the PSD and RMS of the relative deflection of the secondary subsystem may be calculated in the same manner. Statement and solution to optimal problem As stated above, even a large motion of the entire IR package, which does not give rise to excessive displacement of the IR detector relative to the optic system, does not affect the quality of imaging of remote targets. Therefore, the major objective of the optimal design of such a vibration protection system is the minimisation of the relative deflection of the secondary subsystem with respect to the primary one, subject to constraints on the limited peak deflection of the primary subsystem relative to the base. These constraints are typically imposed by the design of the electro-optic device enclosure, electrical harness, thermal interfaces, etc. Mathematically, this may be expressed as minimising σ Z over the primary-isolator parameters subject to the peak deflection of the primary subsystem not exceeding ∆, where ∆ is the allowable peak deflection of the primary subsystem. 
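A numerical sketch of the relative-deflection PSD and RMS calculation described above is given below: the relative-deflection frequency response of a single-degree-of-freedom system with structural (hysteretic) damping is integrated over the excitation band. The structural-damping form of the transfer function is an assumption consistent with the paper's use of a loss factor, not the authors' exact expression.

```python
# Hedged sketch: RMS relative deflection of an SDOF cold finger under white-noise
# base acceleration, assuming a structural (hysteretic) damping model.
import numpy as np

G = 9.81                      # m/s^2 per g
f_n, eta = 816.0, 0.032       # natural frequency [Hz] and loss factor from curve-fitting
S_acc = 0.08 * G**2           # base-acceleration PSD [(m/s^2)^2 / Hz], i.e. 0.08 g^2/Hz
f = np.linspace(10.0, 2000.0, 200_000)
omega, omega_n = 2 * np.pi * f, 2 * np.pi * f_n

# Relative-deflection frequency response to base acceleration (assumed hysteretic damping):
# Z(omega) = -Ydd(omega) / (omega_n^2 * (1 + j*eta) - omega^2)
H = 1.0 / (omega_n**2 * (1 + 1j * eta) - omega**2)
S_z = np.abs(H) ** 2 * S_acc          # PSD of relative deflection [m^2/Hz]

sigma_z = np.sqrt(np.trapz(S_z, f))   # RMS relative deflection [m]
print(f"sigma_Z = {sigma_z * 1e6:.1f} um rms")   # same order of magnitude as the ~15 um quoted
```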
Since the natural frequency and loss factor of the cold finger are not supposed to be altered, we take Ω 2 /2π = 816 Hz and ζ 2 = 0.032 (as obtained above from the curve-fitting). The PSD of the excitation is S Ÿ (ω) = 0.08 g²/Hz in the frequency range 10-2000 Hz. The maximum level of g-loading is 12 g. The allowable peak deflection of the electro-optic package is ∆ = 0.5 mm. The remaining two variables, namely the natural frequency and loss factor of the primary subsystem, Ω 1 and ζ 1 , may be manipulated to meet the conditions of (14). Expressions (1), (2), (5), (7)-(14) were implemented in an MS Excel worksheet, and the numerical optimisation relies on the standard Solver add-in procedure. As a result of the solution to the optimal problem (14), the "optimal" primary vibration isolator was determined to have the parameters given in (15). Under the condition that the primary vibration isolator is chosen in accordance with (15), the overall level of vibration experienced by the electro-optic package is σ Ẍ1 = 5 g (an attenuation factor of 2.9 compared with the overall base vibration level). The peak deflection of the entire package relative to the base is z 1,peak = 0.5 mm, of which the dynamic component is 0.253 mm and the quasi-static component is 0.247 mm. At the same time, the overall level of relative dynamic deflection of the IR sensor with respect to the cold finger base is σ Z = 2.35 µm rms, which indicates a sixfold attenuation as compared to the case of rigid mounting of the entire IR package. Figure 10 compares the PSD of the excitation acceleration with that of the vibration-isolated package. Figure 11 shows the PSD of the relative deflection of the electro-optic package with respect to the base, indicating an overall level of 0.084 mm rms. Figure 12 shows the PSD of the relative deflection of the IR sensor with respect to the cold finger base, indicating the aforementioned overall level of 2.35 µm rms (compare with Fig. 8). Analysis of sensitivity Since the vibration protection system relies on an optimised vibration isolator, it is important to determine the sensitivity of the dynamic response of the IR sensor to the properties of the primary isolator. Figure 13 shows, for example, the variation of the dynamic responses of both the primary and secondary sub-systems in response to variation of the loss factor of the primary isolator, the natural frequency of which remains constant at Ω 1 /2π = 110 Hz. An analysis of this figure indicates the low sensitivity of the dynamic responses of both the primary and secondary subsystems to relatively large deviations of the loss factor from its optimal value. 
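A compact sketch of this constrained optimisation is shown below: the primary-isolator parameters (natural frequency and loss factor) are varied to minimise the RMS deflection of the cold-finger tip, subject to a cap on the peak deflection of the package. The cascaded structural-damping transfer functions, the penalised objective and the peak-deflection estimate (3σ dynamic term plus quasi-static g-load term) are illustrative assumptions, as are all numerical settings; the paper itself uses an MS Excel Solver worksheet rather than this formulation.

```python
# Hedged sketch: choose primary-isolator parameters (f1, eta1) that minimise the RMS
# relative deflection of the IR sensor, subject to a peak-deflection constraint on the
# package. Cascaded SDOF models with hysteretic damping are an illustrative assumption.
import numpy as np
from scipy.optimize import minimize

G = 9.81
f = np.linspace(10.0, 2000.0, 50_000)
w = 2 * np.pi * f
S_acc = 0.08 * G**2                      # base acceleration PSD [(m/s^2)^2/Hz]
f2, eta2 = 816.0, 0.032                  # cold finger (secondary) parameters
g_load, delta_max = 12.0, 0.5e-3         # quasi-static g-load and allowed sway [m]

def responses(f1, eta1):
    w1, w2 = 2 * np.pi * f1, 2 * np.pi * f2
    T_abs1 = w1**2 * (1 + 1j * eta1) / (w1**2 * (1 + 1j * eta1) - w**2)  # base acc -> package acc
    H_rel1 = 1.0 / (w1**2 * (1 + 1j * eta1) - w**2)                      # base acc -> package defl
    H_rel2 = 1.0 / (w2**2 * (1 + 1j * eta2) - w**2)                      # package acc -> sensor defl
    sigma_z1 = np.sqrt(np.trapz(np.abs(H_rel1)**2 * S_acc, f))
    sigma_z2 = np.sqrt(np.trapz(np.abs(H_rel2 * T_abs1)**2 * S_acc, f))
    peak_z1 = 3 * sigma_z1 + g_load * G / (2 * np.pi * f1)**2            # dynamic + quasi-static
    return sigma_z2, peak_z1

def objective(x):
    sigma_z2, peak_z1 = responses(*x)
    return sigma_z2 * 1e6 + 1e4 * max(0.0, peak_z1 - delta_max)          # penalised objective

res = minimize(objective, x0=[100.0, 0.3], bounds=[(20.0, 400.0), (0.05, 1.0)],
               method="L-BFGS-B")
print("f1, eta1:", res.x, " sensor deflection [um rms]:", responses(*res.x)[0] * 1e6)
```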
Choice of vibration isolator In the experiments we used standard, commercially available Shock Tech cable mounts (see also Enidine, Barry Controls, Aeroflex International [12]) providing the desired loss factor and natural frequency. Such cable mounts are of all-metal design, constructed out of stainless steel cable and aluminium bars, and are especially intended to withstand severe environmental conditions while demonstrating no outgassing and ageing, long fatigue life and persistence of parameters over a wider temperature range than polymer isolators. The wire rope cables in these isolators are inherently damped through internal wire flexure hysteresis, thus providing a loss factor in the range of 30% (treated cable) [12]. Since the quality of the vibration protection system depends strongly on the parameters of the primary suspension, this feature is probably the most critical consideration in the choice of a proper vibration isolator. Test rig Figure 14 shows the schematics of the experimental rig. In general, the notations are similar to those in Fig. 6. The cryogenic cooler is suspended from the vibration exciter table by means of two Shock Tech cable mounts 8. An additional accelerometer and charge amplifier are used for measuring the dynamic response of the cryogenic cooler, which, in this case, is clearly different from the motion of the vibration exciter. During the experiments the vacuum envelope was mounted to cover the cold finger and measurements were taken through the transparent window. Results of measurements Figure 16 shows the experimentally measured absolute transmissibility of the primary subsystem (label Experiment). From curve-fitting, the natural frequency and loss factor are estimated as 117.5 Hz and 0.26, respectively. These values are only slightly different from the optimal values (15). Figure 17 compares the PSD of the excitation and the acceleration of the primary subsystem. The obtained results are in close agreement with the analytical prediction (see Fig. 10). Figure 18 shows the experimentally measured PSD of the relative deflection of the primary subsystem, showing a 0.24 mm peak deflection (the 3σ rule was applied to the 0.08 mm rms value). These experimental results are also in good agreement with the analytical prediction in Fig. 11. Finally, Fig. 19 shows the experimentally measured PSD of the relative deflection of the cold finger tip, which indicates an overall level of 2.26 µm rms (compare this with 2.35 µm rms from the analytical prediction). The closeness of the obtained results is evident. Conclusions In this article the authors suggest that a relatively heavy electro-optic device should be used as the first-level vibration isolation stage relative to the sensitive internal components of the IDCA package. For this purpose they developed an approach to the optimal design of vibration isolators with properties chosen to minimise the response of the IR sensor relative to the rest of the optic system, subject to strict constraints on the allowable sway space of the entire electro-optic package. It was predicted analytically, and confirmed experimentally, that the proposed vibration protection system would be capable of a sixfold reduction in the relative dynamic displacement of the IR sensor as compared with the case of its rigid mounting. 
The proposed approach may provide benefit across a wide range of applications. The vibration protection arrangement described may be applied wherever high-quality, rugged and inexpensive dynamic protection of sensitive IR equipment is required. The use of such an isolation system is attractive across the industry, since no redesign of the sensitive internal components of existing equipment is required. Additionally, considerable vibration protection of the accompanying sensitive electronics and optics of the IDCA package may be achieved.

Fig. 10. Excitation and dynamic response of the optimised vibration isolator.
Fig. 16. Experimentally measured absolute transmissibility of the primary vibration isolator and estimation of its modal parameters by curve-fitting.
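The modal-parameter estimation by curve-fitting referred to in the Fig. 16 caption can be illustrated with a short script. This is only a sketch of the general idea, not the authors' procedure: it fits the transmissibility magnitude of a hysteretically damped single-degree-of-freedom model to a measured transmissibility curve, and the arrays f_meas and T_meas are placeholders standing in for data such as that of Fig. 16.

```python
"""Illustrative curve fit (not from the paper): estimate natural frequency and
loss factor from a measured absolute transmissibility curve."""
import numpy as np
from scipy.optimize import curve_fit

def transmissibility(f, fn, eta):
    """Absolute transmissibility magnitude of a SDOF with structural (hysteretic) damping."""
    r = f / fn
    return np.sqrt((1.0 + eta**2) / ((1.0 - r**2) ** 2 + eta**2))

# Placeholder "measurement": synthetic data near the reported values, with noise added.
rng = np.random.default_rng(0)
f_meas = np.linspace(20.0, 600.0, 300)
T_meas = transmissibility(f_meas, 117.5, 0.26) * (1.0 + 0.03 * rng.standard_normal(f_meas.size))

(fn_hat, eta_hat), _ = curve_fit(transmissibility, f_meas, T_meas, p0=[100.0, 0.2])
print(f"estimated natural frequency: {fn_hat:.1f} Hz, loss factor: {eta_hat:.2f}")
```

In practice one would restrict the fitted band to the neighbourhood of the resonance, where the single-mode model describes a cable mount best.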
Automating the first and last mile? Reframing the ‘challenges’ of everyday mobilities Abstract In this article, we interrogate the utility of conceptualising the ‘first and last mile’ (FLM) as a ‘challenge’ to be addressed through automated and integrated mobility services. We critically engage with the concept through a design anthropological approach which takes two steps: first, to complicate literatures that construct the FLM as a place where automated, service-based and micro-mobility innovations will engender sustainable modal choices above individual automobility; and second, to demonstrate how people’s situated mobility competencies and values shape social and material realities and future imaginaries of everyday mobilities. To do so, we draw on ethnographic research into everyday mobility practices, meanings and imaginaries in a suburban neighbourhood in Sweden. We show how locally situated mobilities both challenge the spatial and temporal underpinnings of the first and last mile concept and resist universalist, technology-driven automation narratives. We argue that instead of attempting to bridge gaps in seemingly linear journeys through automated systems, there is a need to account for the practices, tensions and desires embedded in everyday mobilities.

Introduction In recent years the first and last mile (FLM) 'challenge' has become increasingly central to industry, consultancy and policy agendas for automated mobilities. In these dominant narratives, automated FLM solutions are seen to respond to a 'societal problem' whereby individuals use private cars to traverse the last miles of their everyday commutes due to a lack of convenient public transport. The quest for FLM solutions echoes typical technological solutionist (Morozov 2013) approaches of creating and applying automated solutions to perceived social or societal problems, which endure in the mobilities sector (Quilty et al. 2022). In this article, we interrogate the implications of formulating the FLM of passenger travel to and from home as a challenge which automated systems, technologies and services might solve. We confront dominant, universalist automation narratives with the mobility practices, tensions and desires that unfold in the actual, immediate time-spaces around the home. That is, we focus on how people navigate the spaces presented in FLM propositions. In doing so, we contribute to, and seek to complicate, recent literatures that focus on the first and last mile as a place for implementing automated, service-based and micro-mobility transport options in order to nudge people into more sustainable modes of travel other than using individually owned cars (Doody et al. 2022; Behrendt 2020; Cohen 2006). FLM concepts are an increasingly common part of the mobility solutions vocabulary. The 'last mile' concept has been used to refer to the final segment of freight transport journeys (Oliveira et al.
2017) and has recently gained traction in the field of passenger mobility, where it is used to refer to gaps in transit coverage at the very beginning and end of commuter travel. First mile solutions are often based on an assumption that people use private cars to cover this part of their journey, or indeed their entire journey, because they have insufficient public transport options or lack incentives to use public transport when it is available (Lesh 2013; Mohiuddin 2021), especially in suburban contexts. Thus, within urban planning and transportation sustainability research it is suggested that improving access to transit networks or shared mobility options could reduce individual private car use and enable a shift towards shared mobility. Possible FLM solutions proposed to replace privately owned cars for (sub)urban passenger journeys include automated micromobility systems using lightweight, swift-paced micro-vehicles such as e-scooters, monowheels, bicycles or skateboards (Bahrami and Rigal 2022), as well as (Shared) Autonomous Vehicles (AVs). In this context, Mobility as a Service (MaaS) (Wong, Hensher, and Mulley 2020) and Mobility on Demand (MoD) systems are thought to offer increased efficiency and flexibility for AV sharing (Chong et al. 2013; Jones et al. 2023). However, existing visions of automated mobility solutions are limited in several ways. First, they are often naïve in their technological determinism and optimism (Kębłowski, Dobruszkes, and Boussauw 2022). Second, they tend to 'center upon issues of utility, efficiency, and economic growth achieved through "rational" planning and decision-making' (Kębłowski and Bassens 2017, 414), leading them to assume that commuter mobilities are linear, uncomplicated and bound to specific spatialities and temporalities, and thus to frame the FLM temporally and spatially. Third, such visions are shaped around the dominant figure of the commuter. They imply that commuters are able-bodied, independent, digitally connected and affluent, and correspond with the notion of the young middle-class professional male: a rational actor who responds to industry and policy stimuli as intended (Strengers 2013). The most prominent FLM solutions hence tend to be designed as individual, service-based and on-demand, often including AI-powered automated decision-making functionalities (ride-hailing or integrated digital mobility service platforms). These FLM visions therefore frequently fail to account for those mobilities that are the most challenging, or disregard the purpose and meanings of everyday mobilities, which are inherently social in nature. Moreover, in the narratives supporting these types of visions, it is unclear what the first/last mile is and implies, both spatially and temporally.
We argue that to achieve the objective of enabling people to minimise individual car use, more attention needs to be paid to diverse mobilities and to the realities and contingencies of the FLM. We therefore aim to provide an understanding of the beginnings and endings of everyday journeys beyond their status as a societal problem related to unsustainable mobility patterns that can be solved by introducing automated technological solutions that supposedly make travel time-efficient and seamless through the physical space surrounding people's homes and destinations. Instead, we analyse FLM mobilities on the terms of the localities of people's homes and destinations, which are complex lived spaces where ways of inhabiting and belonging are negotiated. As we show below, these mobilities are inseparable from wider social and material networks implicated in everyday mobilities. Therefore, we interrogate and re-frame the 'first and last mile' concept by questioning the pertinence of the spatial and temporal category of the FLM. In doing so we take a design anthropological approach which theorises everyday spaces, temporalities and practices as always emergent and contingent, and attends to the ways in which everyday life situations and possible futures are constituted and shaped through people's everyday experiences, practices and imaginaries (Pink and Salazar 2017). Applied to future automated mobilities, design anthropology has already highlighted how human creativity and improvisation constitute socio-technical relations and configurations that are unanticipated by dominant mobilities narratives (Pink, Fors, and Glöss 2019). Our approach also accounts for how people continuously learn and adapt their mobilities as they encounter the complex social, material and technological environments in which they live. A design anthropological approach (Fors and Pink 2017) therefore highlights how people accrue mobility competencies in locally situated and changing contexts, which shape and are shaped by the social and material environments in which they live. Here, we apply this lens to ask how people actually go about solving their everyday logistics in ways that are adaptive to the contingencies that arise in their daily routines, relationships and environments as they travel locally. We analyse the particular situated mobility competencies, values and socialities that are constructed and enacted as people do so. In doing so, our ambition with this article is to contribute to a social understanding of what happens within the immediate space around the home and destination, and of the implication of this for how to grapple with 'the first and last mile challenge'. To demonstrate this, we draw on ethnographic fieldwork into the spatial and temporal circumstances of the beginnings and ends of people's journeys in a peri-urban neighbourhood in Sweden. The insights from our analysis of these everyday mobilities and competencies contest and question the assumptions underpinning dominant technological solutionist understandings of both the FLM itself and the supposed challenges it presents.
In the following sections we first discuss the most prominent discourses of FLM 'challenges' and their association with automated technologies and outline our methods.We then discuss three related dimensions of our ethnographic findings.As is conventional in design anthropological analysis, we use the dialogue between the findings and the insights they produce (rather than separating them): to question the direct causal relationship between automobility/private car ownership and FLM problems; to challenge the FLM as a spatial and temporal category of daily mobility; and, in doing so, to demonstrate how situated mobility competencies, values and socialities participate in generating local and workable mobility spatialities and temporalities.To conclude, we sum up and call for a reconsideration of the automated FLM and the benefits it promises, through attention to the diverse mobilities that already constitute the space-time it represents. Context: automation and the 'last mile challenge' Autonomous vehicles (AVs) and 'shared' Mobility as a Service (MaaS) or Mobility on Demand (MoD) systems have frequently been proposed as 'solutions' to a particular version of the 'first and last mile problem' applied to lower density or peripheral suburban areas (Legacy et al. 2019;Maginn, Burton, and Legacy 2018;Ohnemus and Perl 2016;Shaheen and Chan 2016).Effectively, this means that universalised automated technology solutions are being proposed (yet not successfully implemented), on the basis of unsubstantiated assumptions about how people engage with the everyday spatialities and temporalities that characterise the sites of departure and destination in their travel.In this section, we first outline current approaches to the FLM as a 'challenge' to be solved, before identifying a series of critical issues concerning this dominant conceptualisation. Solving the FLM challenge is frequently presented as a key factor in encouraging transit use and in compensating gaps in transit coverage (Gurumurthy, Kockelman, and Zuniga-Garcia 2020).From a technologically-driven perspective, it is believed that poor connections from public transport nodes to people's homes are the main reason for people's preference of the privately owned automobile (Lesh 2013;Shaheen and Chan 2016;Mohiuddin 2021).This vision of FLM passenger mobilities has also attracted commercial interest as large transport companies seek to position themselves as multimodal door-to-door service providers (Bonneville 2018).In turn, car manufacturers and other industry actors have started investing in MaaS and MoD platforms, 'shared' mobility, automated vehicles, and micromobility technologies. Shared mobility solutions have also increasingly been proposed as a way to 'address first-and last-mile connectivity with public transit, extend the catchment area of public transportation', and encourage 'multimodality for first-and last-mile trips rather than driving alone'.On-demand shared mobility services are regarded as a more flexible alternative to 'costly feeder bus services and land-intensive parking infrastructure' (Shaheen and Chan 2016). While these solutions include less investment-intensive and more time-proven practices such as peer-to-peer ridesharing, a particular focus has been placed on automated and autonomous vehicles (AVs). 1 Autonomous 'pods', self-driving shuttles and personal rapid transit vehicles in particular have been promoted as FLM solutions (Jones et al. 
2023).Beyond aspiring to provide access for elderly, low-income, and non-drivers, shared autonomous vehicles (SAV) are said to ensure the FLM of trips in low-density areas by integrating with public transport (Ohnemus and Perl 2016).Overall, despite calls to consider micromobility alternatives such as cycling (Behrendt 2020), debates involving the FLM have been largely focused on motorized, autonomous and networked vehicles and cars in particular. Technologically-driven FLM 'solutions' represent a form of 'technological solutionism' (Morozov 2013), which assumes that the societal problem is that people use private cars to drive the FLM because they have not been provided with more sustainable, efficient alternatives.Such approaches see travellers as rational actors, failing to account for the extensive evidence of the importance of sensory (Kent 2015), symbolic and socio-technical (Sheller and Urry 2016) dimensions of automobility.These solutions also fail to account for the diverse ways in which people really travel their FLMs, or what these journey segments mean to them.For example, that passengers will prioritise elements of convenience and combined activities over simple time efficiency and modal choices; preferring longer transits with fewer changes in order to allow for comfort or in-transit activities. Moreover, while the FLM 'challenge' is frequently presented as a homogeneous and technical issue, research has shown that the experience, practice and narratives of these journey segments varies widely across and within different cultures, contexts and spaces (Hickman and Vecia 2016).Thus, by overemphasizing the functional importance of the FLM (for modal choice), these approaches tend to underestimate its social, experiential and symbolic significance.Indeed, the concept of the FLM is closely associated with dominant figure of the mobile subject, most notably affluent and able-bodied male white-collar commuters executing regular to-and from work travels.However, daily mobilities are not limited to linear commuter journeys and are at least partly associated with everyday care, the responsibility for which tends to be unequally distributed, while mobility temporalities and spaces are gendered, and vehicles intended for emancipation may become vehicles for the extension of domestic work (Demoli and Gilow 2019). It is therefore crucial that we provide locally grounded, in-depth understandings of traveller practices, experiences and (shared) meanings.To do so, we begin by critically unpacking the concept of the FLM as a spatial and temporal category and account for how such mobilities are produced 'close to home'. Methods: situating the last mile through design ethnography To access local meanings and practices of mobility we developed design ethnographic (Pink et al. 2022) explorations through our engagement with a suburban area in Sweden.Design ethnography involves an approach to research which, amongst other things, brings dominant technologically solutionist narratives into relief with the ongoingly emergent realities, contingencies, ways of knowing and learning and creativity of everyday life (Pink et al. 
2022). Rather than working towards automation as a script for the future, our methods were designed to interrogate the possibilities of automation: to expose the power relations it implies and to surface existing and desired everyday mobilities and future mobility transitions. Our methodology was designed to foreground the experimental and experiential characteristics of people's everyday mobilities and to make sense of the messiness of everyday logistics (Fors et al. 2022). We engaged this approach to reflect on the FLM concept in relation to real everyday life experience, by investigating how people living in a specific locality navigated, experienced and constituted the spatialities and temporalities associated in dominant narratives with the FLM. Such suburban spaces offer laboratories for sustainable change (Loubie 2019), due to their complex environments, low public transport offerings, and frequently monomodal, car-centred mobility. Moreover, such peripheral spaces are undergoing profound changes (in terms of demographic composition, spatial organisation and built environment) that question the meaning and implications of suburban and peripheral spaces themselves. For instance, in processes of rural gentrification and continued suburban growth, agricultural spaces are becoming commuter territory, while simultaneously rural territory is being reinvested not only as a periphery to the city, but also as a place of activity, a source of revenue and the basis for a more sustainable lifestyle. Our fieldwork was undertaken in the suburbs of one of Sweden's larger cities, with a growing population of just under 7000 in a hilly semi-rural area stretching across around 70 km², where distant clusters of residential housing are connected to each other and to the city by a single fast road. The specific topography, infrastructure and public transport availability make this a particularly compelling example in relation to mobility transitions. In this rural area - which we call Monsite - 20 km from the nearest city centre, where former summer houses had previously been transformed into permanent residences, new, dense housing development is challenging the existing infrastructure and demographic makeup of the area. The area is part of what Hedlund (2016) would term a 'middle-class countryside within the urban shadow'. Its population has high educational levels; most work in qualified sectors and form a significant part of the working population commuting to the urban centre. Simultaneously, residents and local organisations are turning their attention to local activities alongside the development of small-scale organic farming, adventure tourism and local food production. Lined on two sides by a nature reserve, lakes and wooded hills, the remaining fields and meadows are punctuated with horse stables, persisting farms and more isolated housing.
Twenty-two residents were recruited using a snowball method starting from key informants from local neighbourhood organisations (a neighbourhood development committee, school, church and local sports clubs), and from recruitment and outreach events held outside the local supermarket. Participants included 11 women and 11 men, aged between 14 and 77, living in variable household compositions, and 11 were parents of school-aged children. The sample was designed to reflect the importance of family logistics in local mobility, and we interviewed several members of the same family (partners and/or their children) in three cases. Participants lived in different neighbourhood types, from dense housing clusters to sparsely populated areas, from remote locations to accessible neighbourhoods closer to the fast road system. To investigate how people experience and practice FLM mobilities under the restrictions of the COVID-19 pandemic, we combined individual online interviews with on-site visual ethnography. In our online semi-structured interviews of around an hour each, we focused on participants' biographical narratives, residential trajectories, perceptions of their neighbourhood, and existing mobility and sharing practices that inform their everyday mobilities. We also enquired about potential improvements and future developments participants envisioned for their immediate environment and mobilities. This provided insight into participants' daily mobilities, their motivations and social context for modal choice, as well as their representations of different forms of (future) mobility. Prompting participants with the question of automation moreover provided the opportunity to confront diverging narratives of future mobilities, thereby offering a way to surface frictions in everyday mobilities and proximate spaces. These video-recorded interviews were supported with shared maps that participants could draw on, which allowed us to situate mobility practices, meanings and projections within the geographic space and to gain insights into routine routes and meaningful connections between different places. These elements were further investigated through an innovative research method we developed to explore participants' practices in more depth: the two-car convoy (Brodersen 2021). Instead of 'in-car video ethnography' (Pink, Fors, and Glöss 2019), or fixed video-cameras installed inside cars (Laurier 2013), our method involved participants driving their own cars to guide two researchers who followed them in a second car. Participants chose the starting point and determined the routes in relation to relevant places and roads identified through a set of initial questions. While driving, participants and researchers communicated via mobile phone, and the encounter was video- and audio-recorded. In this neighbourhood, which is too dispersed to be walkable and where car travel is the dominant practice, this technique encouraged participants to identify the spaces most relevant to (their) mobility, and to explicitly share their situated practices and embodied knowledge. This innovative combination of traditional interview, online, and mobile go-along methods (see Merriman 2014) allowed us to learn about the layout and meanings of local spaces and how existing and imagined mobility decision-making practices were embedded in the socio-spatial context.
Living the 'first and last mile' We now draw on our ethnographic findings to demonstrate three key points. First, by showing how participants solve their everyday mobilities and examining the role played by automobility in these practices, we complicate the notion that car travel and ownership compensate for public transport accessibility in the FLM. Second, we question the FLM as a unit of space and, third, as a unit of time, by exploring how participants' mobilities unfold at the beginning and end of their everyday journeys and around their homes.

Living in Monsite - complicating automobilities Most of the participating households in Monsite rely on two or more cars - privately owned or company cars, often a combination of both - to ensure their everyday mobilities. In their neighbourhoods, using several cars per household for - at first glance - monomodal car travel appears to be the norm. While this is in part explained by ease, flexibility and greater comfort (see also Kent 2015), the motivations, rationalities and actual practices of automobility were more complex. In participants' narratives, family logistics are key in justifying car use and ownership. Gemma (49), for example, who lives in the area with her husband and two pre-adolescent children, has experimented with having one car or no car while living in the area. Having a connection to the public transport system was initially a key factor in choosing their house when they moved to the area several years ago. They are reluctantly leasing a second vehicle in addition to her company car to help solve their 'puzzle' of everyday family logistics, but Gemma and her husband would much rather avoid the complications and responsibilities that come with car ownership altogether. As for Gemma, who went through stages of having no car to needing several, a recurring justification for multiple car ownership is the need to drive children and other dependents to different destinations, particularly extracurricular activities inside and outside the area. However, trips made entirely with private cars often combine diverse activities and journeys. For many participants, some of these combined mobilities serve as an argument for the car as a single transport mode. Simon's mother (47) uses the car to drop off her son at school on her way to taking the dog to day care before work, and Felix (44) stops at the small supermarket on the main road on his way back from work. Gemma picks up parcels at the local gas station on the way to the football field or when she does the shopping on her way home from work. This 'trip chaining' (Scheiner and Holz-Rau 2017) as a strategy to handle the complexities of family logistics and care work also justifies the use of the car. More than a 'mobility of care' (Sánchez de Madariaga 2016), these combined trips join various layers of activity, social spheres and uses of local space. Travelling by car is a way not only of adjusting to the demands of different household members but also of accounting for the serendipity of moving through everyday life spaces.
Even households with several cars routinely combine different modes of transport in creative and adaptive ways to 'solve the everyday puzzle' and to cope with the changing challenges of daily routines, seasons and environments.Residents use commuter parking spaces, but also more informal and less infrastructure-driven hubs for their coordination.Antonia (42), who lives further from the main road, drives her kids to school and drops off their bikes at the nearest bus stop on the way on days where they finish early, so they can take a tram and bus back and cycle the last 3 km home along a dirt road.In the winter, she prefers to coordinate with her husband to either pick up the kids at school or at the bus stop.These specific combinations of multimodality, coordination and adaptation serve to fit various objectives and constraints and can be part of collaboration within families and communities.Amanda (42) drops off her daughter's team-mates at the bus stop after practice so they can walk or bike home from there, but when it is dark, she tends to drive them all the way to their door.Olaf (45) and a few of his neighbours use a post-box at the bottom of his street as a collection point for children to drive to football practice together.Felix used to drop off his sons at day-care and the bus stop respectively on his way to work, then his older son would pick up his little brother when walking home from the bus stop after school.Additionally, participants also realised a number of their journeys without using their car, sometimes combining several other modes of transportation. As these examples suggest, the linear journey modelled on home-to-work commuter travel does not reflect the way (auto)mobility was understood, experienced or performed by participants in our research.Their journeys were overwhelmingly combined activities that were not limited to a single point to point journey that would take them outside the area.And while difficulties in reaching the nearest public transport stop were mentioned as part of the motivation for using the car, the journeys into which participants invested the most effort and planning were multiple short drives within the area or surrounding boroughs.Below, we show that the actual characteristics of this space within the first few kilometres around the home, its material qualities and meanings, influence how mobilities are organised and conceived of and complicates the notion of the FLM itself. The spatiality of the first and last mile What space counts as the FLM and what do we really find there?As noted earlier, recent uses of the FLM concept referring to commuter travel have defined this space as representing a deficit where transit gaps are found at the beginning and end of travel.In Monsite, this space can surpass the literal mile or kilometre; although the area is connected to the urban public transport system via an express bus route, the nearest bus stop can be more than 5 km away.The route from the main road is often up a steep uphill and winding road with little pedestrian space and low visibility.Navigating some of these sparingly lit and often unpaved streets requires intimate locally-accrued embodied knowledge, particularly in winter.This recalcitrant space makes common micro-mobilities challenging and often unlikely.Hence, this space exceeds the last mile or kilometre in scope and its particular qualities resist the most commonly proposed FLM 'solutions' while at the same time having considerable effects on everyday mobility practices. 
Single mode transport or multi-car ownership are in fact often motivated by both a lack of infrastructure and constraints posed by the built environment. For instance, Pernilla (42) lives along a busy road, about 4 km from the nearest school, which she deems too dangerous for her children to bike or walk along by themselves, given that there is no cycle or pedestrian path, little to no lighting, and utility vehicles driving at high speeds. This means that she is unable to take the bike and bus to work, as she would when her children had access to a school taxi, and unable to offer her children the desired autonomy: … if I could take the bicycle every day and the children could go … on their own to school, maybe we could have only one car. If … we had a bicycle path all the way to Torstorp and to Bigsby I could say we could manage with one car. While a lack of transit options is used to explain dependency on cars, it is, however, often the commuter-oriented linearity of transit options which is misaligned with everyday life. In Monsite, by contrast, social relations and trips serving the domestic sphere extend into the surrounding boroughs more often than towards the city: 'You can't really go with the tram horizontally or across, you can go towards the city centre' (Amanda). Thus, it is sometimes the structure of the organisation of travel, rather than the actual distance to travel hubs, that is decisive for participants. The immediate radius around participants' places of residence (be it a mile or 10 km) frequently concentrates numerous interconnected short-distance travels to key destinations; it is also dense with social relations, activities and meanings. In some close-knit housing clusters, children can roam more freely, and parents give accounts of their ideal image of children knocking on each other's doors to play. The immediate environment will contain 'favourite places' which are not always the most frequent destinations but carry special value. Rather than wishing for automated modes of leaving the locality, Monsite participants highlighted the need to improve the quality of local space, to make many trips redundant, improve safety and allow for more meaningful interactions; including needing more schools, shops and social spaces. As she guides us through the area, Lisa insists on showing us the places that function as central travel hubs in the area, which she perceives as 'boring' and lifeless, to demonstrate why she chooses to concentrate her activities elsewhere. Although local mobility - not the commute - is usually portrayed by participants as the most problematic and car-dependent, some, like Mats and Linda, stress the strain of commuting. However, they would rather develop their life locally than optimise their commute: Linda: 'We don't want to travel back and forward to [work]. Mats: We want to stay here. And work here; we are fed up with traffic and the stress in the morning, in the morning traffic it wears you down'. Improving local life thus appears to be a higher priority than improving transport options for many of those who already have the resources to ensure and maintain their mobility despite difficulties (Ollivro 2005), and critical for those who do not.
The temporality of the first and last mile In this section we critically examine the temporality of the FLM, at the outset and end of everyday journeys, to show what participants do within these times, and how they define and delimit them.For instance, the time that begins and concludes everyday journeys can be perceived as valuable time which, other than the cost of getting from A to B, is invested with multiple activities and serves a number of other functions.Amanda (42) works as a teacher in a school not far from the city centre.She lives with her husband, three adolescent children and an infant in a newly built house on top of a hill.When she travels to work, she takes the express bus to the urban hub closest to her work and then, instead of taking the tram that would take her almost directly to her destination, walks the last stretch up a slight slope towards school, repeating the same on her way back.To her, this time works as a decompression chamber, as a space in which to transition between social spheres and times (see also Hubert et al. 2005), between private and professional life.It enables her to fit exercise into her day and to wake up and wind down, listen to podcasts, reflect and anticipate.It is also the only time Amanda is alone.In contrast, Simon (13) explains during his interview how he coordinates with his friends via text message to make sure they are on the same bus so they can spend more time together. Rather than optimising the speed and efficiency with which she reaches her destination, Amanda intentionally builds a time-space into her day that is valuable in itself.This resonates with previous literature on the use of mobile times (Bissell 2010;Clayton, Jain, and Parkhurst 2016), and qualitative dimensions of mobility determining the preference of routes and modal choice rather than efficiency.This questions the idea of the FLM as a 'challenge' to overcome and repositions the beginnings and endings of journeys as part of a larger time ecology. Amanda's ritual also reflects the value attributed to active mobilities.Health and the joy of physical exercise influence the perception of modal choice, routes and the experience of everyday mobilities, often more so than considerations of efficiency and optimisation.Encouraging exercise and autonomy for children becomes an essential part of parenting and is mentioned as a reason for favouring cycling and walking, for expressing guilt or regret over single mode car travel but also as a reason to express doubts about the use of automated, motorised, on demand mobility services.During interviews, participants conceded that access to an automated vehicle or service for their children might carry advantages in terms of household efficiency but did not necessarily fulfill their desires for their children's mobilities.Felix insists that rather than having an automated on-demand service take his children wherever they need to go, he 'would prefer that my kids are walking, that they are moving and they take the bike [ … ]'. 
Not only do these types of mobilities carry value - they also question the idea of the 'last mile' as travel time and effort to be eliminated. Felix's concerns about his children's activity also reflect a key dimension of modal choice and vehicle ownership: the way parenting is conceived of in relation to mobility. In fact, although ferrying children to activities is mentioned as one of the main reasons for multiple car ownership in interviews, and participants claim that being relieved of this duty would be a priority for improving their daily mobility routines, driving children is an elemental part of parenting in Monsite. While there are diverging perceptions about whether the time spent together in the car can be defined as 'quality time', providing mobility is a way of participating not only in children's lives but also in the local community. In Monsite, this is linked to the way life cycles and residential trajectories are connected: as people move in from elsewhere to acquire property, children are a key factor of integration, and people with very young or adult children sense the difference. Hans, whose children are grown, notes he is no longer as integrated in the neighbourhood as he was, while Paul, whose son is not yet in pre-school, says he has little to no activities in the area. As part of local sociality (Pink 2008), driving children or otherwise dependent people is also a key site where practices of sharing are built, negotiated and considered. While 'sharing' services are frequently proposed as 'solutions' for closing gaps in transport systems (Schaller 2021), especially in relation to SAVs (Ohnemus and Perl 2016; Moorthy et al. 2017; Huang et al. 2021), varied and complex sharing practices are already established within the area, and these are inextricably intertwined with relationships and identities within groups, organisations and family networks. Although participants did mention the potential to expand some of these sharing practices, the group of people with whom participants are willing to share is not limitless and does not necessarily accommodate anonymity; open and commercial sharing networks are unlikely to appropriately replace the existing sharing practices that are embedded in meaningful relationships, which they help to consolidate. Like driving itself, sharing the driving task appears as a part of parenting, but it also functions as a way of performing parenthood that extends beyond 'cultures of mothering' (Dowling 2000) or even processes of 'doing family' or reproductive mobilities that have been previously identified (Waitt and Harada 2016). Beyond this performative quality of mobility (potential), owning a car means being able to participate in a common effort and, by extension, in relevant local groups. Amanda shares the driving to track and field practice with other parents on the team: We have a super text group - often the night before, I write "I will drive to the practise tomorrow, I will pick up at that time at the petrol station" and then the other parents write "yeah he will go with you". There's not the problem that no one wants to drive, it's the other way around, so it's a good group, everybody wants to help out.
For children and young adults themselves, being able to drive and having access to different vehicles is also a key part of their integration and the continued construction of their identity locally. The coordination of mobilities and access to vehicles is a key site of sharing within families and groups. Eva (16) borrows her father's moped since she has been allowed to drive - this not only spares her the difficult 10 km bike ride back home after a taxing sports practice, it also gives her the possibility to give her sister a lift, relieving her parents of a lot of driving duties. For Oskar (16), his moped is central to socializing 'on the move', since he is part of a group of a dozen teenagers cruising the area on motorized two-wheelers for fun. Thus, times around the end points of mobility become extended when integrated into wider sets of activities and relationships. When the act of driving equates with acts of caring, the functional efficiency of point-to-point mobility in the last stretch of a journey as a time period can become secondary to surrounding practices and combined mobilities. The frictions associated with mobility also extend well beyond the actual travel time. A substantial part of producing mobilities is the time spent preparing and coordinating these mobilities, not only in the context of shared driving, but as part of everyday routines. Coordination and anticipation are a large part of maintaining both individual and collective mobility potential, yet this effort is usually not equally distributed within and among households and is often gendered (Gilow 2020). Situating the everyday experience of mobilities at the individual level, as do the FLM concepts analysed above, locates mobility coordination primarily inside the household unit, which reproduces existing inequalities. It moreover obscures the limits of automation as a solution for locally situated mobilities because it does not acknowledge these inequalities.

Automating the first and last mile? Our ethnographic research revealed how the temporal and spatial categories of the efficiency-oriented FLM are misaligned with socially organised locality-based mobilities. In this section we draw further on our ethnographic findings to show, moreover, how locality-based mobilities, which are contingent on and responsive to the complex needs of everyday life, resist simplified, universal automated FLM solutions. When prompted to imagine improvements to everyday mobilities and presented with the possibility of automation, our participants considered diverse possibilities, many of which did not involve automation, and pointed out the limitations of automation in existing mobility practices and spaces. As we explain, their experiences and imaginaries of mobility furthermore suggested that automation may be missing the point of future mobility issues.
Situated mobility and local ways of knowing Our ethnography showed that rather than seeking to rationalise their FLM mobility, participants had acquired locally situated knowledge and mobility competencies.The experience and expertise associated with these competences subsequently guided participants' anticipations of automated mobility solutions.Two central dimensions of situated mobility that emerged from our ethnographic analysis are key for understanding the social dimensions of FLM mobility: first, mobility is continually learned and adapted in relation to locally specific materialities and relationships, as participants incrementally developed and coordinated shared local social routines and practices. Second, the resulting mobility competence plays a key part in the human agency and autonomy that underpins locally situated mobilities.These mobilities are situated in that they are inseparable from local materialities, timelines, relationships and networks of heterogeneous actors.Mobility competence, then, is situated in the socio-material time-spaces they evolved in conjunction with and, as such, are not universally scalable and transferable.However, while imposing a limit to automation, this situatedness does not limit the competence itself.Rather, it is through this situated learning, knowing and sharing that mobilities are experienced as carrying sense, social meaning, and collective agency.Dominant visions of automated FLM mobilities are not aligned with these already-existing situated mobility competencies. For example, the particularities of local space and its recalcitrant nature were cited as reasons why residents anticipated automation to be difficult and doubted the viability of a systematic, ubiquitous AV system: I just have a hard time seeing how self-driving cars would work in real life.I would want to know the technology behind, how it works, if unpredictable things happen around the car.And if you would go on a tiny road, like the last two kilometres to the lake where I like to go … it's looking out for animals, since it's in the forest.And then also driving up the steep hills with the tiny stones in the ground I need to make sure that I can drive up safely, and not having the car getting out of my control and sliding down the hill again.(Emma,20) Beyond simple practical challenges, the anticipated difficulties Emma mentions in implementing AVs in relation to the material specificities of their local environment stress the value placed on locally situated mobility competence and learned ways of inhabiting local space.During the two-car convoys, these elements were integral to how participants guided researchers through the area; while driving, they anticipated narrow spaces and gave instructions on how to let other drivers pass or a change in road surface or an upcoming interruption in mobile networks and they instructed us on how to interact with other drivers on roads too narrow for two cars to pass. 
Unmapped roadblocks, shortcuts and alternative routes, gravel roads, potholes, sharp turns and abundant vegetation do not lend themselves particularly well to automation. At the same time, they confirm participants' perception that their area is neglected in terms of urban development and infrastructure. Appropriating the layout of local roads, developing knowledge about minute difficulties and informal solutions, and navigating the routines and language of sharing these spaces with other users is part of inhabiting the area - of belonging. It is also part of the specific agency associated with mobility. The value placed on this intimate and incorporated knowledge thus extends beyond more functional concepts such as 'motility' (Kaufmann, Bergman, and Joye 2004) and is not as such replaceable through the delegation of driving to automated technologies, services and systems. Participants also placed significant value in autonomy - not only the ability to access destinations and services but also a mobile coming of age and a performative quality of mobility potentials. Participants do not cite a lack of information about mobility options as the central concern in their everyday mobilities; on the contrary, displaying a mastery of the different mobility systems, the particularities of local space and their interconnections was a way for people to display appreciation and appropriation of their local environment. For young people, owning and controlling their own vehicles was itself valued. For instance, when we met Antonia for a drive-along several months after her online interview, her daughter had, in the meantime, gained a licence to drive her father's moped. As she explained: The moped has been like a gift from heaven because it gives her so much independence. And she really likes it. It is also great for football because if you have two hours of really intense training it is kind of hard to jump on a bike and go 7 km. Even when prompted with the possibility of a 'shared' automated service that would provide the same level of access and flexibility, teenagers and young adults insisted on the importance of owning and driving a vehicle. The association of the car with freedom and autonomy for young people, especially in intermediary and low-density spaces with limited transit coverage, has been well established (Drevon, Ravalet, and Kaufmann 2019; Demoli and Lannoy 2019). The last mile, then, is also the site of conflicting versions of autonomy. On the one hand, automation is poised to promise greater autonomy to those who are excluded from driving or unable to access relevant destinations and activities by their own means. On the other hand, embodied and shared knowledge and practice of locally specific space, and the interpersonal negotiation of ownership and sharing, are part of what is considered to be a mode of autonomy that is central to the agency of the mobile subject.
Situated mobility values and local socialities In addition to the local material situatedness of everyday mobilities, participants' practices and perceptions of sharing mobilities were inextricable from the relationships, roles and groups that they contribute to creating and consolidating - for instance, when driving and sharing the responsibility for driving become local practices of parenting, as described in the previous section. The situated knowledge and practices of inhabiting local space also had a social component, as our examples of sharing mobility experiences demonstrate. Oskar (16), for example, provided a detailed map of the most fun roads to cruise along with his friends on their mopeds, pointing out long passes for acceleration, challenging bends to negotiate, or empty sections to practice wheelies with his friends. The frictions that coordinating and planning such socialities (Pink 2008) require are not problems that could be solved by the FLM applications proposed for automated technologies and services. Rather, practices and relationships of sociality and care shape people's everyday mobilities (see also Balcom Raleigh, Kirveennummi, and Sari Puustinen 2020) and as such fundamentally undermine the notion that a linear and automated FLM solution could deliver the social needs that they accommodate. Sociality, here, refers to the ways in which mobility functions not only as a way to connect spaces but also to produce a social dimension of 'local interconnectedness' (Pink 2008). A 'mid-range conceptualisation' (Amit 2015), sociality here helps us to think through the processual and contingent (see Long and Moore 2012) forms of belonging, relating, experience and engagement by which last mile mobilities produce collective agency and shared local ways of knowing. The 'last mile' is an opportunity to be 'brought together', but it is also a social, rather than a spatio-temporal, concept; it is socially produced and contingent upon continuous actualisation. This implies, first of all, that it is subject to change and, second, that it inherently resists the type of change through predictability that automation requires, where, in order for innovation to be dynamic, its socio-spatial counterparts must be static. Moreover, the values associated with the mobility space-times as constituted by participants meant that mobilities that might be framed as FLM mobilities were in fact experienced as personal time and moments of transition between social temporalities. Participants favoured the improvement of local services and meeting places over technology-based services to connect different modes of transportation, and they valued opportunities for exercise rather than striving towards greater comfort and efficiency. Coordinating everyday mobilities is part of how people perform meaningful social roles; it reinforces locally situated relationships, affiliations and meanings of place, but also frictions and inequalities. Hence, there are clear limits regarding the role automation could play in improving everyday mobilities that are inherently social. These insights invite us to reconsider where the 'challenges' in local mobilities lie and suggest that seeking to optimise the time people individually spend in the 'last mile' of everyday travel is not necessarily the right solution. Apprehending the extended time-spaces of mobility as an always contingent and contested 'commons' (Nikolaeva et al. 2019), we instead propose to focus on the situated socialities and values that everyday mobilities rely on.
Prompting imaginaries about automated futures with participants was illustrative in that it surfaced the local frictions and the social and material relations that characterise existing mobilities.But rather than providing indications for the implementation of FLM automation it revealed the tensions that arise where there is misalignment between everyday life realities and dominant narratives, ideals and interests in automated mobility transformations.Hyperlocal infrastructure, contingencies in public service and the quality of local space all play major roles in determining modal choice and everyday logistics. As we have demonstrated, much of what is most central to everyday mobilities cannot be substituted or reproduced with automation.Moreover, focusing on automation risks detracting attention from mobility related questions relevant to participants' situated mobility competencies, values and socialities, and to the actual time-spaces they travel in and through. Conclusion In this article we have suggested re-thinking the FLM 'challenge' through the lens of the situated everyday socialities, competencies and materialities of close-to-home mobilities.The idea of automating the FLM implies a challenge which requires a technical (rather than social) solution, and it assumes mobility is an individual activity rather than collective, shared and continually negotiated.Instead, we emphasised the need for attention to: what everyday transport means to people; the social learning and coordination it entails; the particular terrains and infrastructures it navigates; and people's expectations of how automated mobility solutions should support their everyday logistics. Our findings both question the universal applicability of automated and standardised mobility 'solutions' and the central place given to technology-driven, individualised versions of automation in future visions of FLM travel, because they fail to account for the most salient factors in participants' discourses around everyday logistics and their modal choices.Understanding automation as a generalised solution and the last mile as a homogenous problem distracts attention from the factors that support participants' situated production of mobility and could at worst result in adding another layer of imperfect infrastructure to an already existing mesh of flawed physical infrastructure and mobility services. In a context where many of the costs and challenges of mobility are not just related to commuting but to the task of realizing everyday household logistics, we have shown that mobility extends beyond the act of driving and encompasses a range of sharing and coordination efforts.These mobility practices are entrenched with care work in various types of relationship and more generally serve as a vector of social integration (while also reproducing existing inequalities).Our fieldwork hence serves as a reminder that improving mobility near home is not simply a matter of improving linear access to public transport options.Automation might relieve people from driving, however, it does not replace the act of caring, risk mitigation and social integration that is implied in the production of mobilities. 
Moreover, dominant images of automated mobilities frequently focus on the individual commuter, travelling in a controlled interior environment, sheltered from their surroundings (Manderscheid 2014). These imaginaries reproduce dominant images of mobility and commuters and represent gendered and class-based mobility norms. They are informed by the notion that the benefits of automation are self-evident and that the barriers to these benefits lie in securing its adoption and the trust of potential users. These understandings are equally written into dominant narratives of the FLM, which seek to provide technical solutions that have the potential to modify people's behaviour. Posing the last mile as a central issue demanding a technological solution reiterates these exclusive images of mobility. It moreover misses the point of where the friction - and future imagination - in everyday mobility lies. Understanding the dynamics of the FLMs of any journey requires acknowledging the complexity of mobilities. It involves analysing how different mobilities connect to each other and to other elements of the material and social environment, rather than focusing on the connectivity of specific geographic locations to a particular transport network or infrastructure. Techno-solutionist responses to the so-called FLM challenge, where the technologies would solve transport problems for people within the mile around their homes, turn our attention away from how people already manage their everyday mobility logistics and contingencies. Going forward, we therefore suggest a research agenda focused on emerging practices and future imaginaries that accounts for the situated, social and uncomfortable complexities of everyday mobilities and emancipates itself from technology-driven, universalist scalability. The focus we propose on the social and material time-spaces constituted by everyday mobility does not solve the FLM question. Instead, it offers a new starting point from which to consider how automated technologies could enable already existing socio-spatial infrastructures to develop in sustainable directions. Note 1 'Automated' and 'autonomous' vehicles are used almost synonymously throughout large parts of the literature, irrespective of the fact that even with high levels of automation, most vehicles and mobility systems are not strictly autonomous from human control. In this paper, we focus on the wider notion of automation as a general principle for addressing mobility objectives beyond AVs (Jones et al. 2023).
Ion Microprobe Study of the Polarization Quenching Techniques in Single Crystal Diamond Radiation Detectors Synthetic single crystal diamond grown using the chemical vapor deposition technique constitutes an extraordinary candidate material for monitoring radiation in extreme environments. However, under certain conditions, a progressive creation of space charge regions within the crystal can lead to the deterioration of charge collection efficiency. This phenomenon is called polarization and represents one of the major drawbacks associated with using this type of device. In this study, we explore different techniques to mitigate the degradation of signal due to polarization. For this purpose, two different diamond detectors are characterized by the ion beam-induced charge technique using a nuclear microprobe, which utilizes MeV energy ions of different penetration depths to probe charge transport in the detectors. The effect of polarization is analyzed by turning off the bias applied to the detector during continuous or discontinuous irradiation, and also by alternating bias polarity. In addition, the beneficial influence of temperature in reducing the effect of polarization is also observed. Finally, the effect of illuminating the detector with light is also measured. Our experimental results indicate that heating a detector, or turning off the bias and then reapplying it during continuous irradiation, can be used as satisfactory methods for recovering a CCE value close to that of the pre-polarized state. In damaged regions, illumination with white light can be used as a standard method to suppress the strength of polarization induced by holes. Introduction Semiconductor particle detectors based on silicon are essential in most modern high energy physics experiments, synchrotron imaging applications, etc. [1,2]. However, new frontiers in physics, such as those found in nuclear reactors or in outer space, will exceed the limits of, or even disable, some radiation diagnostics [3]. Finding an alternative to current detection systems is crucial for use in the working conditions of new hostile environments with harsh radiation and high temperature [4]. As an alternative to silicon, natural diamonds have been considered an optimal material for improving the performance of radiation monitors in such environments, but, for years, their cost and availability have limited their use as radiation detectors [5]. Synthetic single crystal diamonds grown using the chemical vapor deposition (CVD) technique [6] constitute excellent candidates for such operations due to their unique electronic properties, i.e., larger thermal conductivity, resistivity and band gap (≈5.5 eV), which assure low detector leakage current even in high-temperature environments. The large atom displacement energy (≈43 eV) allows long-term operation in harsh environments, providing excellent resistance to radiation [7]. The charge transport properties, with higher free-carrier mobilities, result in faster responses to ionizing radiation [8]. All these properties, together with the material quality (a good intrinsic diamond has an extremely low concentration of carriers in the conduction band) and lower cost, have led to the use of CVD diamonds for electronic applications.
Nowadays, diamonds are being used as tracking detectors at the Large Hadron Collider (LHC) of CERN [9], dosimeters in medical applications for radiation therapy [10,11], etc. The capability of using this type of detector as a fast-ion loss detector in ITER [12] and as a beam monitoring sensor at the SuperKEKB collider is currently being investigated [13]. The feasibility of using sc-CVDs as particle detectors relies on the principle of signal induction on metal electrodes by moving charges [14]. When a charged particle (electrons, ions, etc.) or incoming radiation (neutrons, x-rays, γ-rays, etc.) enters the sensitive volume of a diamond sample, the radiation deposits energy and ionizes the medium as it passes, generating a certain number of electron-hole pairs, known as free carriers. For diamond, the average energy to produce one electron-hole pair is 13.6 eV. These electrons and holes start to move under the presence of the applied electric field, until they are collected by the electrodes. This motion of charge carriers induces a measurable current at the readout electrode according to the Shockley-Ramo theorem [15,16], and this current can be processed with appropriate pulse processing electronics to obtain the pulse height spectrum of the radiation, therefore allowing the spectroscopic characterization of the incoming radiation. In an ideal detector, all the charge is collected, and the pulse height spectra appear as the full energy peak. However, in wide band gap materials [17], such as single-crystal diamonds, the free carriers can be captured (Figure 1a) during their motion by electrically active traps, such as intrinsic defects in the lattice or defects induced by radiation [18]. The filling of the traps and the trapping/de-trapping rates of many charge carriers produce an asymmetrical space charge distribution in the sensitive volume of a diamond crystal [19]. The net effect of this asymmetry is the generation of an internal electric field, which is opposite to the electric field induced by the bias applied to the detector. A reduction in the electric field increases the probability of free carrier recombination, which can produce progressive deterioration of the charge collection efficiency (CCE) from the moment the detector is exposed to radiation. This phenomenon is known as polarization (Figure 1b) and constitutes a serious disadvantage for natural and synthetic diamonds used in many experimental applications [20,21]. Some authors have demonstrated that the ion beam-induced polarization in high resistivity materials such as CdZnTe and CdTe is a complex process dependent on trap density, re-emission properties (de-trapping) of the free carrier from the trap levels, applied voltage, deposited energy, etc. [22]. In addition, some authors have reported a more powerful polarization with the level of damage accumulated in the detector. Regarding polarization, two effects must be considered: (1) bulk polarization, trapping of the charge in the traps in the bulk of the material, and (2) surface polarization, trapping of the charge at the interface between the material and the metallic contact. Surface polarization strongly depends on the type of the contact, connection, etc. Usually, it is visible when the beam is traversing through the whole detector, and therefore bulk polarization is almost negligible in such a setup. Bulk polarization occurs in deep traps of the material (around the middle of the band gap), where the charge cannot be released thermally.
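As a rough illustration of the numbers discussed above, the short Python sketch below (not from the paper) converts the deposited ion energy into the number of generated electron-hole pairs and the corresponding charge, and defines the CCE as the ratio of collected to generated charge; only the 13.6 eV per pair value is taken from the text, everything else is illustrative.

```python
# Illustrative sketch (not from the paper): converting deposited ion energy into
# generated charge in diamond and defining CCE, using the 13.6 eV/pair value quoted
# in the text. The ion energies below match the beams used in this study.

ELEMENTARY_CHARGE_C = 1.602e-19   # C
PAIR_CREATION_ENERGY_EV = 13.6    # average energy per e-h pair in diamond (from the text)

def generated_charge(deposited_energy_ev: float) -> float:
    """Charge (in coulombs) created by an ion that deposits its full energy."""
    n_pairs = deposited_energy_ev / PAIR_CREATION_ENERGY_EV
    return n_pairs * ELEMENTARY_CHARGE_C

def cce(collected_charge_c: float, deposited_energy_ev: float) -> float:
    """Charge collection efficiency: collected charge over generated charge."""
    return collected_charge_c / generated_charge(deposited_energy_ev)

for label, energy_ev in [("2 MeV H+", 2.0e6), ("3 MeV He2+", 3.0e6), ("4 MeV C3+", 4.0e6)]:
    q_fc = generated_charge(energy_ev) * 1e15  # femtocoulombs
    print(f"{label}: ~{energy_ev / PAIR_CREATION_ENERGY_EV:.2e} e-h pairs, ~{q_fc:.1f} fC")
```

For a fully stopped 4 MeV carbon ion this corresponds to roughly 3 × 10⁵ electron-hole pairs, i.e., on the order of 50 fC of generated charge, which is the reference against which the induced signal is compared.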
The literature exploring the polarization phenomena in diamond is scarce and mainly explores the deterioration of spectroscopy and charge transport properties using the ion beam-induced charge (IBIC) technique and the transient current technique (TCT) performed with radiation emissions such as alpha and beta sources [23]. These studies have all reported a progressive deterioration in the spectroscopic properties of the analyzed devices induced by the polarization; however, the experimental conditions and the physics underlying the polarization mechanisms are not well understood. The IBIC technique is an analytical tool based on the creation of electrons/holes by ionizing radiation using single ions [24]. The IBIC technique performed with a nuclear microprobe gives control over the type of ionizing radiation: it allows the selection of the ion species and energy, the use of a focused ion beam with micrometer dimensions, and the possibility of raster scanning selected areas with excellent lateral resolution. The IBIC is the most frequently used technique for studying spectroscopy and transport properties in semiconductor detectors. In our previous work, we proved that the IBIC technique [25,26] is an excellent tool for imaging and monitoring the influence of polarization on detector performance. In particular, we showed that short-range ion microbeams could be used very successfully to induce instantaneous polarization even in pristine regions [27]. This is due to the fact that bulk polarization occurs in the regions of the detector where only one carrier exists. Therefore, short ion probes are an ideal tool for polarizing the material. All the physical mechanisms that can break or alter this asymmetry are good candidates as reliable methods to depolarize a detector and to recover the nominal electric field. In normal conditions, the natural release of this charge can happen in hours or even days; however, in the literature, different techniques of releasing the charge from such traps have been proposed for wide band gap semiconductors. The strength and the polarity of the applied electric field have been reported as key parameters associated with the behavior of the charge carrier transport during polarization. Holmes et al. [28] proposed a method to depolarize a metal-insulator-metal (MIM)-structured detector which used periodic forward bias pulses, where recombination took place by allowing the charge of opposite polarity to neutralize the trapped charge. As soon as both carriers were in the volume, there was a high probability that the opposite charge would be attracted to the trapped charge and recombine with it. The influence of turning off the bias during irradiation with the same probe that caused polarization has been proposed as one of the easiest ways to depolarize the material [29]. Manfredotti et al. [30] demonstrated that exposing detectors to light (optical excitation) could recover the signal to its initial state, and showed that light had some influence on trap levels and charge carriers. Different studies have examined the effect of increasing the temperature on the performance of diamond detectors; however, the available experimental results are contradictory.
Tanimura et al. [31] observed a remarkable decrease in signal output with an increase in detector temperature related to polarization. Conversely, other authors [32] have reported a decrease in polarization strength when a detector was heated, due to thermal excitation, which increased the thermal de-trapping of carriers from traps filled by radiation and therefore reduced the accumulated space charge. Andreo et al. [33] did not observe any effect on polarization due to temperature. A complete description exploring all these techniques applied to a diamond sensor has not yet been reported in detail. In this study, we address the development of approaches to suppress or attenuate this phenomenon: the IBIC technique was used to study the properties of the polarization phenomena under the influence of these quenching techniques in different sc-CVD diamond detectors. Different settings of the ion beam species (H, C, and He), energies (tuned to explore from shallow to intermediate-range beams), pulsed bias, operating temperature, and illumination conditions were studied. To perform these measurements, a new upgrade of the Ruđer Bošković Institute (RBI) [34] microprobe (including a new heating system, a high voltage (HV) alternating unit, and a light source) was carried out. This paper is organized as follows: In the Introduction, we explain the drawback of the polarization phenomena when using diamond crystal as a radiation detector; in Section 2, we describe methods to suppress this phenomenon, the detectors, and the experimental setup; in Section 3, we present the main experimental results and a brief interpretation; and finally, in Conclusions, we present the main conclusions related to determining the most efficient method to depolarize. Materials and Methods Measurements were performed on a set of two detectors of different thicknesses to study the proposed quenching techniques on the behavior of the polarization phenomena. Electronic grade single-crystal CVD diamonds used for the construction of the detectors were provided by Element Six (Didcot, UK) [35]. The crystals have dimensions of 3 × 3 mm² with a thickness of 65 µm (detector A) and 1.5 × 1.5 mm² with a thickness of 500 µm (detector B). The nominal impurity concentration of both crystals is below 5 ppb for nitrogen and 1 ppb for boron. The front and back surfaces of the crystals were coated with tungsten electrodes of 200 nm thickness using the sputtering evaporation technique [36]. The coated samples were mounted on specially designed ceramic printed circuit boards (PCBs) with gold contacts, which could be used even at high temperatures. The experiments to irradiate the detectors were performed using the ion beam microprobe line [37] at the RBI in Zagreb, using ions from the 6 MV tandem Van de Graaff accelerator (High Voltage Engineering Corporation, Wakefield, MA, USA) [38]. The CCE was measured with proton, helium, and carbon beams focused to a micrometer size and with count rates of ≈1 kcps. Focused ions are particularly good for studying polarization since it is possible to tune the ion beam energy (fixed range), which allows one to probe the charge transport properties of detectors from shallow to intermediate depths with excellent spatial resolution [39]. For the performance of the IBIC measurements, the detectors were mounted inside the vacuum chamber, perpendicular to the ion beam, and were irradiated through the front biased electrode, while the backside electrode was grounded.
The IBIC pulses were processed using the standard electronic chain for nuclear spectroscopy [40,41]. Lower electric fields (E < 0.5 V/µm) were used to enhance polarization due to the fact that, at higher electric fields, the charge de-trapping rate increases and thus reduces polarization-induced problems [27]. For thermal excitation, a new ceramic heater [42] with a temperature control system was used. Despite the fact that the high thermal conductivity of sc-CVD crystals [43] assures a quick thermal equilibrium between the heater and the diamond, all the measurements were performed 5 min after the thermocouple stabilized to the operation temperature (±5 °C). For the experiments where the applied bias was switched on/off, a standard detector bias supply source was manually switched during the irradiation. Due to the internal capacitance and resistance of the diamond detector, the electric field is not re-established immediately after the bias is turned on, and also some time is needed for the voltage to reach a zero value when the source is switched off. For this reason, the durations of these cycles were selected to be longer than the characteristic times related to the biasing electronics used. For the alternating bias experiment, an HV device was constructed in-house to switch the bias during detector operation. This device was based on the fast high-voltage push-pull switch system supplied by Behlke [44]. Two DC power supplies (one with positive voltage and the other one with negative voltage) were connected to the device and the output signal was applied to the detector as a periodic rectangular voltage pulse between the two applied biases. The system could control and vary the lengths of the pulses for each polarity, and the duty cycles were controlled by a graphical interface managed by the acquisition software Spector [45]. Finally, for testing the influence of white light on the charge transport during polarization, a standard light source to illuminate the sample during the irradiation was installed in one dedicated port of the vacuum chamber. The spectral emission of the light source was measured to have a maximum emission centered around 592 nm. For the I-V acquisition, a Keithley Picoammeter with a programmable voltage source [46] was used. A schematic diagram of the experimental setup outlining the new upgrades and the spectroscopy acquisition chain is shown in Figure 2. Experiments at different temperatures were performed by the IBIC characterization and mapping of the virgin and previously irradiated areas of the 500 µm diamond detector. Short penetrating beams of 3 MeV He²⁺ ions with a range of ≈5.8 µm in diamond were used. During these sets of experiments, the bias applied to the detector was ±225 V (E = ±0.45 V/µm). Before irradiation, the detector was heated to each of the selected temperatures: 24 °C (room temperature), 95 °C, and 175 °C. For switching the bias on/off and for the alternating bias experiments, the 65 µm detector was tested by means of a 4 MeV C³⁺ microbeam. Since the range of these ions is only ≈2.0 µm, they can also be considered to be shallow probing ions. During these experiments, the bias was set to ±12 V (E = ±0.18 V/µm) and several different time periods, where the bias was applied or interrupted, were carried out. The IBIC tests presented in this study were done with 30 s duration bias on/off cycles, while the ion beam irradiation was kept either continuous or discontinuous (off when the bias is off). In the case of the alternating bias experiments, two different duty cycles were used, as listed in Table 1.
Table 1. Experimental configuration for the alternating bias tests.
Test    V_BIAS (V)    T− (s)    T+ (s)
A       ±12           20        30
B       ±30           20        40
T− refers to the period where the applied bias has negative polarity. T+ refers to the period where the applied bias has positive polarity.
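To make the timing of the alternating-bias tests concrete, the short sketch below generates the rectangular bias waveform defined by the Table 1 duty cycles and checks the nominal field E = V/d for the 65 µm detector; this is an illustration only (not the in-house switching unit or the Spector control software), and the helper names are hypothetical.

```python
# Illustrative sketch (assumption, not the authors' control code): the rectangular
# alternating-bias waveform defined by the Table 1 duty cycles, plus a check of the
# nominal electric field E = V / d for the 65 um detector.

THICKNESS_UM = 65.0  # 65 um detector used for the bias on/off and alternating-bias tests

def field_v_per_um(bias_v: float, thickness_um: float = THICKNESS_UM) -> float:
    """Nominal (undistorted) electric field of a planar detector, E = V / d."""
    return bias_v / thickness_um

def alternating_bias(t_s: float, v_bias: float, t_neg: float, t_pos: float) -> float:
    """Bias applied at time t for one Table 1 test; each period starts with negative polarity."""
    phase = t_s % (t_neg + t_pos)
    return -v_bias if phase < t_neg else +v_bias

# Table 1: test A = (±12 V, T- = 20 s, T+ = 30 s), test B = (±30 V, T- = 20 s, T+ = 40 s)
for name, (v, t_neg, t_pos) in {"A": (12, 20, 30), "B": (30, 20, 40)}.items():
    print(f"Test {name}: |E| = {field_v_per_um(v):.2f} V/um")
    print("  bias at t = 0, 25, 55 s:", [alternating_bias(t, v, t_neg, t_pos) for t in (0, 25, 55)])
```

For test A this reproduces the ±0.18 V/µm quoted in the text, and for ±30 V the ±0.46 V/µm used later for the optical-excitation measurements.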
The use of shallow probing beams allows one to study the charge transport properties induced by only one type of free carrier [47]. In our experimental configuration, the induced current was mainly governed by holes when a positive polarity was applied, while negative polarity induced currents that were governed mainly by electron motion. In our previous work, we observed that a strong polarization effect was associated with radiation-induced damage. Furthermore, in damaged regions, we observed a certain influence of light on the evolution of the CCE when polarization was present. To explore the influence of optical excitation on polarization in more detail, the 65 µm diamond detector was damaged by means of a traversing 5 MeV proton microbeam. For this purpose, 3 areas (scanned area of ≈100 × 100 µm²) were irradiated on the microprobe line at different fluences ranging from 8.6 × 10¹² to 1.7 × 10¹⁴ ions/cm². Subsequently, the IBIC probing of the entire sample was carried out using a 2 MeV H⁺ beam (intermediate range). The generated charge distribution of the probing ion beam (PIB) and the vacancy profile of the damaging ion beam (DIB) were obtained with SRIM [48]. The vacancy profile of the damaging ion beam showed a homogeneous production of point defects, with an average of 0.056 vacancies/µm per ion, throughout the whole diamond thickness, where both charge carriers could be trapped. The ionization profile was represented as the total number of free carriers generated per unit length, considering that an average energy of 13.6 eV creates one e/h pair in diamond. The profiles are both displayed in Figure 3.
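The two numbers quoted above can be turned into rough orders of magnitude with the following back-of-the-envelope sketch; it is not SRIM output, the conversion helpers are illustrative, and only the 0.056 vacancies/µm per ion figure, the 13.6 eV per pair value, and the fluence range come from the text.

```python
# Rough, illustrative estimates from the values quoted in the text (not SRIM output):
# the damaging 5 MeV proton beam creates ~0.056 vacancies per um of track per ion, and
# the ionization profile converts electronic energy loss into e-h pairs at 13.6 eV/pair.

PAIR_CREATION_ENERGY_EV = 13.6
VACANCIES_PER_UM_PER_ION = 0.056  # from the SRIM calculation quoted in the text

def vacancy_density_cm3(fluence_ions_cm2: float) -> float:
    """Approximate vacancy volume density for a uniform damage profile.
    0.056 vac/(um*ion) equals 0.056 * 1e4 vac/(cm*ion); multiplying by the areal
    fluence (ions/cm^2) gives vacancies per cm^3."""
    return VACANCIES_PER_UM_PER_ION * 1e4 * fluence_ions_cm2

def carriers_per_um(stopping_power_ev_per_um: float) -> float:
    """Free carriers generated per um of track for a given electronic energy loss."""
    return stopping_power_ev_per_um / PAIR_CREATION_ENERGY_EV

for fluence in (8.6e12, 1.7e14):  # fluence range of the three damaged areas
    print(f"fluence {fluence:.1e} ions/cm^2 -> ~{vacancy_density_cm3(fluence):.1e} vacancies/cm^3")
```

Under these assumptions the three damaged areas span roughly 5 × 10¹⁵ to 1 × 10¹⁷ vacancies/cm³, which is consistent with the qualitative statement that the strength of polarization grows with the accumulated damage.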
During the performance of all IBIC measurements, the acquisition system recorded the coordinates (x,y) where the events took place, the associated channel (energy), and the time of the event occurrence. All this information was saved in the memory of the data acquisition system and transferred to a raw file for post-processing analysis. The quantification of the temporal evolution of the pulse height spectrum was carried out with a Matlab code that extracted from the raw file all the events that had taken place in a fixed interval of time. The full spectra were decomposed into a train of consecutive histograms in order to obtain the continuous degradation of the signal during polarization. These sequences of spectra were analyzed to obtain the most probable centroids from a Gaussian fit, and the CCE values were calculated from the centroids of the peaks. A tail pulse generator and an Si detector mounted in the vacuum chamber were used for the calibration of the processing electronic chain, which allowed for the conversion of the pulse height channel into CCE (assuming that the Si detector has 100% CCE for the same irradiation configuration). An example of the described data analysis processing for the 65 µm detector is shown in Figure 4. In this example, a 4 MeV C³⁺ microbeam uniformly scanned one selected area (≈75 × 75 µm²) of the pristine region of the detector. The IBIC signal was monitored during irradiation to record the events over the time period using the acquisition system. In the offline analysis, the pulse height histograms were constructed setting the exposure time to one second. In Figure 4, the pulse height spectra at different times are plotted together with the best Gaussian fit overlapping the spectra. As can be seen, the pulse height histograms continuously shift to lower channels as irradiation progresses, which is indicative of a continuous decrease in the CCE value due to polarization. In the XY plane, the positions of the centroids of the pulse height histograms were plotted (in black) as a function of time. This was the figure of merit for studying the temporal evolution of CCE [49].
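The offline analysis described above was done with a Matlab code; the sketch below reproduces the same steps in Python for illustration, assuming a simple event list (one time stamp and one pulse-height channel per event) and an arbitrary channel-to-CCE calibration constant.

```python
# Python sketch of the offline analysis described above (the original analysis used a
# Matlab code). The event-list format and the channel-to-CCE calibration constant are
# illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, centroid, sigma):
    return amplitude * np.exp(-0.5 * ((x - centroid) / sigma) ** 2)

def cce_vs_time(times_s, channels, slice_s=1.0, n_bins=256, channel_at_full_cce=1000.0):
    """Slice the event list into consecutive time windows, histogram each window,
    fit a Gaussian to find the most probable pulse height, and convert the centroid
    to CCE with a calibration (here: channel 1000 corresponds to 100% CCE)."""
    times_s, channels = np.asarray(times_s), np.asarray(channels)
    evolution = []
    for t0 in np.arange(times_s.min(), times_s.max(), slice_s):
        sel = (times_s >= t0) & (times_s < t0 + slice_s)
        if sel.sum() < 20:                      # skip sparsely populated slices
            continue
        counts, edges = np.histogram(channels[sel], bins=n_bins)
        centers = 0.5 * (edges[:-1] + edges[1:])
        guess = [counts.max(), centers[np.argmax(counts)], 10.0]
        try:
            (_, centroid, _), _ = curve_fit(gaussian, centers, counts, p0=guess)
        except RuntimeError:                    # fit did not converge for this slice
            continue
        evolution.append((t0, centroid / channel_at_full_cce))
    return evolution

# Toy usage: a pulse-height peak that drifts to lower channels as polarization builds up.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 60, 30_000))
ch = rng.normal(900 - 4.0 * t, 15)              # centroid falls ~4 channels per second
for t0, cce in cce_vs_time(t, ch)[:3]:
    print(f"t = {t0:4.1f} s  CCE = {cce:.3f}")
```

The train of fitted centroids plotted against time is exactly the figure of merit used in the following sections to compare the different quenching techniques.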
Leakage Current Figure 5a shows the measured detector leakage current versus the applied bias voltages (ranging from −70 to 70 V) for the 65 µm diamond detector. This detector was used only in applications without heating, so only one measurement at room temperature was performed. As can be seen, the leakage current presents a linear behavior (dashed black lines correspond to the best linear fit for each polarity) with the applied biases for both polarities, which is consistent with an ohmic device, keeping the leakage below 30 pA for biases around ±65 V (E = ±1 V/µm). A 500 µm diamond detector was used to study the effect of temperature on the polarization phenomena. In Figure 5b, the measured leakage current as a function of the applied biases for the 500 µm detector is shown at different temperatures from room temperature (RT) to 215 °C. This temperature range covered that used during the IBIC characterization, and the selected bias scanning was adapted to its particular thickness. Similar ohmic behavior was obtained for the measurement at room temperature. However, with increasing temperature, linearity was only present for biases below 100 V. The dashed black lines correspond to the best linear fit for biases below 100 V for both polarities, showing the deviation between the experimental data and the linear relation around this voltage. In addition, an increase in the leakage value was observed with an increase in temperature, changing the leakage from 10 pA at room temperature to around 100 nA at temperatures around 215 °C. For temperatures above 220 °C, instabilities appeared in the detector and an exponential increase in the leakage was observed, changing the leakage current by more than one order of magnitude under small increases in the operating temperature. For this reason, the IBIC measurements were limited to temperatures below 200 °C and 225 V bias for a satisfactory performance of the detector. Note that, due to the wide range of values of the measured leakage between room temperature and 215 °C, in both graphs, the y-axes are represented in the logarithmic scale.
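As an aside, the ohmic check described above amounts to a straight-line fit of the low-bias part of the I-V curve; the sketch below illustrates this with synthetic data (not the measured curves), using the 100 V limit mentioned in the text as the fit range.

```python
# Illustrative sketch (synthetic data, not the measured I-V curves): fit the low-bias
# part of an I-V measurement to a straight line to check for ohmic behavior and flag
# the deviation from linearity reported above 100 V at elevated temperature.
import numpy as np

def ohmic_fit(bias_v, current_a, fit_limit_v=100.0):
    """Linear fit I = V / R restricted to |V| <= fit_limit_v; returns the effective
    resistance and the relative residual of each point with respect to the fit."""
    bias_v, current_a = np.asarray(bias_v, float), np.asarray(current_a, float)
    mask = np.abs(bias_v) <= fit_limit_v
    slope, intercept = np.polyfit(bias_v[mask], current_a[mask], 1)
    residual = (current_a - (slope * bias_v + intercept)) / np.maximum(np.abs(current_a), 1e-15)
    return 1.0 / slope, residual

# Toy data: ohmic below ~100 V with a mild super-linear rise at higher bias.
v = np.linspace(-225, 225, 19)
i = v / 5e12 + np.sign(v) * 2e-11 * np.clip(np.abs(v) - 100, 0, None) ** 1.5
resistance, res = ohmic_fit(v, i)
print(f"effective resistance ~ {resistance:.2e} ohm")
print("relative deviation at +225 V:", f"{res[-1]:.2%}")
```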
Thermal Excitation To study the influence of polarization quenching using thermal excitation, the temporal evolution of CCE was measured for three different temperatures and for both charge carriers. For this purpose, 6 regions with the same dimensions were defined at different positions of one unirradiated area of the detector, as illustrated in the IBIC map shown in Figure 6. In this map, the scanned area was close to the edge of the detector; therefore, the homogeneous pristine region (shown in red) and the region outside the detector (shown in green) were clearly distinguishable. For the measurement of each individual region, the temperature and the polarity (positive for holes and negative for electrons) were set to the values shown in the map. The detector was operated with the bias voltage kept at ±225 V (E = ±0.45 V/µm), close to the limit where the leakage current deviates from linearity. During IBIC characterization, the heated regions of interest (ROIs) were exposed to an ion beam of 3 MeV He²⁺ focused to a micrometer size to scan over the selected area at a count rate of above 1 kcps during 300 s. Because of the limits imposed by the leakage current, the working temperature in this study never exceeded 200 °C.
During the processing analysis, the temporal evolution of CCE was obtained setting the acquisition time to 0.5 s per histogram. Figure 7 shows a comparison of the temporal evolution of CCE for three different temperatures from RT to around 200 °C for holes (Figure 7a) and for electrons (Figure 7b). In Figure 7, the instantaneous degradation of CCE for holes at RT due to polarization can be observed. During the irradiation at RT, the CCE suffers a decrease close to 60% of its initial value after 5 min. With increasing temperature, the decrease in CCE remains, but is much less apparent with respect to RT. It is noticeable that the shape of the temporal evolution of CCE is practically the same for 100 °C and 175 °C, showing no significant changes. Under these conditions of heating, the percentage of decrease in the value of CCE was reduced to ≈13% of the initial value in the same interval of time. For the case of polarization induced by electrons, only a weak drop of ≈2% in the CCE was observed for the three different temperatures (clearly visible in the central zoom panel shown in Figure 7b). Therefore, no polarization (or only soft polarization) appears in this detector from RT to 200 °C for electrons. The presence of strong polarization for holes and no significant polarization for electrons has been reported in several studies and has been related to the density of traps for each type of free carrier and the quality of the sc-CVD [27]. The partial restoration of CCE for holes (between RT and 100 °C) is related to an increase in the probability of de-trapping due to thermal re-emission [50], which allows the liberation of the trapped free carriers, partially reducing the accumulated space charge.
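The role of thermal re-emission can be illustrated with a simple Arrhenius estimate; note that the trap depth and attempt frequency below are generic assumed values for illustration only, not parameters extracted from these measurements.

```python
# Back-of-the-envelope illustration of thermal re-emission (not parameters from this
# paper): the de-trapping rate is often modeled as an Arrhenius law, so even moderate
# heating shortens the emission time dramatically. Trap depth and attempt frequency
# below are assumed, generic values.
import math

K_BOLTZMANN_EV = 8.617e-5     # eV/K
ATTEMPT_FREQUENCY_HZ = 1e12   # assumed attempt-to-escape frequency
TRAP_DEPTH_EV = 0.9           # assumed effective trap depth

def emission_time_s(temperature_c: float) -> float:
    """Characteristic de-trapping time, tau = exp(E_t / kT) / nu."""
    t_kelvin = temperature_c + 273.15
    return math.exp(TRAP_DEPTH_EV / (K_BOLTZMANN_EV * t_kelvin)) / ATTEMPT_FREQUENCY_HZ

for t_c in (24, 95, 175):     # the three operating temperatures used in this study
    print(f"{t_c:3d} C: tau ~ {emission_time_s(t_c):.2e} s")
```

Under these assumptions the emission time drops by several orders of magnitude between room temperature and 95 °C, which is consistent with the observation that most of the benefit of heating is already obtained around 100 °C and that further heating to 175 °C changes little.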
Bias On/Off The effect on polarization of switching the bias on/off during continuous irradiation was also explored. Following the same protocol explained in the previous paragraphs, different regions were defined in a pristine area of the detector. The temporal evolution of CCE was obtained with the IBIC technique exposing the selected areas to a 4 MeV C³⁺ beam during 180 s at a count rate of around 1 kcps. During these measurements, the bias was on during 30 s and was manually turned off for another period of 30 s. This duty cycle (DC = 50%) was repeated at least three times. The ion beam was always on the sample during these measurements. All these evaluations were carried out with the 65 µm diamond detector in ambient conditions (room temperature, no illumination), and the behavior of both charge carriers was investigated. The main results and the clock diagram of the applied biases are depicted in Figure 8. In order to compare these measured values, the temporal evolution of CCE under irradiation with continuous bias and beam is also presented. The pulse height spectra were recorded setting the acquisition time to 1 s per histogram. For the measurement where the bias and the beam were applied simultaneously (continuous case), a strong decrease in the value of CCE can be observed for holes, dropping the efficiency by more than 50% after 180 s of continuous irradiation. The behavior is different for electrons, where the decrease in the value of CCE is slower, reaching a maximum drop of 10% during the same irradiation time. For the measurement where the bias is switched on/off, the behavior is significantly different. For the polarization induced by holes, during the first cycle of 30 s, the same evolution of CCE was observed. This behavior was expected because, until this moment, no changes with respect to the continuous case were applied. After 30 s of not biasing the detector, the bias was reapplied. A clear recovery of the CCE value with respect to the value reached before switching off the bias was obtained; however, the decrease in CCE value rapidly continued. This recovery was observed in the three different cycles applied to the detector during this measurement.
For electrons, a slight recovery was observed, but due to the fact that the polarization induced by electrons was soft, the recovery was not as intense as the recovery observed for holes, or at least the effect remained negligible. The influence on the stability of the detector of closing the ion beam during the time intervals where the detector was unbiased was also studied. During these measurements, the bias and the beam were on for 30 s, and the bias was then manually turned off for a period of 30 s. During this period, the ion beam was stopped by closing a gate valve. Figure 9 presents the results for the temporal evolution of CCE for electrons and holes using a 50% duty cycle. In addition, a complete description of the temporal intervals at which the bias voltages were applied can be found in the clock diagram located between the graphs in Figure 9. Again, for polarization induced by holes, the typical deterioration of the output signals was measured during the time that the beam and the bias were on. After 30 s of removing the bias and the beam, they were established again. It can be seen that, contrary to the recovery shown with the continuous beam, no noticeable restoration of the signal was obtained by applying the bias and the beam again. As shown in Figure 9, the signal starts from the same position as the one reached before the bias/beam were switched off and rapidly continues to decrease, and therefore the effect seems to be less pronounced. The same recurring behavior can be observed during the three cycles. For electrons, the results are quite similar to the case with a continuous beam, showing no significant recovery.
In addition, the differences with respect to the temporal evolution of the region irradiated with continuous beam and bias are minimal. A possible explanation for the observed difference in the recovery of the CCE has been proposed: During normal operation of the detector, when the bias is applied, the electric field is established between the electrodes. Without polarization effects, this electric field is homogeneous in the full volume of the device (planar geometry). When the shallow beam impinges on the top electrode, the atoms present in the lattice are ionized by the ions and this process takes place close to the surface of the crystal. Under the influence of the applied electric field, the generated free carriers start to move but, depending on the polarity, only one type of free carrier will travel through the full detector and is mainly responsible for the induced signal. During their motion, some of these carriers will be captured, increasing the build-up of space charge. This generates an opposite electric field, which affects the biased electric field, according to the superposition principle, and the recombination rate increases, leaving a deficit in the collected charge. In the experimental data, this corresponds to a continuous reduction in the detected signal over time when the bias and the beam are on during the first stage of the irradiation. Now the applied bias is switched off. Depending on whether the ion beam was also switched off or not, two different behaviors were observed: Beam On If the bias is switched off but the beam continues irradiating the sample, new free carriers are generated. Now, in the unbiased detector, only the electric field induced by the accumulated space charge exists and, since it has an opposite direction to the biased electric field, the other type of free carrier starts to move through the detector. During this motion, recombination occurs with the captured carriers, decreasing the built-up space charge inside the diamond crystal and, in consequence, the polarization electric field. When the electric field due to the bias is established again by switching the bias on, the conditions inside the detector are similar to the initial conditions because a significant reduction in the polarization electric field has happened due to a reduction in the accumulated space charge. For this reason, the recovery of the CCE is more efficient when the ion beam continues irradiating the detector after the bias has been shut down. Beam Off If the bias and the beam are switched off simultaneously, no new free carriers are generated during these periods. The absence of new free carriers moving in the volume makes the recombination process much weaker, and the space charge build-up remains almost constant. When the bias is re-established, the conditions in the detector are roughly similar (the built-up space charge is the same) and the recovery is less apparent. In addition, this explains why the CCE starts approximately at the same value as the one reached just when the bias was switched off. Schematic diagrams showing the charge carrier transport phenomenon (for negative polarity) [51] and the generation/recombination mechanisms are summarized in Figure 10a. These effects are more visible when the polarization is induced by holes (positive bias), as it was for our particular detector.
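The qualitative explanation above can be condensed into a toy numerical model; this is only an illustration of the proposed mechanism, not the authors' model, and the rate constants are arbitrary, chosen only to reproduce the trends of Figures 8 and 9.

```python
# Toy numerical model (illustrative only, not the authors' model) of the explanation above:
# with bias and beam on, trapped charge builds up and degrades CCE; with bias off and beam
# on, freshly generated carriers of the opposite sign recombine with the trapped charge;
# with bias and beam both off, the trapped charge is essentially frozen.

def simulate(cycle, n_cycles=3, dt=1.0, trap_rate=0.02, recomb_rate=0.05, sensitivity=0.6):
    """cycle: list of (duration_s, bias_on, beam_on) segments repeated n_cycles times.
    Returns a list of (time, CCE); CCE is reported only while the bias is on."""
    q_trapped, t, history = 0.0, 0.0, []
    for _ in range(n_cycles):
        for duration, bias_on, beam_on in cycle:
            for _ in range(int(duration / dt)):
                if bias_on and beam_on:
                    q_trapped += trap_rate * (1.0 - q_trapped)   # space charge builds up
                elif beam_on:                                    # bias off, beam on
                    q_trapped -= recomb_rate * q_trapped         # recombination empties traps
                # bias off and beam off: q_trapped unchanged (traps stay filled)
                t += dt
                if bias_on:
                    history.append((t, 1.0 - sensitivity * q_trapped))
    return history

continuous_beam = [(30, True, True), (30, False, True)]    # Figure 8 style cycle
interrupted_beam = [(30, True, True), (30, False, False)]  # Figure 9 style cycle
for name, cycle in [("beam on while unbiased", continuous_beam),
                    ("beam off while unbiased", interrupted_beam)]:
    cce_end = simulate(cycle)[-1][1]
    print(f"{name}: CCE after 3 cycles ~ {cce_end:.2f}")
```

Even this crude bookkeeping reproduces the key observation: the CCE recovers at the start of each biased period only when the beam keeps irradiating the unbiased detector, because that is the only case in which the trapped charge is partially neutralized.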
Alternating Bias An HV alternating unit was used to study the depolarization effect through the use of periodic rectangular pulses changing the bias polarity during continuous irradiation. For this purpose, selected areas of the 65 µm detector were irradiated continuously with 4.0 MeV C³⁺ as a shallow probing ion beam. During the processing analysis, each pulse height histogram was recorded for 1 s. In these measurements, the applied bias polarities in the sample changed from positive to negative with the duty cycles, as shown in Table 1 or in the chronogram in Figure 11. Additionally, Figure 11 also summarizes the temporal evolution of CCE at different duty cycles of the alternating bias. All the measurements were performed at room temperature and without illumination. The current design of the alternating HV device started only from voltages with negative polarity; therefore, for comparison reasons, the measurement using continuous bias and continuous beam was only performed with negative polarity, as can be seen in Figure 11. As can be seen from the figures described above, an overall continuous decrease in CCE was observed immediately after the irradiation started for the continuous case. For the measurements in which the bias is alternated, the irradiation starts with the bias set to −12 V (E = −0.18 V/µm). The temporal evolution during the first 20 s is the same as in the continuous case, starting from around 80% of CCE. After the bias switches the polarity to positive, the CCE starts from a higher value (≈87%), which is compatible with the initial CCE when a positive bias is applied, as shown in [27]. This difference in the CCE value is due to the different probabilities of trapping electrons and holes. With positive polarity, it is clear that polarization becomes stronger than with negative bias, as can be seen in the abrupt drop in signal. After changing polarity again, no significant restoration of the signal for electrons was reached, with the CCE value starting only a small fraction above the last value obtained with negative bias. In contrast to the strong polarization induced by holes, a slow drop in the CCE value was observed after negative polarity was established for electrons. However, according to the results, when positive bias is applied again, a significant, but not complete, recovery of the CCE can be observed.
In this recovery, the CCE always reaches a higher value with respect to the last value at the same polarity, which is an indicator that a significant portion of the accumulated positive space charge is removed during the cycle with negative polarity. In general, applying an alternating bias shows that the efficiency of removing the accumulated space charge is higher for holes than for electrons; unfortunately, strong polarization rapidly resumes when the bias is applied in the case of holes. For this reason, an appropriate selection of duty cycles proves to be crucial for improving depolarization for optimal operation of a diamond detector. Optical Excitation To study the influence of optical excitation on the temporal evolution of CCE during polarization, different ROIs were defined within the damaged regions and in one clean region of the pristine area of the 65 µm detector. As was mentioned in the Introduction, standard light does not affect the spectroscopic properties of a diamond detector; however, to confirm this, the CCE value of the virgin regions of the detector was studied as a function of the applied bias for both polarities, with and without illumination. Figure 12a shows the relation between the CCE values for the detector biased from −30 V (E = −0.46 V/µm) to 30 V (E = 0.46 V/µm). The error bars are defined as the full width at half maximum of each pulse height spectrum. From the figure, it is clear that the detector maintains a good spectroscopic performance, showing how the CCE rapidly reaches the saturation level (maximum CCE) for biases higher than ±2 V. In addition, the results illustrate that the applied bias polarity has little effect on the reported values of the CCE. No significant difference was observed in the CCE value during the application of light, which was a sign that white light did not have any effect on the spectroscopic behavior of the undamaged diamond detector, as would be expected due to the large band gap of the diamond crystal. To evaluate the same effect in the regions with different levels of damage, where the density of defects created in the crystal lattice increased the trapping efficiency, an analysis using the IBIC technique was carried out scanning an area of the detector of about 420 × 320 µm² in size. This area contains the damaged regions probed by a 2 MeV proton beam with the detector biased at ±6 V (E = ±0.09 V/µm). In the resulting IBIC map shown in Figure 12b, the CCE is evaluated in the four ROIs, which are marked with solid line squares of different colors.
The effect of switching the light source on/off during the continuous irradiation on the evolution of CCE is presented in Figure 13 for positive and negative bias. During the first 300 s from the beginning of the irradiation, continuous degradation of CCE was observed in all the regions except for the pristine region, where the CCE remained constant. In Figure 13, it can be seen that polarization is clearly dependent on the amount of damage in the irradiated region, since the decrease is more remarkable in the region damaged with the highest fluence. This irradiation period was carried out in dark conditions. After the first 300 s, the light source was applied, continuously illuminating the full sample for 180 s. During this period, two different behaviors of CCE were observed depending on the applied polarity. For positive bias, a partial enhancement of CCE was observed for the damaged regions. The evolution of CCE during this cycle with illumination remained nearly flat for all the regions, which indicated that part of the accumulated space charge was removed. According to the experimental results, the recovery appears to be stronger for the most damaged region. On the one hand, no variation in CCE was observed in the pristine region, showing again that light does not have any effect in the undamaged zones of a diamond detector. On the other hand, for the negative bias, an increase in the degradation of CCE due to illumination was reported. After this interval of time, the light was switched off again and the irradiation continued for another 300 s under dark conditions. For positive bias, once the light was switched off, the decrease in CCE rapidly resumed at an identical rate for all the regions except, again, for the pristine region. However, for the negative bias, apparently, the drop did not persist, and it seemed that polarization was interrupted. This could be observed by the fact that no more degradation of CCE was observed after the light was switched off. The complete illumination cycle is represented in the chronogram shown in the top portion of Figure 13. The absence of polarization in the pristine region can be explained by the purity (electronic grade) of the analyzed detector. Without radiation damage, the number of active traps (intrinsic defects or impurities) to capture free carriers is not enough to induce a significant distortion in the internal electric field [19]. For the damaged regions, the results now confirm the evidence that light induces almost instantaneous de-trapping of the hole traps, but not the electron traps, for this particular detector.
A review of the literature showed the capability of depolarization using short-wavelength light (such as ultraviolet light) in high resistivity semiconductors [52], but this was due to the fact that the electromagnetic radiation in this spectral region had enough energy to ionize and/or excite the medium, creating new electron/hole pairs [53]. White light can only affect de-trapping, promoting electrons by optical transitions from trapped levels close to the valence or conduction bands in the diamond crystal, which helps to free the charge carriers captured in traps. However, it is still unclear why the CCE remained constant for negative bias (signal mainly created by electrons) when the light was removed after the illumination, indicating that trapping was interrupted even under continuous irradiation. This could be explained by the filling of all the traps during the irradiation, so that the accumulated space charge remains constant and an equilibrium between the polarization electric field and the applied electric field is reached. At this stage of understanding, we conclude that light has some effect activating/deactivating the traps, but only in the new traps (defects) created in the damaged regions. Taking into consideration that space charge accumulation depends on the trapping probability, which is a function of the density of traps, the de-trapping time, and the transport properties of the free carriers, such as their mobilities and lifetimes, it is clear that light affects the trapping/de-trapping rates of these energy levels in different ways [54]. A more systematic analysis is required for understanding this behavior. It is possible to use an interlinked combination of monochromatic light (for example, using LEDs) and the transient current technique (TCT), as proposed in [55], to study the electric field profile and the level of distortion due to positive or negative accumulated charge. In addition, it is important to characterize the defect levels induced by irradiation in diamond. Conclusions In this study, the IBIC technique with an ion beam microprobe was successfully applied to study the ion beam-induced polarization phenomena in two different diamond detectors.
In particular, the main aim was to provide extensive information on the different polarization-quenching techniques applied to sc-CVD diamond samples. This set of techniques has historically been used for depolarization in high-resistivity semiconductors such as CdTe. The comparison of approaches leads to a definition of the optimal method for depolarizing diamond detectors during working operation, according to the experimental results achieved in this study. Our experimental results indicate that turning off the bias, and then reapplying it during continuous irradiation, can be used as a satisfactory method for recovering the CCE to its pre-polarization state when polarization is induced by hole drift. This mitigation of polarization is due to a complete or partial reduction in the accumulated space charge, which re-establishes the internal electric field and the transport conditions of the holes within the sample. In contrast, the recovery obtained for polarization induced by electrons is weaker. For the case where the ion beam is also interrupted, the analysis leads to the following conclusions: the recovery of the signal for holes is significantly smaller than when the ion beam is irradiating the sample, and again no significant effect appeared for electrons. As discussed, this is because no new electron/hole pairs are created during the period when the beam is off, which suppresses recombination with the trapped carriers and therefore prevents a reduction in the accumulated space charge. For this reason, the polarization effect remains after the beam is applied again. Heating the detector, or keeping it continuously at an elevated temperature, is also a particularly promising method for reducing the adverse effect of polarization. The results obtained in this study showed that, for our particular detector, a reduction in the space charge could be achieved by heating the sample to temperatures above 90 °C. This can be explained by the release of free carriers from the trap levels through thermal excitation: the surrounding temperature gives the carriers enough energy to escape and continue their drift motion, reducing the density of the accumulated space charge. It is important to highlight that the amount of space charge removed is independent of the operating temperature, at least for the temperatures explored in this study, since the temporal evolution of CCE has the same profile for temperatures close to 100 °C and 200 °C. Future studies should aim to confirm these results over a larger temperature range, reaching temperatures up to 400 °C; in that case, however, increased leakage current could degrade the detector response. Furthermore, study of the temporal evolution of TCT signals during polarization at different temperatures could provide additional information about the charge transport properties. The application of alternating bias during continuous irradiation revealed no significant reduction of the polarization phenomena, showing no significant recovery for polarization induced by electrons and only partial recovery for holes. This is due to the stronger polarization induced by holes, which quickly resumes when the polarity is changed to positive. However, appropriate matching of the duty-cycle lengths to the particular working conditions may result in a satisfactory mitigation strategy.
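To illustrate why heating to roughly 90 °C and above can release trapped charge on laboratory time scales, the minimal sketch below evaluates a simple Arrhenius-type thermal emission rate; the attempt frequency and trap activation energy used are assumed, purely illustrative values, not quantities measured in this work.

```python
import math

K_B_EV = 8.617e-5   # Boltzmann constant, eV/K
NU_0 = 1e13         # assumed attempt-to-escape frequency, 1/s (illustrative)
E_A = 1.0           # assumed trap activation energy, eV (illustrative)

def detrapping_time(temp_c, e_a=E_A, nu0=NU_0):
    """Mean thermal release time tau = 1 / (nu0 * exp(-Ea / kT)) of a carrier in a trap."""
    t_kelvin = temp_c + 273.15
    return 1.0 / (nu0 * math.exp(-e_a / (K_B_EV * t_kelvin)))

for t in (20, 90, 100, 200):
    print(f"{t:3d} C -> tau ~ {detrapping_time(t):.2g} s")
```

With these assumed numbers the release time drops from hours at room temperature to a few seconds near 90-100 °C and to milliseconds near 200 °C, which is qualitatively consistent with the weak dependence on the exact operating temperature noted above once the traps empty much faster than the measurement time.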
Finally, it has also been shown that illumination with standard light can be used as a method to suppress the strength of polarization induced by hole traps in damaged regions. A widely accepted explanation for the decrease in accumulated space charge during illumination of a sample is that white light has enough energy to promote and release the captured holes from the trap levels. Unfortunately, we found the opposite behavior for electrons. The observed difference in the behavior of the two charge carriers can only be attributed to the change in the electric field profile within the detector after the illumination was applied. The suppression of polarization effects induced by electrons after illumination of the sample is still unclear and requires further investigation.
15,257.2
2022-01-01T00:00:00.000
[ "Physics", "Materials Science" ]
Aquifer Vulnerability to Surface Contamination: A Case of the New Millennium City, Kaduna, Kaduna State, Nigeria The study evaluates the potential environmental impact on surface water and groundwater, as well as the aquifer protective capacity, and suggests possible solutions for part of the Kaduna Millennium City. A total of thirty Vertical Electrical Soundings (VES) were carried out using the Schlumberger electrode configuration with a maximum current-electrode spacing of 200 m. The interpreted data revealed weak aquifer protective capacity zones in some parts of the northeast and central areas (VES stations A4, A5, A6, B4, C3 and C4), with longitudinal conductance ranging from 0.114 to 0.194 siemens; these represent 20% of the study area. Average/good aquifer protective capacity zones have longitudinal conductance ranging from 0.210 to 0.559 siemens. These zones cover twenty-four (24) VES stations, representing 80% of the study area, and are characterized by high thickness and low resistivity values of the weathered and fractured basement; they are therefore recommended for borehole siting. Similarly, deep aquifer zones where the adjoining rocks are highly resistive (VES stations A1, A2, A3, B1, B2, D2, and E3) were delineated for siting waste and sewage disposal. The work also suggests that urban development programs and the associated anthropogenic activities should be properly planned to avoid areas that are prone to contamination. Introduction All people, irrespective of their development, economic and social condition, are entitled to have access to drinking water of good quality and in adequate quantity [1]. In any environment, there is a strong relationship between human activities and water pollution due to anthropogenic activities resulting from the growth of waste disposal, agrochemicals, industries and technological advancement [2]. In many parts of Nigeria, where there are large areas under cultivation, the contamination of water supplies by fertilizers and pesticides used in agriculture has become a serious problem. The quality of the built environment, both natural and man-made, depends on its environmental control [3]. Often, there are factors that cause events leading to water, soil and environmental pollution. While these factors are generally categorized into natural and man-made, their resultant effects are diverse, calamitous and disastrous. Environmental pollution is a well-known disaster occurring on the earth's surface, and groundwater is one of our most important sources of water for domestic and industrial purposes. Unfortunately, groundwater is susceptible to pollutants. Groundwater contamination occurs when man-made products such as gasoline, oil, road salts and chemicals get into the groundwater and cause it to become unsafe and unfit for human use. Environmental pollution and contamination are becoming a common occurrence in parts of developing countries. According to [1], an estimated 103 million Nigerians still lack sanitation facilities and 69 million do not have access to an improved source of water in both rural and urban communities, leaving the people vulnerable to diseases. The report also established that 15% of semi-urban dwellers are without access to safe excreta disposal facilities, 75% use pit latrines and 60% discharge wastewater directly to the environment.
The potential sources of pollution in Millennium City, Kaduna, are open pits, careless waste disposal, haulage roads, degradable materials such as animal remains, and agrochemicals. Groundwater and environmental pollution can also result from a poor drainage system. Sustainable drainage systems are approaches put in place to manage water quantity (flooding), water quality (pollution) and amenity issues in the environment. A good drainage system provides opportunities to reduce the causes and impacts of flooding, remove pollutants from urban runoff at source, and combine water management with recreation and wildlife. In the year 2012, 363 people were feared dead and 2.1 million people were displaced across Nigeria as a result of floods. According to the National Emergency Management Agency (NEMA), 30 states out of 36 in Nigeria, including Kaduna State, were affected by that flood, which was considered the worst in the past 40 years and caused damage of an estimated value of N2.6 trillion. These floods also gave rise to environmental pollution problems which affected the health of citizens across Nigeria [4]. The Millennium City is planned to become a centre of attraction and may soon experience a mass influx of people. In order to accommodate this expected population expansion, a proper geophysical investigation is necessary for the precise location of sewage disposal sites and productive boreholes, which will help to curb environmental pollution and groundwater contamination and avert a likely future disaster. Hence, while the government is largely responsible for protecting the lives and property of its citizenry, this research provides good information on the subsurface properties underlying the study area in order to adequately advise government on environmental impact. The Study Area Millennium City is located within Kaduna town, the capital of Kaduna State. The area lies within latitudes 10° 30.47' N to 10° 31.08' N and longitudes 007° 30.06' E to 007° 30.33' E. The location covers a total landmass of 200,000 square meters and has an average elevation of 607 m above sea level. The terrain is drained by both surface water and groundwater. The noticeable streams in the study area include the Dan-Hono and Kubai streams, both of which drain into the River Kaduna. The relief of the area is characterized by undulating plains and gentle slopes, and consists of peneplains with eroded flat tops (Figure 1), often capped by layers of indurated laterites. The southern boundary of the farm adjacent to the stream exhibits swampy conditions during rainy seasons, while the upper parts drain freely. The drainage pattern is dendritic and stream flow fluctuates seasonally [5]. The rocks of the area are capped by laterites; the laterites are sometimes highly consolidated, especially at the surface, and weather into lateritic nodules mixed with silty and sandy clays [5]. The typical rock types underlying the entire land area consist of the Precambrian Migmatite-Gneiss Complex and meta-sediments/meta-volcanics (mostly schists, quartzites, amphibolites and banded iron formations) [6]. The unweathered bedrock is characterized by rapid grain-size variations from fine-grained to pegmatitic regions, but normal grain sizes are dominant [7].
Materials and Method Electrical resistivity is a geophysical survey method in which an electrical current is injected into the ground in order to measure the electrical properties of the subsurface. It is based on the response of the subsurface material to the current introduced into the ground through electrodes [8]. In this survey, a total of thirty (30) Vertical Electrical Sounding (VES) points were acquired with a maximum current-electrode spread of 200 meters, using an Omega resistivity meter in the Schlumberger configuration. A typical arrangement with four electrodes is shown in Figure 2. The fundamental physical law used in resistivity surveys is Ohm's law, which governs the flow of current in the ground: V = IR (1). The response of the subsurface material to the current flow, known as the resistivity ρ, is given by ρ = RA/L (2), where R is the resistance of a conductor of length L and cross-sectional area A, and ρ is a property of the material considered. The theoretical study of earth-resistivity methods considers the case of a completely homogeneous, isotropic medium, but normally the ground resistivity is related to various geological parameters such as the mineral and fluid content, porosity and degree of water saturation of the rock (Loke, 1990). For the four-electrode arrangement of Figure 2, the potential difference ΔV measured between the potential electrodes for an injected current I gives the apparent resistivity ρ_a = K (ΔV/I), where K is a term called the geometric factor, which depends on the arrangement of the four electrodes and can be derived from Figure 2 [8]. Further derivations are involved in estimating the aquifer protective capacity using the Dar-Zarrouk parameters, transverse resistance (T) and longitudinal conductance (S). The Dar-Zarrouk Parameters A geoelectric layer is described by two fundamental parameters: its apparent resistivity (ρ_a) and its thickness (h) [10]. The geoelectric parameters derived from the apparent resistivity and thickness are the longitudinal conductance, S = h/ρ (7), where S is the longitudinal conductance, h is the thickness and ρ is the apparent resistivity of the aquiferous layer, and the transverse resistance, T = hρ (8), where T is the transverse resistance, h is the thickness and ρ is the apparent resistivity of the corresponding layer. The parameters T and S are named the Dar-Zarrouk parameters. The longitudinal conductance (S) is the geoelectric parameter used to define target areas of groundwater potential [10]. High S and T values usually indicate a relatively thick succession and should be accorded the highest priority in terms of groundwater potential. Data Processing In order to investigate the subsurface structural trends in the study area, and to reveal the lithological sequence of the subsurface formations, the field data were interpreted using the computer software Res1D version 1.00.07 Beta. The final model geoelectric parameters of the six VES stations along profile C were used for the preparation of the geoelectric/geologic section for that profile, as shown in Figures 4a-e. Figure 3 is a typical resistivity curve obtained from the study area. Results and Discussion Figures 4a-e show the geoelectric/geologic sections. The study area is highly variable in the resistivity and thickness values of the layers within the depth penetrated.
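Before turning to the layered models, a minimal sketch of how a single Schlumberger reading is reduced to apparent resistivity may be useful: it computes the geometric factor K = π((AB/2)² − (MN/2)²)/MN and ρ_a = K·ΔV/I. The electrode spacings, voltage and current in the example are hypothetical values, not readings from this survey.

```python
import math

def schlumberger_k(ab_half, mn):
    """Geometric factor K (in m) for a Schlumberger array.
    ab_half: half current-electrode spacing AB/2 in m; mn: potential-electrode spacing MN in m."""
    return math.pi * (ab_half ** 2 - (mn / 2.0) ** 2) / mn

def apparent_resistivity(ab_half, mn, delta_v, current):
    """Apparent resistivity rho_a = K * (dV / I), in ohm-m."""
    return schlumberger_k(ab_half, mn) * delta_v / current

# Hypothetical field reading: AB/2 = 100 m, MN = 10 m, dV = 2.5 mV at I = 50 mA.
print(round(apparent_resistivity(100.0, 10.0, 2.5e-3, 50e-3), 1))  # ~156.7 ohm-m
```

Plotting ρ_a against AB/2 for successive spreads gives a field curve of the type shown in Figure 3, which is then inverted to obtain the layer resistivities and thicknesses used below.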
Generally, the sections revealed three to five subsurface layers: topsoil/laterite/indurated laterite/quartzite veins, clay/silt/sand, laterite, a weathered/fractured layer and the fresh basement. The first layer, often referred to as the engineering layer, has a thickness varying from 0.3 m to 10 m across the study area. The results show that the study area is underlain by unfractured basement rocks and that the average thickness of the weathered layer is about 23 m, although depths of about 42 m were encountered. Figure 5 shows the resistivity of the top layer of the study area, which is highly variable. The southwest and some parts of the north are highly resistive. This may be connected with the presence of surface outcrops and indurated laterite in the first layer, which may also be of great importance as it reduces surface runoff and infiltration into the underlying aquifer. The high resistivity of the top layer over most of the study area indicates low conductivity of the subsurface. Since the earth is a natural conductor, the resistive nature of the top layer could contribute to the protective capacity over the area's aquifer. Also, the presence of laterites and quartzites beneath the clayey topsoil, which extends beyond 2.5 m, acts as a filter that could reduce the degree of aquifer vulnerability to surface contamination. Figures 6 and 7 show the resistivity and thickness maps of the weathered layer respectively, which closely agree with the work of [11]. The deeper aquifer is found mostly in the southwest (VES points D1, D2, E1, E2 and E3) and in some parts of the north (VES points A2, B2 and B3) and east (VES points B6 and C6) of the map, denoted by the deep blue colour. This also corresponds to low aquifer resistivity, which makes these the best regions for borehole siting. This information is vital in evaluating the aquifer protective capacity, which is the interest of this research work. [11] held that the sections where the aquifer appears thickest with a relatively low resistivity value are the best areas for the exploitation of underground water or the siting of boreholes. With the aquifer thickness ranging from about 11.0 m to 42.0 m, the study area appears largely suitable for the siting of boreholes, except for some parts of the south (VES points C5 and D5) where the aquifer thickness is relatively low. Figures 5 and 6 show that zones where the deep and light blue colours coincide are observed over most of the study area, and these are therefore recommended as the best zones for groundwater development. [12] noted that any region with low aquifer thickness (< 11.0 m) may be prone to contamination from near-surface sources such as waste and sewage due to its shallow nature. Figures 8 and 9 show the resistivity of, and the depth to, the bedrock respectively. According to [12], when the bedrock has relatively low resistivity (< 750 Ωm), this could indicate fracturing and high aquifer potential. [13] observed that the resistivity of a basement is a function of its degree of weathering. The values of basement resistivity range from 1000 Ωm to 7500 Ωm. VES stations B4 and E6 are highly resistive (> 6000 Ωm); the rocks in these regions are believed to be fresh basement. The areas with low basement resistivity values (< 2000 Ωm) are also shown on the map; the rocks in these regions may have been fractured, faulted or heavily weathered, which could be of great importance for groundwater development.
On the other hand, Figure 9 shows the depth to the basement rocks, which is highly variable. The deep basement observed in the northern parts marks the best regions for waste and sewage disposal. This is to avoid easy infiltration of surface runoff water into the aquifer, which can contaminate the groundwater and thereby cause pollution that could result in loss of life and waste of resources. Aquifer Protective Capacity Evaluation Aquifer protective capacity was evaluated from the aquifer layer resistivity and its thickness using the Dar-Zarrouk parameters, transverse resistance (T) and longitudinal conductance (S). Equations (7) and (8) were used to compute the longitudinal conductance and transverse resistance respectively, shown in Table 2, which was used to determine the overburden protective capacity of the aquifer in the study area. The longitudinal conductance map (Figure 10) was utilized in evaluating the overburden protective capacity, since Table 1 establishes that the higher the longitudinal conductance, the higher the aquifer protective capacity. According to [10], the higher the resistivity of a material, the lower its conductivity, and vice versa. Since the earth's subsurface acts as a natural filter to percolating fluid, its ability to retard infiltration is a measure of its protective capacity. For instance, clayey soil is known to be relatively impermeable, whereas sandy soil, which is relatively permeable, can provide an infiltration path for pollutants to enter the aquifers [14]. The study area is characterized by values of longitudinal conductance ranging from 0.114 to 0.559 siemens, which define its ability to retard infiltration. [15] observed that a highly impervious clayey overburden, characterized by relatively high longitudinal conductance, offers protection to the underlying aquifer. Considering Table 1, the resistive lateritic top layer and the clay beneath the laterite, the study area could be considered on average protective against surface contamination. Twenty-four (24) VES stations have moderate longitudinal conductance (S > 0.2 siemens); they represent 80% of the study area, which is considered relatively protective. However, some parts of the northeast and centre (VES stations A4, A5, A6, B4, C3 and C4) have low longitudinal conductance (S < 0.2 siemens) and are considered weak and vulnerable to contamination. These regions cover six (6) VES stations, which represent 20% of the study area. Conclusion and Recommendations Research has revealed that human activities within an area, such as farming, urbanization, and waste and sewage disposal, have a great impact on groundwater quality and environmental pollution. The Kaduna Millennium City has shown the capacity to accommodate the expected population expansion owing to its location, government interest, relief and groundwater resources, as reported by [5]. In this work, the following have been identified: i. Weak aquifer protective capacity zones (depicted with light blue colour in Figure 10), found in some parts of the northeast and centre (VES stations A4, A5, A6, B4, C3 and C4). The aquifer in this region is relatively vulnerable to surface contamination. Hence, human activities such as the use of agrochemicals, excavation, and waste and sewage disposal should be avoided or minimized as much as possible. ii. Average/good aquifer protective capacity zones (depicted with deep blue colour in Figure 10), which cover a significant part of the study area.
These are the regions found to have a high thickness and low resistivity values of the weathered and fractured basement. They have a high aquifer protective capacity against surface contamination and are therefore best considered for borehole siting. iii. Deep zones (depicted with deep blue colour in Figure 9), found in the northern and some southwestern parts of the study area (VES stations A1, A2, A3, B1, B2, D2, and E3). These are areas where the basement rocks lie deepest, the adjoining rocks are highly resistive (> 1000 Ωm) and the ground is underlain by unfractured basement rock. They are therefore recommended for siting sewage and waste disposal. However, careless handling of pollutants could still generate a negative impact on the area's aquifer and hence affect environmental management. It is therefore recommended that urban development programs be properly planned to avoid areas that are prone to contamination. Adequate solid waste disposal methods should be adopted, phasing out open dumpsites to safeguard public health. Landfill bases should be made of concrete and paved surfaces to prevent leaching of poisonous substances into the groundwater.
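For completeness, the Dar-Zarrouk parameters of equations (7) and (8) and the protective-capacity rating applied above (S below 0.2 siemens treated as weak/vulnerable) can be evaluated with a short sketch like the following; the three-layer overburden in the example is hypothetical and is not one of the thirty soundings.

```python
def dar_zarrouk(layers):
    """Dar-Zarrouk parameters for a layered overburden.
    layers: list of (thickness_m, resistivity_ohm_m) tuples.
    Returns (S, T): longitudinal conductance (siemens) and transverse resistance (ohm-m^2)."""
    s = sum(h / rho for h, rho in layers)
    t = sum(h * rho for h, rho in layers)
    return s, t

def protective_capacity(s):
    """Rating used in this study: S < 0.2 siemens -> weak, S >= 0.2 siemens -> moderate/good."""
    return "weak / vulnerable" if s < 0.2 else "moderate to good"

# Hypothetical overburden: lateritic topsoil, clay, weathered basement (thickness m, resistivity ohm-m).
example = [(1.5, 450.0), (6.0, 60.0), (18.0, 120.0)]
s, t = dar_zarrouk(example)
print(f"S = {s:.3f} siemens, T = {t:.0f} ohm-m^2 -> {protective_capacity(s)}")
```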
3,999.8
2018-01-11T00:00:00.000
[ "Environmental Science", "Geology" ]
Computational Models of Reactive Oxygen Species as Metabolic Byproducts and Signal-Transduction Modulators Reactive oxygen species (ROS) are widely involved in intracellular signaling and human pathologies, but their precise roles have been difficult to enumerate and integrate holistically. The context- and dose-dependent intracellular effects of ROS can lead to contradictory experimental results and confounded interpretations. For example, lower levels of ROS promote cell signaling and proliferation, whereas abundant ROS cause overwhelming damage to biomolecules and cellular apoptosis or senescence. These complexities raise the question of whether the many facets of ROS biology can be joined under a common mechanistic framework using computational modeling. Here, we take inventory of some current models for ROS production or ROS regulation of signaling pathways. Several models captured non-intuitive observations or made predictions that were later verified by experiment. There remains a need for systems-level analyses that jointly incorporate ROS production, handling, and modulation of multiple signal-transduction cascades. INTRODUCTION Reactive oxygen species (ROS) play a complex role in cellular biology. Initially viewed merely as harmful byproducts of metabolism, ROS are now known to serve additional functions as intracellular regulators of various signaling pathways. At low levels, ROS function as a reactive second messenger mainly by reversible oxidation of key amino acids of target proteins. High ROS levels, by contrast, cause damage to various biomolecules (lipids, proteins, and DNA) and have been linked to pathologies including neurodegeneration (Smith et al., 1997;Giasson et al., 2000), atherosclerosis (Patetsios et al., 2001;Guzik et al., 2006), and renal disease (Nishikawa et al., 2000;Dounousi et al., 2006). ROS play an especially complex role in cancer, with various oncogenes and tumor suppressors influencing, and influenced by, the redox environment of the cell (Meng et al., 2002;Leslie et al., 2003). It is challenging to study experimentally how endogenous and exogenous sources of ROS are handled by the cell. In addition, the context-and dose-dependent intracellular consequences of ROS can result in confounding observations. ROS has been found to stimulate proliferation in some cell types under certain experimental conditions (Ruiz-Ginés et al., 2000;Nieto et al., 2002) and inhibit proliferation (Qu et al., 2011) or induce apoptosis (Pierce et al., 1991) in others. Such contextual and experimental complexities make it difficult to understand ROS holistically by experimentation alone. Computational modeling approaches can tackle this problem by simulating the concurrent dynamics of many variables, including those that are difficult to access experimentally (Janes and Lauffenburger, 2013). Our review here covers the handful of models described thus far for ROS production and ROS regulation of signaling pathways (Figure 1). We start with a brief introduction of ROS, the biological processes that generate them, and the signaling pathways that sense ROS and detoxify the cell. We next focus on models that simulate ROS production by the mitochondria, the predominant intracellular source of ROS, and by membrane-bound enzymes, which link extracellular signaling to intracellular ROS production. We also discuss models that simulate ROS regulation of various signaling pathways, giving a broader view of the influence ROS have on intracellular signaling. 
Finally, we discuss the need for a systems-level analysis of ROS signaling to provide a generalizable framework that accounts for the many downstream cellular effects of ROS. ROS: SOURCES, SCAVENGERS, AND EFFECTORS Reactive oxygen species are chemically reactive species derived from the incomplete reduction of oxygen. Common examples are hydrogen peroxide (H 2 O 2 ), superoxide (O − 2 ), and hydroxyl radicals (HO·). For most cell types, ROS production mainly occurs through mitochondrial oxidative phosphorylation. Inherent leakiness of the electron transport chain (ETC), specifically from complex I and complex III, causes some electrons to flow out of the pathway and partially reduce oxygen to the superoxide anion. Other endogenous sources of ROS include the membrane-bound enzyme NADPH oxidase, which produces ROS in response to various ligands, along with other enzymes such as xanthine oxidase, cyclooxygenases, and nitric oxide synthase. To detoxify excess ROS, cells are equipped with antioxidants, including the thiols glutathione and thioredoxin, which are regulated by the transcription factor NRF2 (Itoh et al., 1997;Chorley et al., 2012). Tight regulation of ROS by antioxidant systems is necessary to balance the generation of ROS and to curtail oxidative damage to biomolecules. CHALLENGES OF MEASURING ROS Reactive oxygen species are difficult to measure reliably within cells. For example, early measurements of cellular oxidative state used dyes such as 2 ,7 -dichlorofluorescin (DCFH) which were later shown to create additional radical species (Rota et al., 1999). The secondary reactions and instability of the dye made long-term imaging with DCFH impossible. To address these deficiencies, researchers engineered fluorescent reporter proteins such as HyPer, RoGFP, and RxYFP (Hanson et al., 2004;Belousov et al., 2006;Meyer and Dick, 2010), which undergo redox-sensitive conformational changes that elicit a change in fluorescence. Genetically encoded fluorescent proteins enable live-cell imaging and are further capable of localizing ROS production to specific sub-cellular compartments (Malinouski et al., 2011;Swain et al., 2016). Nevertheless, these sensors might miss low concentrations of H 2 O 2 due to the endogenous enzyme peroxiredoxin, which is ∼100-fold more active toward H 2 O 2 compared to introduced probes (Ezeriņa et al., 2014). Given these challenges, some groups have taken a more computational approach to calculating the kinetics associated with H 2 O 2 (Brito and Antunes, 2014). Lim et al. built a reactiondiffusion model to study localization of H 2 O 2 , which is important for control and specificity of redox signaling. They incorporated cytoplasmic diffusion into their reduced kinetic model of H 2 O 2 clearance, in which peroxiredoxin is the dominant scavenging molecule (Lim et al., 2015). Using modeled concentration profiles obtained after bolus addition of H 2 O 2 to the extracellular medium, the authors determined order-of-magnitude estimates for intracellular H 2 O 2 diffusion through the cytosol, with a length scale of 4 µm and a time scale of 1 ms (Lim et al., 2016). The short length scale and rapid time scale indicate that H 2 O 2 degradation and signaling are localized to the area where H 2 O 2 is produced, contradicting the common modeling assumption of a well-mixed cytoplasm. This finding could explain discrepancies observed between bolus addition versus steady intracellular generation of H 2 O 2 (Sobotta et al., 2013;Cheong et al., 2015). 
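To make the order-of-magnitude reasoning concrete, the sketch below computes the mean lifetime τ = 1/k and diffusion length λ = sqrt(D/k) of H2O2 under pseudo-first-order scavenging; the diffusion coefficient and scavenging rate used are illustrative assumptions rather than parameters taken from Lim et al., and they reproduce only the micrometer/millisecond orders of magnitude quoted above.

```python
import math

# Illustrative assumptions (not values from Lim et al.):
D_H2O2 = 1.5e-9   # m^2/s, H2O2 diffusion coefficient (free-solution order of magnitude)
K_SCAV = 1.0e3    # 1/s, pseudo-first-order scavenging rate (peroxiredoxin-dominated)

tau = 1.0 / K_SCAV                  # mean lifetime before scavenging
lam = math.sqrt(D_H2O2 / K_SCAV)    # distance diffused before scavenging

print(f"time scale   ~ {tau * 1e3:.1f} ms")
print(f"length scale ~ {lam * 1e6:.1f} um")   # on the order of a micrometer with these inputs
```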
Rapid H 2 O 2 scavenging also has implications for intracellular signaling, as H 2 O 2 reactivity is limited to molecules in the subcellular vicinity. ROS PRODUCTION BY COMPLEX III OF THE ETC Physiologically, ROS production increases under hypoxic conditions. Hypoxia causes a decrease in the maximum reaction rate of complex IV, which is thought to cause excess electron leakage from other components of the ETC, such as complex III (Chandel et al., 2000). After a return to normoxic conditions, ROS remain at higher hypoxic levels. The stable switch in ROS production is relevant during organ transplantation and other surgeries requiring an ischemic period. To better understand the mechanism behind this bistability, Selivanov et al. (2009) modeled the Q cycle mechanism of Complex III in the mitochondrial respiratory chain as the primary mechanism of ROS generation. Complex III can take on as many as 400 redox states due to its binding to quinones. The authors elaborated a system of differential equations describing the evolution of all of the redox states of Complex III. Model simulations predicted that Complex III can exist in two different steady states, a low ROS-producing state and a high ROS-producing state. This bistability is dependent upon the initial conditions of the system, specifically the redox state as predicted by levels of semiquinone and free ubiquinol. If starting in a highly reduced state, the overall system remains reduced, whereas if it starts in a less reduced state, Complex III progresses to a steady state with low semiquinone concentration and thus low ROS production. The overall system evolves to the high ROS producing state either by an increase in succinate concentration, causing the reduction of ubiquinone to ubiquinol, or a decrease in oxygen content. Once switched to this high ROS-producing state, Complex III persists in that state even after a return to lower succinate concentration or normoxic conditions. The sustained increase in ROS production provides a mechanism that may contribute to reperfusion injury after ischemia. The model predictions were experimentally validated in isolated rat brain mitochondria incubated with succinate with or without the addition of ADP. The addition of ADP, and subsequent synthesis of ATP, switches mitochondria to a low ROS-producing state and thereby lowers mitochondrial membrane potential. Once all the ADP is consumed, membrane potential increases to pre-ADP levels, but ROS production remains at the lower initial level. These results agree with model predictions that two levels of ROS production could coexist under the same set of parameters and give rise to metabolic heterogeneity in an isogenic population of cells. ROS PRODUCTION BY COMPLEXES I AND III OF THE ETC Reactive oxygen species are also produced by Complex I of the ETC. Gauthier et al. (2013a) built a computational model of the ETC focusing on ROS production by both complex I and III (Figure 1). Simulations were used to study the control of ROS production in cardiac myocytes under different metabolic conditions. The model is composed of non-linear ordinary differential equations describing the oxidation states of the various forms of ubiquinone, produced by complex I electron transfer, and the three subunits of complex III: cytochrome b, cytochrome c1, and the iron sulfur protein. The authors investigated how mitochondrial membrane potential, matrix pH, and ROS scavenging affect ROS production and control. 
When membrane potential increased 20 mV higher than unstressed cells to above ∼150 mV (Padmaraj et al., 2014), the model predicted that complex III ROS production as a function of membrane potential switches from zeroth order (constant production) to first order (exponential production). Increased membrane potential leads to a reduction in the Q cycle reaction rate, or conversion of ubiquinol to ubiquinone, causing a substantial increase in ROS production rate. The model predictions agree with experimental results reporting a threshold membrane potential of 153 mV, above which ROS production increases dramatically (Korshunov et al., 1997). When simulating an increase in mitochondrial matrix pH, the model predicted that ROS production from complexes I and III increases during forward electron transport. This pH-dependent mechanism of ROS generation was experimentally observed by Selivanov et al. (2008) who found that an increase in pH from 6 to 7 caused a threefold increase in ROS production rate. During reverse electron transport, where electrons flow toward complex I in the presence of a weak reducing agent, complex I ROS production also increased with matrix alkalinization. The model therefore correctly predicted the dependence of ROS production on both mitochondrial membrane potential and matrix pH. ROS levels are governed not only by production, but also by clearance through scavenging mechanisms. To gain a more complete picture of ROS dynamics, Gauthier et al. added glutathione and thioredoxin-mediated ROS scavenging mechanisms to the model (Aon et al., 2012). ROS production decreased to a minimum level as the mitochondrial environment became more oxidized and then rose again as the scavenging systems became depleted. This result agrees with the redoxoptimized ROS balance hypothesis (Aon et al., 2010), which states that ROS levels are lowest at an intermediate mitochondrial redox potential. Together, the authors' minimal model of ROS regulation produced results that matched many independent experiments describing different regimes of ROS production, providing support for the hypothesis that the cellular redox state influences the rate of ROS production. Applying their model, Gauthier went on to investigate ROS production and scavenging in the context of heart failure (Gauthier et al., 2013b). Integrating the ETC-ROS model discussed above into a mitochondrial energetic-redox model (Kembro et al., 2013) allowed the authors to test the hypothesis that mitochondrial Ca 2+ mismanagement leads to high levels of ROS during heart failure. In agreement with this hypothesis, their model showed that under conditions of mismanaged mitochondrial Ca 2+ , NADH levels decrease drastically under simulated cardiac pacing, highlighting the link between compromised Ca 2+ and NADH regulation. Lower amounts of NADH lead to lower NADPH levels and a decreased ability to reduce scavenging enzymes for reuse, causing ROS to accumulate. ROS levels in mitochondria increase under increased load (Satapati et al., 2015), but surprisingly the model predicted that ROS production actually decreases under these conditions. The net increase in ROS abundance stems from an even-further reduction in ROS scavenging, which causes ROS accumulation in the cell. Therefore, in the setting of heart failure, preservation or restoration of scavenging enzymes may prove more effective than efforts to block ROS production (Linke et al., 2005). In another model of ROS production by ETC complexes I and III, Bazil et al. 
(2016) described ROS generation by oxidative phosphorylation coupled to ATP demand. They updated an existing kinetic model of oxidative phosphorylation (Beard, 2005) to include ROS generation by complexes I and III and first-order scavenging by superoxide dismutase and peroxidase. Model simulations agreed with previous findings that free radical production by complex III is higher than complex I production under physiological conditions (Gauthier et al., 2013a). As ATP demand increases, the steady state production of ROS also increases, in line with experimental observations (Liu, 2010). The authors further applied their model to study reverse electron transport that is seen during reperfusion. Simulating ischemia/reperfusion led to bistability in ROS production (Selivanov et al., 2009(Selivanov et al., , 2011 only when the activity of complex II was increased. Complex II activity requires electrons to be supplied to the quinone pool by the dehydrogenation of succinate to fumarate. The predicted importance of complex II agrees with work by Chouchani et al. (2014) showing that succinate is a main driver of mitochondrial ROS production upon reperfusion (Chouchani et al., 2014). Therefore, complex II inhibition during reperfusion could prove useful to decrease ROS production and reperfusion injury (Valls-Lacalle et al., 2016). ROS PRODUCTION BY THE MITOCHONDRIAL NETWORK In a phenomenon known as ROS-Induced ROS Release (RIRR), damaged mitochondria produce an increased amount of ROS, which causes surrounding mitochondria to increase ROS production through a positive-feedback loop. Park et al. (2011) used an agent-based model describing inter-mitochondrial signaling to study the role of mitochondrial network dynamics in mitochondria-driven ROS production (Figure 1). Simulations were performed with three different mitochondrial networks: uniformly distributed mitochondria, as seen in cardiomyocytes; irregularly distributed mitochondria, as in neurons; and sparsely distributed mitochondria, as found in white blood cells (Maianski et al., 2004). The simulations introduced hydrogen peroxide as an initial oxidative stress, causing mitochondria in the surrounding area to produce more ROS. Mitochondrial ROS diffuse stochastically by random walk in 2D space, amplifying the local ROS response. Depending on the initial oxidative stress insult and the mitochondrial network dynamics, ROS production is blocked by antioxidant enzyme systems or becomes amplified by RIRR, which propagates ROS through the entire cell. The goal of the model was to predict the percent reactive mitochondria resulting from an initial oxidative stress input and the initial hydrogen peroxide concentration that causes RIRR. The model indicated that ROS propagation is faster in the cardiomyocyte model than in the irregular distribution model, as shown by a higher dose dependence of reactive mitochondria as a function of initial oxidative stress. In addition to mitochondrial distribution, the model predicted that the density of mitochondria affects the response to oxidative stress inputs. Cells with a low density of mitochondria have considerable ROS propagation after low levels of oxidative stress, while cells with a high density of mitochondria only show strong ROS propagation after high levels of oxidative stress. The authors hypothesized that these differing responses to oxidative stress are due to differences in ROS signal transduction between mitochondrial networks. 
They further simulated the addition of different antioxidants to find that superoxide-scavenging antioxidants block ROS propagation more effectively in the cardiomyocyte model, while antioxidants that detoxify hydrogen peroxide are more effective in the irregular-distribution and low-density models of mitochondria. These results suggest that mitochondrial network configuration influences which molecular species is used to propagate ROS in the cell. ROS PRODUCTION IN RELATION TO ANTIOXIDANT SIGNALING Cyclosporin A (CsA) is an immunosuppressant, which indirectly causes oxidative stress (Parra Cid et al., 2003) and adaptively activates the NRF2 pathway in the kidney. Hamon et al. (2014) fused an in vitro pharmacokinetic model (Wilmes et al., 2013) of CsA distribution in cultured renal epithelial cells with a dynamical model of NRF2 signaling originally designed to capture the cellular response to xenobiotics (Zhang et al., 2009). The authors adapted the NRF2 model to accommodate ROS as a state variable generated in proportion to cytosolic CsA (Hamon et al., 2014). In the revised model, ROS are detoxified by glutathione peroxidase and further act as an oxidant and inhibitor of KEAP1, which degrades NRF2. Last, CsA was forbidden from interacting with the aryl hydrocarbon receptor as xenobiotics do in the original model, because experimental data was lacking for such an interaction. To parameterize the fused model, Bayesian inference was used together with transcriptomic, proteomic, and metabolomic data collected from cells treated with different concentrations of CsA dosed daily for 2 weeks. The model predicted that low doses of CsA yielded widespread oscillations throughout the network as cells metabolized the administered CsA and detoxified ROS before the next administration. At high doses, however, the cell is overwhelmed and the modeled network locks into an elevated state of ROS adaptation. These predictions were not followed up experimentally, but the work of Hamon et al. (2014) nonetheless illustrates how toxicologic models can be repurposed for ROS specifically. ROS PRODUCTION IN THE PHAGOSOME MEMBRANE BY NADPH OXIDASE Aside from the ETC, ROS also play a key role in pathogen clearance. Neutrophils utilize ROS to attack bacteria engulfed within a phagosome. The source of this ROS burst is not from mitochondrial respiration but from the NADPH oxidase complex at the plasma membrane (Henderson and Chappel, 1996) (Figure 1). Levels of ROS oscillate in the neutrophil (Wymann et al., 1989); however, the mechanism behind these oscillations was unclear. Olsen et al. (2003) proposed that the oscillations arose from interactions among myeloperoxidase, melatonin, NADPH, and NADPH oxidase. To explore the oscillatory behavior, they built a two-compartment, differential equation model of the phagosome and the cytoplasm (Olsen et al., 2003). Without NADPH oxidase activity, model simulations exclusively produced damped oscillations that converged to a steady-state; by contrast, addition of NADPH oxidase elicited sustained oscillations similar to those reported experimentally (Petty, 2001). The authors triggered ROS oscillations in neutrophils with the activating chemotactic peptide FMLP and showed that pre-incubation with an inhibitor of NADPH oxidase blocked oscillations. Their model further predicted that melatonin would change the amplitude of the ROS oscillations measured. Pre-incubation of FMLPactivated neutrophils with melatonin confirmed the predicted increases in ROS amplitude. 
Computational and experimental modeling of NADPH oxidase in this setting allowed the authors to understand the basis of melatonin "priming" previously observed in neutrophils (Recchioni et al., 1998), underlining the power of pairing in silico and in vitro experiments. ROS PRODUCTION DURING WNT/β-CATENIN SIGNALING Reactive oxygen species are also generated as a secondary byproduct of multiple signal transduction cascades (Meier et al., 1989;Thannickal and Fanburg, 1995). Haack et al. (2015) built a model of the WNT/β-catenin signaling pathway that included membrane-related processes as well as ROS signaling. The authors sought to explain experimental results showing that disruption of membrane lipid rafts inhibits WNT/β-catenin signaling, and also that ROS activate WNT signaling in the context of differentiation of human neural progenitor cells (Funato et al., 2006;Rharass et al., 2014). The three-compartment model is based on massaction kinetics and includes: a membrane model, in which WNT binds to receptor LRP6 causing its phosphorylation within lipid rafts; an intracellular model, in which AXIN binds phosphorylated LRP6 to prevent degradation of βcatenin; and a redox model, in which ROS increase the concentration of DVL-bound AXIN, making AXIN unable to degrade β-catenin. Simulations were initiated with a burst of ROS that was shown experimentally to coincide with the beginning of neural progenitor differentiation induced by growth-factor withdrawal. The model predicted an immediate, transient β-catenin stabilization resulting from redox-dependent DVL/AXIN binding, followed by a sustained β-catenin response arising from lipid raftdependent canonical WNT signaling. The immediate β-catenin accumulation was observed experimentally by the authors in lipid raft-deficient cells that maintained a transient β-catenin response. By including ROS signaling, the extended WNT/β-catenin signaling model correctly captured experimental β-catenin nuclear dynamics during early neuronal differentiation. Dwivedi et al. (2015) looked at modulation of cell signaling by ROS in another setting, using the IL-4 signaling pathway as a redox-regulated case study. The authors sought to identify the most important mechanisms of redox regulation in the IL-4 pathway, which is important for regulating the effector T-cell response. The activated IL-4 receptor complex upregulates ROS through NADPH oxidase (Sharma et al., 2008), which influences signal transduction that proceeds through JAKs and culminates in the phosphorylation of STAT6. To identify the combination of regulatory mechanisms that best recapitulated the dynamics of IL-4 induced STAT6 phosphorylation, the authors turned to Monte Carlo analysis of an IL-4 ordinary differential equation model. Phosphorylated STAT6 dynamics were best captured by a model that incorporated a protein tyrosine phosphatase whose activity and nucleocytoplasmic shuttling were ROS sensitive. ROS regulation of phosphatase activity and localization, along with other ROS-independent mechanisms, were included in the systems-level model of IL-4 signaling, with parameters fit to experimental data in IL-4-stimulated Jurkat cells. The model predicted diminished STAT6 phosphorylation following IL-4 stimulation and ROS inhibition, which was confirmed experimentally by NADPH oxidase inhibition of IL-4-stimulated cells. 
Transient oxidation of protein tyrosine phosphatases was also observed experimentally by oxidized protein tyrosine phosphatase immunoprecipitation of extracts from IL-4-stimulated Jurkat cells. The authors' systems-level model provides a framework for investigating additional modes of receptor-initiated oxidation not previously explored. Smith and Shanley (2013) adapted an existing differential equation model of insulin signaling (Sedaghat et al., 2002) to incorporate ROS and study the interplay between insulin signaling and oxidative stress (Figure 1). Insulin-stimulated ROS production was assumed to occur through activation of NADPH oxidase and is about fivefold higher than the background level of mitochondrially produced ROS (Mahadev et al., 2001). ROS deactivate the phosphatases PTEN and PTP, activate the kinases JNK and IKK, and are detoxified by cytoplasmic SOD2. This model was used to make predictions about ROS, FOXO, SOD2, and insulin receptor abundances over long time scales. ROS CROSSTALK WITH INSULIN SIGNALING When hydrogen peroxide was added as an oxidant to the system with or without insulin stimulation, the model predicted surprisingly different responses. Hydrogen peroxide alone caused modest glucose uptake and insulin alone caused strong glucose uptake, but hydrogen peroxide and insulin stimulation together were antagonistic, causing only moderate glucose uptake. In the model, this dampening effect of oxidative stress occurs because hydrogen peroxide and insulin together activate protein kinases (e.g., JNK, IKK), which cause hyperphosphorylation of IRS1 and decrease its ability to form the IRS1-PI3K complex that stimulates glucose uptake. The model also predicted that the FOXO-mediated antioxidant response depends critically on the extent of oxidative stress. With low oxidative stress, the antioxidant enzyme SOD2 is upregulated by FOXO through a JNK-mediated mechanism, but SOD2 is downregulated at higher levels of stress through an IKK-mediated mechanism. Although simplified in its handling of ROS and antioxidant pathways, this integrated systems model captures some of the complexities of oxidative stress for an important metabolic pathway. ROS PRODUCTION AS ONE NODE IN A LARGER NETWORK OF CARDIAC FIBROBLAST SIGNALING The previously discussed models incorporated ROS into a single canonical signaling pathway, but the generation and handling of ROS pervades multiple pathways and can lead to counterintuitive cell outcomes . Zeigler et al. (2016) incorporated ROS as part of a much larger signaling network to identify regulators of cardiac fibrosis. A cardiac fibroblast signaling network was designed to study drivers of fibrosis, which is implicated in many cardiac pathologies (Moreo et al., 2009). The network was compiled from experimental data on 10 pathways that are known to be important in cardiac injury, such as the IL-1 and TGFβ pathways. The model was formed using a logicbased differential equation approach (Kraeutler et al., 2010), whereby species are represented as differential equations with rates of change dictated by Hill functions and truth tables comprised of interacting biomolecules. ROS production is controlled by the activity of NADPH oxidase and feeds into the truth tables of JNK and ERK activation, linking ROS generation at the plasma membrane to downstream intracellular responses. In the model, reducing ROS levels had far-reaching and context-dependent effects on the network. 
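As a concrete illustration of the logic-based differential equation formalism referenced above, the toy sketch below drives a node with a normalized Hill function and a continuous OR combination of two inputs; the node names, thresholds and time constants are hypothetical, and this is not the published cardiac fibroblast network or its parameterization.

```python
import numpy as np

def hill(x, ec50=0.5, n=2):
    """Normalized Hill activation mapping an input on [0, 1] to an activity on [0, 1]."""
    return x ** n / (ec50 ** n + x ** n)

def fuzzy_or(a, b):
    """Continuous OR gate used to combine two activating inputs."""
    return a + b - a * b

def simulate(ros_input, tgfb_input, tau=1.0, t_end=20.0, dt=0.01):
    """Toy cascade: ROS OR TGFB activates a kinase node, which activates a downstream output."""
    kinase, output = 0.0, 0.0
    for _ in np.arange(0.0, t_end, dt):
        kinase += dt / tau * (fuzzy_or(hill(ros_input), hill(tgfb_input)) - kinase)
        output += dt / tau * (hill(kinase) - output)
    return kinase, output

print(simulate(ros_input=0.8, tgfb_input=0.1))   # ROS-dominated activation
print(simulate(ros_input=0.1, tgfb_input=0.9))   # high-TGFB context
```

Each species relaxes toward the steady state set by its gated inputs on the time scale tau, which is what allows such normalized networks to be fit against qualitative activation data rather than detailed kinetic measurements.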
Under baseline conditions, reductions in ROS caused a decrease in matrix metalloproteinase 9 (MMP9), which is important for the breakdown of extracellular matrix. By contrast, in an environment with high TGFβ signaling, like in a myocardial infarction, reducing ROS led to an increase in MMP9 activity. Therefore, a therapeutic prediction of the model is that antioxidant treatment for fibrosis would be more beneficial in the high TGFβ environment of myocardial infarct. CONCLUSION AND FUTURE OUTLOOK The computational models of ROS biology covered in this review largely focus on ROS handling within the cell or on ROS modulation of canonical signaling pathways. In the future, we anticipate more sophisticated models that combine ROS handling and signaling concurrently. A prime test bed for such an integrated approach would be the NF-κB pathway, which is activated by ROS (Schreck et al., 1991) and is responsible for inducing scavenging enzymes such as SOD2 (Wong and Goeddel, 1988). Finn and Kemp (2012) have assembled a provisional model of IKKβ S-glutathionylation in the setting of antioxidants and chemotherapy-induced ROS. The coupling of signaling, production, and scavenging could give rise to feedback networks that explain the variable oxidative stress observed in some settings among single cells in very similar microenvironments . There is also a need to build multiscale models that place ROS in the broader context of developing tissues, tumors, and infections (Schwarz, 1996;Bajikar and Janes, 2012;Wang et al., 2012). The dynamics of proliferation and death impinge on metabolism and signal transduction, which culminate to impact the redox state of cells in the population. Crosstalk between these cellular pathways may require different classes of modeling than those implemented so far (Anderson et al., 2006;Chitforoushzadeh et al., 2016). Advances in measurement will likewise expand the scope of targets modified by ROS (Kim et al., 2015) and reveal the extent to which molecular crosstalk is underappreciated. Integration of ROS signaling into larger networks may allow researchers to predict outcomes of drug treatments that affect ROS generation, which causes drug resistance in some cancer contexts (Nieborowska-Skorska et al., 2014;Okon et al., 2015). A deeper understanding of ROS network dynamics could generate combinatorial treatments that avoid neutralizing drug efficacy. In the broader human population, there are many polymorphisms that affect ROS generation and scavenging, such as p22 phox C242T and SOD2 A16V (Bresciani et al., 2013;Meijles et al., 2016). These variants may tune how ROS interacts with other signaling networks, contributing to heterogeneous patient responses during therapy. ROS are a fact of life that cannot be ignored. Like a living cell, investigators must find ways to deal with ROS holistically and achieve our goals despite their presence. The tools for pharmacologic modulation of ROS are predominantly limited to antioxidants. Systems modeling of ROS may one day provide a venue for exploring more-precise interventions that account for the complex biological processes involved. AUTHOR CONTRIBUTIONS EP and CS conceived of the review with input from KJ. EP and CS drafted the review with revisions provided by KJ.
6,407.8
2016-11-29T00:00:00.000
[ "Biology", "Chemistry", "Computer Science", "Medicine" ]
Quantum field-theoretical description of neutrino oscillations in the T2K experiment Abstract. We consider neutrino oscillations in the T2K experiment using a new quantum field-theoretical approach to the description of processes passing at finite space-time intervals. It is based on the Feynman diagram technique in the coordinate representation, supplemented by modified rules of passing to the momentum representation. Effectively this leads to the Feynman propagators in the momentum representation being modified, while the rest of the Feynman rules remain unchanged. The approach does not make use of wave packets; the initial and final particle states are described by plane waves, which essentially simplifies the calculations. The fading out of the oscillations due to the momentum distribution of the initial particles is taken into account. The obtained results reproduce the predictions of the standard description and confirm that the far detector position corresponds to the first minimum of the muon production probability and the first maximum of the electron production probability. Introduction Neutrino oscillations are a widely discussed and experimentally confirmed phenomenon, which is very important for particle physics. To describe this phenomenon, either the quantum-mechanical approach in terms of plane waves or wave packets [1-3] or the quantum field-theoretical approach in terms of wave packets [3-5] is usually used. The plane-wave QM description is not consistent, since the production of states without definite masses violates energy-momentum conservation. This problem is circumvented in the framework of the QM and QFT descriptions in terms of wave packets, but the calculations of amplitudes and probabilities become very complicated. The reason is that the standard S-matrix theory is not convenient for describing processes which take place at finite space-time intervals. The idea of the novel approach is to adapt the standard S-matrix formalism for describing processes passing at finite space-time intervals by modifying the rules of transition from the coordinate representation to the momentum representation in the framework of the Feynman diagram technique, so that these rules reflect the experimental conditions. The approach, based on the work of Feynman [6], was put forward in the paper [7] and developed in the papers [8-11]. Using this approach we describe processes which can be investigated in the accelerator long-baseline experiment T2K, where a neutrino flux is created by the decays of π- and K-mesons. We consider processes in which the neutrinos are detected due to scattering by nuclei of 16O, which in a simplified treatment by means of the impulse approximation may be reduced to scattering off free neutrons, and by electrons. Quasielastic neutrino scattering by a neutron We work in the framework of the minimal extension of the Standard Model by right-handed neutrino singlets. The charged-current interaction Lagrangian of the leptons couples the field l_i of the charged lepton of the i-th generation to the neutrino fields through the PMNS matrix U_ik, where ν_k stands for the field of the neutrino state with definite mass m_k.
Let us consider the process, in which a neutrino is emitted by the decay of a π-meson and is detected in a reaction ν i + 16 O → 15 O+ p+l − .Since the neutrino energy is much greater than the binding energy per nucleon, we use the impulse approximation, in which the scattering by a nucleus is represented as the sum of the scatterings by free nucleons.Therefore, we consider the quasielastic scattering of neutrinos by a neutron ν i + n → p + l − , where l − denotes an electron e − or muon µ − .The process corresponds to the following diagram: The four-momenta of the particles are designated as it is shown in the diagram.The neutrino ν i is a virtual particle and is described by the propagator in the coordinate representation.The distance between the neutrino production and registration events is fixes and equals 295 km.In this case, since the distance is macroscopically large, the virtual particle is almost on the mass shell due to GS-theorem, i.e. the relation Following the prescription formulated in [7][8][9][10][11], we construct the so-called timedependent propagator of neutrino mass eigenstate ν i in the momentum representation as the Fourier transform of the Feynman propagator S c i (z) of the fermion field in the coordinate representation, multiplied by the additional delta function δ , which fixes the time interval T between the production and detection: where m i is the mass of neutrino state ν i , and we have used the condition that the particle is close to the mass shell to derive the last expression. For certainty, further we assume that the final charged lepton is a muon.Now we can use the modified propagator (2) to write out the amplitude of the process in the momentum representation for the case, when the time interval between the production and registration events equals T : where θ C is the Cabibbo angle, f π is the pion decay constant of dimension of mass, m (µ) is the muon mass, )� is the matrix element of the weak hadron current.Here and below we neglect the neutrino masses everywhere except in the exponential of the time-dependent propagator (2). 
In our approximation, the squared modulus of the amplitude, averaged and summed over particles' polarizations (which is shown by the angle brackets), factorizes: where � correspond to the pion decay and neutrino scattering by a neutron respectively, and the expression in the square brackets will be referred to as Let us find the probability of the process.The squared amplitude ( 4) is multiplied by the delta function of energy-momentum conservation (2π) 4 and integrated with respect to the phase volume of the final particles.But the integration would result in variation of the virtual neutrino momentum direction, which contradicts the experimental setting.Thus, one must calculate the differential probability of the process, where p n is fixed.It can be achieved by additionally multiplying the squared amplitude by 2π δ (p n − p), where p is one specific value of p n : p 2 = 0, � p is directed from the source to the detector and satisfies the conservation condition at the production vertex.It is equivalent to substituting p instead of p n in (4) and multiplying the result by 2π δ (p + q − p π ).When the intermediate virtual particle momentum is fixed, one can pass from the time interval T to the distance L according to the formula The differential probability has the form: where d 3 W 1 � d 3 p is the differential probability of pion decay, which produces neutrino with the fixed momentum � p, W (µ) 2 is the probability of neutrino scattering by a neutron with a muon in the final state, and the factor P µµ � is called, in the standard approach, the muon neutrino survival probability. The additional delta function fixes not only the direction of neutrino momentum, but also its length, thus we must integrate the formula (5) with respect to |� p|.The final probability of detecting a muon in the considered process: where and the momentum * is determined by the energy-momentum conservation at the neutrino production vertex and has the following form: which implies that the coordinate system is chosen in such a way that the pion momentum � p π is directed along the z axis, thus θ is the pion momentum polar angle. The differential probability of neutrino production in the kaon decay K + → ν i + µ + is given by the same expression (7) with the replacements cos The whole probability of finding an electron in the detection process differs from the obtained one ( 6) by the replacement W (µ) 2 → W (e) 2 , P µµ → P µe , where is called, in the standard approach, the ν µ → ν e transition probability. To obtain the probability of neutrino scattering by a neutron it is necessary to know the expression for the matrix element of the weak hadron current, which, due to the complex structure of hadrons, has to be parameterized by phenomenological form-factors.It was shown that the cross-section of neutrino scattering by a neutron has the following form [12]: where 2 is the transferred momentum, s and u are the Mandelstam variables, M is the average mass of proton and neutron, index l indicates the type of the final charged lepton, the coefficients A (l) , B (l) , C (l) , being very bulky, can be found in the same paper [12].In order to determine the limits of integration let us consider the domain of change of the Mandelstam variable t = −Q 2 .Using the results obtained in book [13] and carrying out the calculations for our case, we obtain: where m l is the charged lepton mass. 
Neutrino scattering by an electron Let us consider the process of neutrino scattering by electrons, even though its contribution is significantly less than the previous one.The registration process goes through both the charged and neutral currents.The process is described by the following two diagrams: e − (k) The four-momenta of the particles are designated as it is shown in the diagram. The total amplitude of the process is as follows: where we have introduced the time-dependent factors: Performing the calculations, which are similar to those in the Subsection 2.1, we finally obtain the probability of the whole process: where the neutrino production probability dW 1 /dΩ is defined by (7), and the momentum � p * is defined by (8).The neutrino detection probability W 2 has the standard form: where W νee and W νµe are the scattering probabilities of the corresponding neutrinos by an electron. Numerical integration Now we must average the obtained probabilities over the momentum distribution of the decaying particles, which produce neutrinos.Mesons (π-and K-) in the decay volume have a distribution not only in the magnitude of the momentum, but also in the direction, so the angle between the source-detector line, which is at the angle 2.5 • to the z axis, and the momentum of the decaying meson can vary.We introduce a coordinate system such that θ 0 = 2.5 • , ϕ 0 = 0, and define the following unit vectors: � n 0 = {sin θ 0 , 0, cos θ 0 } -from the source to the detector, (16) where θ ′ is the angle between the z axis and the randomly directed meson momentum.It is clear that the angle θ in formula (7), which is the angle between the initial meson momentum and the neutrino momentum, is the angle between these two unit vectors, which is easily found from their scalar product: Let us consider the averaged probability for the case of the π-meson decay and the detection of lepton l − , l = µ, e: , we arrive at the expression where is the momentum distribution of the neutrinos produced in the pion decays.It is clear that the similar expression can be written for the neutrinos, which are produced in the decays of K-mesons.The total neutrino flux will be equal to the sum of the two distributions ρ π (ν) and ρ K (ν) with some weights a π and a K : and the weights are related to each other, because we know the relative frequency of the decays of pions and kaons: 1 decayed kaon accounts for about 10 decayed pions, so a π /a K ≈ 10.Thus, the final probability with the neutrinos from both sources reads where we denote � for short.The form of neutrino flux dependence on the momentum for the angle θ 0 = 2.5 • is taken from the paper [14].We fit this dependence with the following function: The results of numerical integration for the neutrino oscillation processes, where the neutrino is detected in the interaction with a neutron, producing a muon or an electron, are presented in Figs. 1, 2, respectively.The results of numerical integration for the neutrino oscillation process, where the neutrino is detected in the interaction with an electron are depicted in Figure 3.One can see that the detector position, 295 km in the T2K experiment, indeed corresponds to the first oscillation minimum for the muon production and the first oscillation maximum for the electron production. 
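As an independent sanity check on the statement that the 295 km baseline sits near the first ν_μ survival minimum, the short sketch below uses the textbook two-flavor vacuum-oscillation formula rather than the modified-propagator formalism developed here; the mass splitting, mixing angle, and beam energy are typical illustrative values, not parameters taken from this paper.

```python
import numpy as np

# Two-flavor vacuum oscillation (standard approximation, not this paper's QFT result):
# P(nu_mu -> nu_mu) = 1 - sin^2(2*theta) * sin^2(1.267 * dm2[eV^2] * L[km] / E[GeV])
DM2 = 2.5e-3        # assumed atmospheric mass-squared splitting, eV^2
SIN2_2THETA = 1.0   # assumed (near-)maximal mixing
E_NU = 0.6          # typical T2K beam energy, GeV

def survival_probability(L_km, E_GeV=E_NU):
    phase = 1.267 * DM2 * L_km / E_GeV
    return 1.0 - SIN2_2THETA * np.sin(phase) ** 2

# The first survival minimum occurs where the oscillation phase equals pi/2
L_first_min = (np.pi / 2) * E_NU / (1.267 * DM2)
print(f"First nu_mu survival minimum near L = {L_first_min:.0f} km")      # ~298 km
print(f"P(nu_mu -> nu_mu) at 295 km: {survival_probability(295.0):.3f}")  # close to 0
```

With these inputs the first survival minimum falls at roughly 298 km, consistent with the numerical results reported above for the T2K baseline.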
Conclusion The modified S-matrix approach in the framework of quantum field theory allows us to consistently obtain the expression for the probability of a neutrino oscillation process, which includes the production of a virtual neutrino in the source, its propagation over macroscopic distances, and a registration process in the detector. In this approach, the initial and final particles are described by plane waves rather than wave packets, and the concept of flavor states is redundant. This formalism makes it possible to describe, simply and naturally, neutrino oscillation experiments in which the neutrinos propagate at an angle to the axis of the initial particle beam. The obtained results confirm that the position of the far detector in the T2K experiment corresponds to the first oscillation maximum for electron production processes and the first oscillation minimum for muon production processes. The developed formalism also allows one to describe the fading out of the neutrino oscillations, which arises due to the spread of momenta of the decaying π- and K-mesons. Figure 1. Normalized probability of the neutrino oscillation process with detection by a nucleon and muon production on spatial scales of (a) 1500 km and (b) 11000 km. Figure 3. Normalized probability of the neutrino oscillation process with detection by an electron on spatial scales of (a) 1000 km and (b) 50000 km.
COUPLED FIXED POINT THEOREMS FOR TWISTED (α,β )−ψ-EXPANSIVE MAPPINGS In this paper, we introduce a new concept of cyclic and cyclic ordered α−ψ expansive mappings and investigate the existence of a fixed point for the mappings in this class. Further, we shall derive coupled fixed point theorems in complete metric spaces. The presented theorems generalize and improve many existing results in the literature. Moreover, some examples are given to illustrate our results. INTRODUCTION Fixed point theory is one of the most powerful and fruitful tools in nonlinear analysis, since it provides a simple proof for the existence and uniqueness of the solutions to various mathematical models. Its core subject is concerned with the conditions for the existence of one or more fixed points of a mapping f from a topological space X into itself. The Banach contraction principle [1] is one of the most versatile elementary results in fixed point theory. Moreover, being based on iteration process, it can be implemented on a computer to find the fixed point of a contractive mapping. This principle has many applications and was extended by several authors. Among them, we mention the α − ψ-contractive mapping, which was introduced by Samet et al. [2] via α-admissible mappings. In this paper, we introduce a new concept of cyclic and cyclic ordered α − ψ expansive mappings and establish various fixed point theorems for such mappings in complete metric spaces. The presented theorems extend, generalize and improve many existing results in the literature. Some examples are given to illustrate our results. For the sake of completeness, we recall some basic definitions and fundamental results. Wang et al. [3] defined expansion mappings in the form of following theorem. Theorem 1.1. Let (X, d) be a complete metric space. If f : X → X is an onto mapping and there exists a constant k > 1 such that d( f x, f y) ≥ kd(x, y), for each x, y ∈ X. Then f has a unique fixed point in X. In 2013, P. Salimi et al. [10] introduced the concept of twisted (α, β )-admissible mappings in the following form: Definition 1.6. Let (X, d) be a metric space and f : X → X be a twisted (α, β )-admissible mapping. Then f is said to be a holds for all x, y ∈ X, where ψ ∈ Ψ. (iii) twisted (α, β ) − ψ-contractive mapping of type III, if there is p ≥ 1 such that Recently, Kang et al. [11] introduced the concept of twisted (α, β ) − ψ-expansive mappings in metric spaces as follows: Definition 1.7. Let (X, d) be a metric space and f : X → X be a twisted (α, β )-admissible mapping. Then f is said to be a holds for all x, y ∈ X, where ψ ∈ Ψ. In what follows, we present the main results of Kang et al. [11]. Theorem 1.8. Let (X, d) be a complete metric space and f : X → X be a bijective, twisted (α, β )−ψ-expansive mapping of type I or type II or type III satisfying the following conditions: (iii) f is continuous. Then f has a fixed point, that is, there exists z ∈ X such that f z = z. In what follows, Kang et al. [11] proved that Theorem 1.8 still holds for f not necessarily continuous, assuming the following condition: Here, we give some suitable examples to illustrate the results given by Kang et al. [11]. Example 1.10. Let X = R be endowed with the usual metric We prove that Theorem 1.9 can be applied to f . Proof. Let α(x, y) ≥ 1 for x, y ∈ X. Then x ∈ [−1, 0] and y ∈ [0, 1], and so f −1 y ∈ [−1, 0] and Otherwise α(x, y)β (x, y) = 0 and (1) trivially holds. 
Then f is a twisted (α, β ) − ψ-expansive mapping of type I and, by Theorem 1.9, f has a fixed point. Clearly, 0 and −11 6 are two fixed points of f . Example 1.11. Let X, d, α and β be as in Example 1.9 and f : We prove that Theorem 1.9 can be applied to f . Proof. Proceeding as in the proof of Example 1.10, we deduce that f −1 is a twisted (α, β )admissible mapping and that the conditions (i)and (ii) of Theorem 1.9 hold. Clearly 0 and −5 2 are two fixed points of f . Example 1.12. Let X = [0, ∞) be endowed with the usual metric d(x, y) = |x − y|, for all x, y ∈ X and f : X → X be defined by We prove that Theorem 1.9 can be applied to f . Proof. Proceeding as in the proof of Example 1.10, we deduce that f −1 is a twisted (α, β )admissible mapping and that the conditions (i) and (ii) of Theorem 1.9 hold. Moreover, if x, y ∈ (3) trivially holds. Then f is a twisted (α, β ) − ψ-expansive mapping of type III and, by Theorem 1.9, f has a fixed point. Clearly, 0 and 3 4 are two fixed points of f . To ensure the uniqueness of the fixed point in Theorems 1.8 and 1.9 Kang et al. [11] consider the following condition: CYCLIC RESULTS In this section, we show how is possible to apply the results of Kang et al. [11] for proving, in a natural way, some analogous fixed point results involving a cyclic mapping. Definition 2.1. Let (X, d) be a metric space and A, B be two non-empty and closed subsets holds for all x ∈ A and y ∈ B, where ψ ∈ Ψ. (iii) cyclic α − ψ-expansive mapping of type III, if there is p ≥ 1 such that holds for all x ∈ A and y ∈ B, where ψ ∈ Ψ. Now, we prove the following result for a continuous cyclic mapping. Theorem 2.2. Let (X, d) be a complete metric space and A, B be two non-empty and closed subsets of X such that α : X × X → [0, ∞) and f : A ∪ B → A ∪ B be a bijective, continuous and generalized cyclic α − ψ-expansive mapping of type I or type II or type III. If there exists Also for cyclic α − ψ-expansive mappings, we can omit the continuity condition as is shown in the following theorem: Theorem 2.3. Let (X, d) be a complete metric space and A, B be two non-empty and closed subsets of X such that α : X × X → [0, ∞) and f : A ∪ B → A ∪ B be a bijective and cyclic α − ψexpansive mapping of type I or type II or type III. Also suppose that the following conditions hold: Let {x n } be a sequence in Y such that α(x 2n , x 2n+1 ) ≥ 1 and β (x 2n , x 2n+1 ) ≥ 1 for all n ∈ N∪{0} and x n → x as n → ∞, then x 2n ∈ A and x 2n+1 ∈ B. Now, since B is closed, then x ∈ B and hence We deduce that all the hypotheses of Theorem 1.9 are satisfied with X = Y and hence f has a fixed point. CYCLIC ORDERED RESULTS By using the similar arguments to those presented in the previous section, we are able to obtain results in the setting of ordered complete metric spaces. and holds for all x ∈ A and y ∈ B with x y, where ψ ∈ Ψ. (ii) cyclic ordered α − ψ-expansive mapping of type II, if there is 0 < p ≤ 1 such that holds for all x ∈ A and y ∈ B with x y, where ψ ∈ Ψ. (iii) cyclic ordered α − ψ-expansive mapping of type III, if there is p ≥ 1 such that holds for all x ∈ A and y ∈ B with x y, where ψ ∈ Ψ. (3)) holds for all x, y ∈ Y . Let β (x, y) ≥ 1 for x, y ∈ X, then x ∈ A and y ∈ B with x y. It follows that f −1 x ∈ B and f −1 y ∈ A with f −1 y f −1 x, since f is decreasing. Therefore β ( f −1 y, f −1 x) ≥ 1, implies that, f −1 is a twisted (α, β )-admissible map- Then all the conditions of Theorem 1.8 are satisfied with X = Y and f has a fixed point in A ∪ B, say z. 
Since z ∈ A implies z = f −1 z ∈ B and z ∈ B implies z = f −1 z ∈ A, then z ∈ A ∩ B. Theorem 3.3. Let (X, d, ) be an ordered complete metric space and A, B be two non-empty and closed subsets of X. Let α : X × X → [0, ∞) and f : A ∪ B → A ∪ B be a bijective, cyclic ordered α − ψ-expansive mapping of type I or type II or type III.Also suppose that the following conditions hold: α(x 2n , x 2n+1 ) ≥ 1 and β (x 2n , x 2n+1 ) ≥ 1 for all n ∈ N ∪ {0} and x n → x as n → ∞, then x 2 n ∈ A and x 2n+1 ∈ B with x 2 n x 2n+1 . Since B is closed and by (iii), we deduce that x ∈ B and x 2 n x, We deduce that all the hypotheses of Theorem 1.9 are satisfied with X = Y and hence f has a fixed point. COUPLED FIXED POINT Now, we shall show that the coupled fixed point theorems in complete metric spaces can also be derived from these results. Before proving the result, we recall the following definition due to Bhaskar and Lakshmikantham [12]. Definition 4.1. Let f : X × X → X be a given mapping. We say that (x, y) ∈ X × X is a coupled fixed point of F if F(x, y) = x and F(y, x) = y. Moreover, from the condition (ii) of the hypotheses of the theorem, we find that there exists So, we have transformed the problem to the complete metric space (Y, ρ). Therefore, all the hypotheses of Theorem 1. Theorem 4.4. Let (X, d) be a complete metric space and F : X × X → X be a given bijective mapping. Suppose that there exists ψ ∈ Ψ and functions α, β : for all (x, y), (u, v) ∈ X × X and 0 < p ≤ 1. Suppose also that (i), (ii) and (iii) of Theorem 4.3 are satisfied. Then F has a coupled fixed point, that is, there exists (x * , y * ) ∈ X × X such that x * = F(x * , y * ) and y * = F(y * , x * ). Proof. For the proof of our result, we consider the mapping f given by (4) as a bijective mapping such that Also, consider the complete metric space (Y, ρ), where Y = X × X and From (12), we have Also, we define the functions η 1 , η 2 : Y ×Y → [0, ∞) as given by (8) and (9). Moreover, from the condition (ii) of the hypotheses of the theorem, we find that there exists Theorem 4.5. Let (X, d) be a complete metric space and F : X × X → X be a given bijective mapping. Suppose that there exists ψ ∈ Ψ and functions α, β : for all (x, y), (u, v) ∈ X × X and p ≥ 1. Suppose also that (i), (ii) and (iii) of Theorem 4.3 are satisfied. Then F has a coupled fixed point, that is, there exists (x * , y * ) ∈ X × X such that x * = F(x * , y * ) and y * = F(y * , x * ). Proof. For the proof of our result, we consider the mapping f given by (4) as a bijective mapping such that Also, consider the complete metric space (Y, ρ), where Y = X × X and From (16), we have Also, we define the functions η 1 , η 2 : Y ×Y → [0, ∞) as given by (8) and (9).
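Returning to the computational remark made in the introduction of this paper, that contraction-type principles lend themselves to iteration on a computer, the same is true for the expansion mappings of Theorem 1.1: if f is onto and expansive with constant k > 1, then f⁻¹ is a contraction with constant 1/k, and Picard iteration on f⁻¹ converges to the unique fixed point of f. The sketch below uses the hypothetical map f(x) = 3x + 1 on the real line purely for illustration; it is not one of the examples from the paper.

```python
def f(x):
    # Hypothetical onto expansion mapping on R: |f(x) - f(y)| = 3|x - y| >= k|x - y| with k = 3 > 1
    return 3.0 * x + 1.0

def f_inverse(y):
    # The inverse is a Banach contraction with constant 1/3
    return (y - 1.0) / 3.0

def fixed_point(x0=10.0, tol=1e-12, max_iter=200):
    """Picard iteration on f_inverse; converges to the unique fixed point of f."""
    x = x0
    for _ in range(max_iter):
        x_next = f_inverse(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

z = fixed_point()
print(z)              # ~ -0.5, the unique fixed point
print(abs(f(z) - z))  # ~ 0, confirming f(z) = z
```

The iteration converges to z = -1/2, and the final check confirms that f(z) = z, as Theorem 1.1 guarantees.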
WOMEN'S ENTREPRENEURIAL INTENT IN BURKINA FASO In Burkina Faso, women make up nearly 52% of the population, but they account for less than 20% of business start-ups. This paper seeks to identify factors that explain the entrepreneurial intent of Burkinabe women. It is based on socio-cognitive career theory and a quantitative approach involving a sample of 935 women. The results of the multinomial logistic regression show that five variables (perception of skills, fear of failure, education level, household size and household income level) explain women's intention to start a business with as a reference modality (1: having the intention). The main contribution is of a methodological nature, and through the choice of reference modalities allows for greater precision on the influence of the variables and the categories or sub-groups of women influenced. For the latter, an individual generally tends to choose activities for which he or she believes he or she has the necessary skills and abilities. 66.5% of Burkinabè entrepreneurs feel they have experience in the current activity of their enterprises before the creation of their business. (Dialla, 2004). Similarly, 57.3% of creators mentioned as their motivation for creating the desire to use the experience they had previously acquired on their own account (Ouédraogo, 1999). Entrepreneurial self-efficiency, a concept that can be equated with perceived feasibility (Shapero and Sokol, 1982) or perceived behavioural control (Ajzen, 1991) is very important in explaining entrepreneurial intent (Barbosa et al, 2007;Zhao et al, 2005). Several works (Kolveired, 1996 In the context of this work, the perception of competences refers to the knowledge, skills and experience deemed sufficient by an individual to start a business. Thus we can formulate the following hypothesis. H1. The perception of skills to start a business positively influences women's intention to start a business in Burkina Faso. Fear of failure (Fearfail) and women's entrepreneurial intent: In some cultures, the relationship to failure is fatal. The failure of an entrepreneurial experience is often perceived as the failure of the entrepreneur himself (Liles, 1974) and as a result he loses his legitimacy, credibility and reputation as an entrepreneur. It is equated with personal incompetence that is severely sanctioned at all levels: psychological, financial, social, family, legal and professional (Diamane and Koubaa, 2015). In a study carried out in Algeria on the influence of individual and environmental factors, 63% of people agree that fear of failure prevents them from starting a business (Benredjem, 2009). Song Naba and Toé (2015) estimate that, in general, the fear of failure is very low in African countries. It is 23% in Cameroon, 13% in Uganda, 14% in Botswana and 25% in South Africa. However, cautionary behaviour is more frequent among women than among men, but the proportion varies according to age and level of education (Bernard et al, 2013). In the same vein, Iyer (1995) claims that men interviewed in his intention survey are less afraid of failure and more confident than women. Thus, the following hypothesis is formulated. H2. Fear of failure negatively influences women's entrepreneurial intent in Burkina Faso. Women's age and entrepreneurial intent:-Previous studies clearly highlight age as an important factor in determining entrepreneurial intention (Brockhaus, 1982;Reynolds, 1995;Rotefoss&Kolvereid, 2005). 
This relationship is curvilinear, with a peak around the age of 35, which is even stronger for women. However, in some countries, such as Spain, the opposite effect is even observed, with women over the age of 55 showing an increased propensity to start a business. As a result of the opportunities provided by higher education, women with higher levels of education are less exposed to necessity entrepreneurship (Lima et al, 2014). Extensive data from the United States indicate that groups with lower levels of education show less interest in an entrepreneurial career (Reynolds, 1995;Reynolds and Miller, 1990). However, the effect of education level does not appear to be linear. In this study we formulate the following hypothesis 4. H4. The level of education of Burkinabè women positively influences their entrepreneurial intention. Mediatisation (Social Valorisation) of entrepreneurship and women's entrepreneurial intent: Role modelling and research to date suggests that women in particular require greater exposure to the success of women entrepreneurs (McMurray, 2001). Scott and Twomey (1988) have shown that repeated exposure to role models is likely to be a key contributor to the genesis of positive behavioural intent. The social valorisation of entrepreneurship often assimilated to the context (environment) plays an important role in the development and pursuit of new opportunities (Audretsch, 2002). Context or environment can be understood as everything that is situated around the individual and can influence him or her. The social valorisation of entrepreneurship is understood through mediatisation (Nbmedia). In this sense, Amari et al (2014), consider that the desire to start a business comes from the individual himself, but refers to the social, family, political, economic and cultural environment. Basu and Virick, (2008; Linan and Chen, 2006; etc.) have shown that this variable significantly explains intention. This leads us to formulate the following hypothesis5. Women's income level and entrepreneurial intent: The initial capital of a new creation comes from personal income and family equity in general: social capital, human capital, financial capital and survival capital (Baccari E and Maoufoud S, 2008, Anderson and Miller, 2003). Hundley (2006) has shown that family income is positively correlated with the rate of self-employment. The same type of positive relationship between income levels and the self-employment rate was also observed among youth by Dunn and Holtz-Eakin (2000). In addition, Erkko and Zoltan (2010) found a positive relationship between an individual's family income and their aspirations to grow their business. However, in a comparative study in Brazil between two income classes, Liman et al (2014) find that low-income (lower class) women have more self-efficacy and a higher intention rate than their higher class counterparts. We believe that income level can affect the level of intention and make the following hypothesis 6. Household size and women's entrepreneurial intent: Women starting their own businesses are often forced to cope with family obligations (Bonnetier, 2005). As Ronsen (2014) shows, it is not only the presence of children in the family that should be considered when studying women's entrepreneurship, it is indeed the whole family framework that will influence the woman's decision to start a business, mainly the dependants and spouse. 
Fofana et al. (2020) argue that motherhood and the associated family burdens have less and less influence on the motivation of women in their entrepreneurial initiatives. Their study shows that family size is not really relevant in explaining intention. In a similar vein, Bernard et al. (2013) conclude that having one or more children does not play a significant role in being an entrepreneur, nor is it significant for the survival rate of businesses started or taken over by women. While women entrepreneurs perceive entrepreneurship as a tool to adjust their careers to their family duties (Fofana et al., 2020), it is clear that these duties increase with family size. Thus we formulate the following hypothesis 7. Research Methodology:- This section presents the approach of the research we conducted. Through this study, we seek to identify the factors that explain women's entrepreneurial intent in Burkina Faso. The dependent variable has three modalities (Yes = 1; No = 0; Don't know = -1) in response to the question of whether the individual intends to start a business. The explanatory variables are a mixture of binary and numerical variables. When the dependent variable has several unordered categories (K > 2), we are in the context of multinomial logistic regression. Several models (probit, logit, etc.) are often used in this case. The choice between the two models is difficult to justify from a theoretical point of view and depends on the application, and there is practically no difference between the two distributions (Aurier and Mejia, 2014). The choice of the logistic distribution is simply motivated by its simplicity within this framework. A very interesting property of this regression is that it allows the estimation of an odds ratio, which provides information on the strength and direction of the association between the explanatory variable and the variable to be explained. Multinomial logistic regression consists in designating a reference category, for example the last one (in our case, those who intend to start a business), and in expressing each logit (or log-odds) of the (K-1) remaining modalities with respect to this reference using a linear combination of the predictive variables. The objective is to model the probability that an individual i belongs to modality k of Y. We have a dependent variable Y with K modalities and we try to model the probabilities P(Y_i = k), k = 1, ..., K-1, i = 1, ..., n. The approach consists of setting a reference modality, for example modality K, and modelling the probabilities P(Y_i = k) according to log[P(Y_i = k) / P(Y_i = K)] = X_i β_k + ε, where X_i = (x_i1, ..., x_ip). Women's intention to start a business or not is thus modelled on the basis of a regression model whose variable to be explained, Y, is polytomous. X is the vector of the explanatory variables associated with the intention to create or not to create a business and β_k is the vector of the associated coefficients. The vector of variables is composed of: perception of skills (Suskill), fear of failure (Fearfail), age (Age), level of education (Bfreduc), media coverage (Nbmedia), income level (Bfhhinc), and household size (Hhsize). ε is the error term. Sample: The data used in this work are taken from the Global Entrepreneurship Monitor (GEM) Burkina survey database conducted in 2015. This survey covered 2,850 individuals, respecting the distribution of the Burkinabe national population by age and gender. As our study focused on women's entrepreneurial intent, we initially eliminated men from the sample. This left 1147 women (out of 2326 individuals).
We then eliminated the women who did not respond to the three response modalities on intention (i.e. yes, no and don't know). Finally, our analysis focused on a sample of 935 women. The average household size is 9.8 people. 37, 64% had an annual income between 0 and 364 US dollars and only 9% had an income above 1820 US dollars. 36.14% are between 25 and 34 years old and 6.31% are between 55 and 64 years old. 65.77% of the sample did not go to school, 10.3% have secondary education and only 0.11% have higher education. Analysis of the data: We ran a multinomial logistic regression to determine the relationship between the independent variables and the probability of intending to go into business. The ability of the logit model to process and propose an interpretation of the coefficients for the explanatory variables is very interesting. The Stata/SE14.0 software was used for this purpose. To interpret the results we used odds ratios. The choice of the reference modality mainly affects the reading of the coefficients. The reference modality defined by the software is 1. As the initial choice of the reference modality was not restrictive, we have subsequently changed, characterising in our case the modality (0) in relation to (-1). There is no need in this case to repeat the regressions. We obtain the logit (logarithm of the odds) between any 2 modalities of the dependent variable by simple differentiation. Research Results And Discussion:- The following table1 presents the results of the logistic regression. The tests are significant at the threshold of 1% (3 stars); 5% (2 stars) and 10% (1 star). On the other hand, household size and level of household income are not significant in the case of modality (-1) i.e. women who, when asked if they intend to, answered "don't know" and the level of education is also not significant in the case of women who chose modality 0 (no) compared to the reference modality 1 (yes). By characterising the modality (0) in relation to (-1) the following result is obtained: Link between perception of skills and intention to start a business (Suskill): The regression results show that women who choose modality -1 feel less competent compared to those who choose modality 1. Otherwise women who say they do not know if they intend to start a business in the next 6 months perceive their skills less than those who say they intend to start a business. In addition, women who do not intend to go into business feel less competent than those who do. Finally, by characterising the modality 0 by -1, we discover that the more women's perception of skills (Suskill) increases, the more they choose 0 in relation to (-1). This result shows that the more women perceive their skills, the more precise they are in their choice. And that the choice of intention increases with the level of perceived competence. This result confirms SakolaDjika (2019), who finds that in Burkina Faso, an individual who perceives entrepreneurial skills is 1.75 times more likely to have entrepreneurial intentions than one who does not. As a reminder, this variable is understood through experience in the field of activity among others. These results are consistent with those of Wilson et al (2009), Giacomin et al (2010) who found that the more experience an individual has in a trade, the more competent they feel to practice the trade and therefore to undertake in that field. Fear of failure: The more women are afraid of failure, the more they choose modality -1 (don't know) over modality 1. 
Then, the more they fear failure, the more they choose not to have the intention in relation to the reference modality (Yes: have the intention). In other words, an increase in the fear of failure makes women more likely to choose not to have the intention rather than to have the intention. By characterising the modality 0 in relation to -1, we realise that the more the fear of failure increases, the more women choose 0 in relation to (-1). Moreover, the effect of this variable is more exacerbated in (0 vs. -1) than in (0 vs. 1). This means that the more women apprehend their fear of failure, the more visibility they have regarding their choice, in this case it is to have no intention. This result is consistent with that of Diamane and Koubaa, (2015) and Benredjem, (2009) among others. It is also justified by the fact that cautious behaviour is more frequent among women than among men (Bernard et al, 2013). Age : The results of studies on the influence of age are contradictory. In the present study, age does not explain the probability of intending to start a business among women in Burkina Faso. The level of education: The higher their level of education, the more women choose modality -1 with reference to modality 1. Then, the higher their level of education, the less they choose 0 with reference to (1). Otherwise, the higher their level of education, the more women say they do not know if they intend to start a business compared to the choice to have the intention. Indeed, it should be pointed out that the higher their level of education, the more women have the opportunity to access salaried jobs. Therefore the choice of entrepreneurship would be a last resort (after unsuccessful attempts to get a job in the civil service). This result confirms Bayala (2015) who finds that 82% of creators have not attended school or have less than the Baccalaureate and that the rate of creation decreases with the level of education. On the other hand, this variable is not significant in the case of women who choose not to have the intention (modality 0) compared to those who do (modality 1). Media coverage: It follows from the table of results that the media coverage of successful new businesses does not influence the likelihood of women's entrepreneurial intent in Burkina Faso regardless of the modality of reference. However, this can be put into perspective when we know that the Burkinabe population is essentially rural and that in this environment access to the media is not necessarily the most common thing, especially among women. Household size and intention:- As the size of their household increases, they are more likely to be unintentional. In other words, those who do not intend have larger households than those who intend. This result can be explained by the demands of women's roles. Indeed, in Burkina Faso it is the woman who takes care of the house, as they say. The larger the size of the household, the less time she will have to devote to entrepreneurship. We can also see the effect of the pooling of resources. Indeed, the different members of the family each contribute in their own way to the running of the household. Thus the more members there are, the higher the contribution and the less the woman feels obliged to be entrepreneurial, given that women are more entrepreneurial out of necessity (Song-Naba and Toe, 2015). This reasoning is corroborated by the result on the link between household income and intention. 
Income level and intention: Finally, a higher household income encourages women not to intend to use the reference modality. In other words, the probability of women's entrepreneurial intention decreases as their income increases. This result suggests that women choose to be entrepreneurial when household income is insufficient to support the family. It confirms the authors' work that women undertake more out of necessity than opportunity. In a similar vein, Liman et al (2014) find that lowincome women have more self-efficacy and a higher intention rate than their higher income class counterparts. Conclusion:- This work focused on the factors that explain women's entrepreneurial intent in Burkina Faso. A multinomial logistic regression was performed on a sample of 935 women from the GEM (2015) survey data of Burkina Faso. Five variables explain women's intention to start a business with as a reference modality (1: having the intention). Firstly, a reading of the 0 versus 1 modality shows that the more competent women feel, the more they choose the intention to go into business rather than the opposite. On the other hand, the more their fear of failure increases, the larger the size of their household, and the more their household income increases, the less they choose the intention to go into business. Then by switching to modality -1 versus 1 we find that women who do not know whether they intend or not, perceive their entrepreneurial skills less than those who intend. They express a greater fear of failure compared to those who have the intention. The higher their level of education, the more they say they do not know. Finally, by characterising the modality 0 in relation to -1 we can see that the more the level of women's perception of competence increases and the more the fear of failure increases, the more they choose not to have the intention (0) in relation to (-1). The effect of the fear of failure is more exacerbated in (0 vs. -1) than in (0 vs. 1). These results, on the one hand, add to the knowledge about women's entrepreneurship, particularly women's entrepreneurial intent in Burkina Faso. The main contribution is methodological, and through the choice of modality allows for more precision on the influence of variables and the categories or sub-groups of women influenced. In managerial terms, this makes it possible to design and provide specific support for the needs of the sub-groups. Indeed, this work offers levers to build on in order to generate more intention, with the understanding that it is an important step towards creation. For example, support should be more oriented towards competence building (training, internships, etc.), promoting entrepreneurship as a desired career and not "suffered 3 ". However, the results of our study have limitations. Firstly, they concern the failure to take into account other variables such as the place of residence (urban or rural), occupation at the time of the survey (employed or unemployed), etc. These limitations are not always taken into account. Taking these variables into account in the framework of another study will undoubtedly contribute to a better understanding of the subject.
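As a concrete sketch of the estimation workflow described in the methodology above (a multinomial logit with a chosen reference modality, odds-ratio interpretation, and a switch of reference modality obtained by simple differencing of coefficients), the snippet below fits such a model on synthetic data. The variable names mirror the study's, but the values are random; statsmodels is used here only as one convenient tool, whereas the original analysis was run in Stata.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 935  # same sample size as the study, but the rows below are synthetic

# Hypothetical covariates named after the study's variables (values are random)
X = pd.DataFrame({
    "Suskill":  rng.integers(0, 2, n),   # perceives skills to start a business (0/1)
    "Fearfail": rng.integers(0, 2, n),   # fear of failure (0/1)
    "Hhsize":   rng.poisson(9.8, n),     # household size
    "Bfhhinc":  rng.integers(1, 6, n),   # household income bracket
})
# Intention coded 0 = no, 1 = yes, 2 = "don't know" (standing in for the paper's -1;
# statsmodels requires non-negative category codes)
y = rng.integers(0, 3, n)

model = sm.MNLogit(y, sm.add_constant(X))
result = model.fit(disp=False)

# statsmodels takes the lowest code (here 0 = "no") as the reference outcome;
# exponentiating the coefficients gives odds ratios relative to that reference.
params = np.asarray(result.params)  # shape: (n_regressors, K-1)
print(np.exp(params))

# Log-odds of "don't know" (2) versus "yes" (1) follow by simple differencing of the
# two coefficient vectors, as noted in the analysis subsection; no re-estimation is needed.
beta_dontknow_vs_yes = params[:, 1] - params[:, 0]
print(np.exp(beta_dontknow_vs_yes))
```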
An Innovative Approach to Integrated Carbon Mineralization and Waste Utilization: A Review Carbon dioxide (CO2) emission reduction in industry should be a portfolio option; for example, the carbon capture and utilization by mineralization (CCUM) process is a feasible and proven technology where both CO2 capture and alkaline waste treatment occur simultaneously through an integrated approach. In this study, the challengeable barriers and significant breakthroughs of CCUM via ex-situ carbonation were reviewed from both theoretical and practical perspectives. Recent progress on the performance of various carbonation processes for different types of alkaline solid wastes was also evaluated based on CO2 capture capacity and carbonation efficiency. Moreover, several process intensification concepts such as reactor integration and co-utilization with wastewater or brine solution were reviewed and summarized. In addition, the utilization of carbonated products from CCUM as green materials including cement, aggregate and precipitate calcium carbonate were investigated. Furthermore, the current status of worldwide CCUM demonstration projects within the ironand steelmaking industries was illustrated. The energy consumption and cost analyses of CCUM were also evaluated. Carbon Capture and Utilization by Mineralization (CCUM) Carbon capture and utilization by mineralization (CCUM) is a promising and feasible technology because the CO 2 generated from combustion of fossil fuels or carbon-related chemicals can be directly fixed as mineral carbonates.As shown in Fig. 1, since the carbonates are naturally occurring minerals and possess the lowest free energy of formation, the carbonation product is safe and stable over geologic periods of time, resulting in negligible release of CO 2 to the environment.The theoretical consideration of CCUM is based on the so-called "accelerated carbonation" process, whereby the gaseous CO 2 is reacted with alkalineearth metal-oxide (e.g., CaO and MgO) and converted into carbonate precipitates in the presence of aqueous environments.Appropriate feedstock sources for carbonation reaction include (1) natural ores such as serpentine and wollastonite and (2) alkaline solid wastes such as iron-and steelmaking slag and municipal solid waste incineration fly ash (MSWI-FA) (Eloneva et al., 2012;Teir et al., 2009;Huntzinger et al., 2009;Renforth et al., 2011;Sanna et al., 2012a;Nduagu et al., 2013;Olajire, 2013;Said et al., 2013;Pan et al., 2014).However, accelerated carbonation using natural ores creates its own environmental legacy because of the massive mineral requirements and associated scale of mining (Gerdemann et al., 2007).Therefore, the size of the mining operation is believed to comprise the most significant economic and environmental barrier to be overcome if any large-scale process is implemented. 
Accelerated carbonation can be accomplished either in-situ (underground in geologic formation) or ex-situ (aboveground in a chemical processing plant) (Pan et al., 2012;International Energy Agency (IEA), 2013a;Sanna et al., 2014).In the case of in-situ carbonation, CO 2 is transported to underground igneous rocks, typically basalt, and is permanently fixed within the hosting rocks as solid carbonates (Kelemen and Matter, 2008;Muriithi et al., 2013).In the case of ex-situ carbonation, a source of calcium-silicate feedstock (e.g., natural ores and alkaline wastes) is carbonated aboveground; for instance, carbonation at industrial sites, biologically mediated carbonation, and carbon mineralization in industrial reactors (Gerdemann et al., 2007;Pan et al., 2013a). Integrated Ex-situ Carbonation for Alkaline Wastes In the ex-situ carbonation approach, two branches have been developed: direct carbonation and indirect carbonation.Basically, direct carbonation occurs in one single step, while for indirect carbonation the mineral has to first be refined, and then the refined mineral is carbonated.Although ex-situ carbonation is not economically viable so far, relevant research is still active and attractive, since the raw materials required for carbonation are globally abundant. In this study, an innovative approach to ex-situ carbonation using industrial wastes including waste CO 2 gas, alkaline solid wastes and wastewater was proposed, as shown in Fig. 2. Through the carbonation process, the gaseous CO 2 in flue gas was fixed as solid carbonates, while the wastewater was neutralized to a pH value of 6-7.In addition, the physicochemical properties of carbonated solid waste can be upgraded, which makes it suitable to be utilized as green cementitious materials (Fernandez Bertos et al., 2004a).Carbonation could also act positively in the immobilization of heavy metals, such as Cd, Pb, and Cr, leaching from the alkaline solid waste by sorption in the newly formed products.The leaching of Pb, Zn, Cr, Cu, and Mo is markedly reduced upon carbonation for both APC and BA (Fernandez Bertos et al., 2004a;Arickx et al., 2006;Cappai et al., 2012;Santos et al., 2013b).Cd and Pb have a strong affinity with calcium carbonate and also form complexes with Fe and Al (hydr-)oxides (Rendek et al., 2006).Immobilization of Sb could also be achieved by combining with other processes (e.g., sorbent adding) during carbonation reaction (Cappai et al., 2012).Furthermore, carbonation has recently proved to be an effective way to improve the durability of concrete because relatively insoluble CaCO 3 is formed from the soluble Ca(OH) 2 in the concrete (Tsuyoshi et al., 2010).Therefore, improvements in the chemical and physical characteristics of treated residues after carbonation can facilitate its reuse in a variety of applications, such as construction materials. 
Objectives In this paper, the challenging barriers and significant breakthroughs of CCUM were reviewed from the perspective of both theoretical and practical considerations.Recent progress on the performance of various carbonation processes for different types of alkaline solid wastes was evaluated based on CO 2 capture capacity and carbonation efficiency.Moreover, several process intensification concepts such as physicochemical activations, co-utilization with wastewater or brine solution, and integration of novel reactors were discussed and summarized.In addition, the utilization of carbonated products as green materials including cement, aggregate, and precipitate calcium carbonate (PCC) were reviewed.Furthermore, the current status of CCUM demonstration projects within the iron-and steelmaking industries in the U.S., France, Australia and Taiwan were illustrated.The energy consumption and cost analyses of CCUM were also evaluated. Properties of Alkaline Wastes Alkaline wastes are suitable and can be used as alternative reactants in carbonation to natural ores because Fig. 2. Integrated carbon mineralization and waste utilization through accelerated carbonation: an innovative CCUM process.they are abundant, cheap and usually generated from nearby CO 2 emission points in many industries.However, it is difficult to directly compare different wastes for mineral CO 2 sequestration because each waste has its own unique set of advantages and disadvantages, and different capacity results are proposed and published in a variety of forms.Fig. 3 shows the relationship of CO 2 capture capacity (in terms of CaO and MgO contents) and hardness (in terms of Fe 2 O 3 and Al 2 O 3 contents) for different types of solid wastes.In general, the contents of CaO and MgO in ironand steelmaking slags were relatively higher than those in fly ash (FA) or bottom ash (BA).However, some steelmaking slag such as basic oxygen furnace slag (BOFS) is hard (due to high contents of Fe 2 O 3 and Al 2 O 3 ) and requires energy-intensive processing, which makes it challenging as viable sinks of CO 2 .Conversely, FA is relatively more promising, since it is a fine powder and already close at hand, which means the costs for transportation, extraction and crushing are minimal.The amounts of CaO and SiO 2 contents in BOFS increase remarkably with the decrease of particle size, while the Fe 2 O 3 fraction decreases significantly (Zhang et al., 2011;Pan et al., 2013b). In addition, wastewater or brine, which is a saline-based waste solution (total dissolved solid is generally more than 10,000 mg/L) produced from some industrial procedures such as oil and natural gas extraction (known as oil-field brines), could be used as the liquid agents in the carbonation reaction (Druckenmiller and Maroto-Valer, 2005;Uibu and Kuusik, 2009;Liu andMercedes Maroto-Valer, 2011, 2012).Most wastewater treatments use chemicals as a neutralizing agent to adjust pH and enhance metal precipitation.However, the use of chemicals carries with it a high economic and environmental cost because it is a "resource" and not a "residue."Under the appropriate conditions, CO 2 would dissolve in the brine solution to initiate a series of reactions that ultimately lead to the bonding of carbonate ions to various metal cations inherent in brine to form carbonate precipitates such as calcite (CaCO 3 ), magnesite (MgCO 3 ), and dolomite (CaMg(CO 3 ) 2 ). 
Types of Carbonation Processes There are two approaches to ex-situ carbonation: direct and indirect carbonation.Direct carbonation can be conducted in two ways, i.e., a dry gas-solid reaction or an aqueous reaction (gas-liquid-solid).The gas-solid carbonation, as described in Eq. ( 1), is the simplest method to mineralization.Ca/Mg-silicate (s) + CO 2 (g) → (Ca/Mg)CO 3(s) + SiO 2(s) (1) However, the reaction kinetics of the dry gas-solid reaction at ambient pressure and temperature is far too slow to be effective in CCUM on a wide-scale basis.The reaction can be slightly accelerated by either pre-treating feedstock (such as grinding process and thermal activation) to increase the reactive surface area, or possessing carbonation up to 500°C (Pan et al., 2012).Nevertheless, those treatments and processes are very energy intensive; therefore, the environmental benefits of carbonation might be easily offset.Moreover, the low capture efficiency is not currently viable on the industrial scale. The addition of water to the direct carbonation process (i.e., aqueous carbonation) can significantly increase the reaction kinetics because of the mobilization of ions in the reaction of carbonic acid with alkaline materials.Fig. 4(a) shows a schematic diagram of direct aqueous carbonation for CO 2 fixation and construction material production.The carbonation of alkaline solid waste was carried out with the direct contact of flue gas from a stack, in the presence of a liquid phase, typically using tap water.The wastewater generated in the same industries also can be introduced for the carbonation process to avoid the use of freshwater resources.After carbonation, the slurry is separated into (Teir et al., 2007;Reddy et al., 2010;Sanna et al., 2012a;Abo-El-Enein et al., 2013;Dri et al., 2013;Hekal et al., 2013;Muriithi et al., 2013;Santos et al., 2013b, c;Dri et al., 2014;Jo et al., 2014;Reynolds et al., 2014;Salman et al., 2014;Santos et al., 2014;Ukwattage et al., 2014).carbonated solid wastes and liquid solution.The separated liquid solution can be heated by a heat exchanger with relatively hotter flue gas, and recirculated into the reactor for the next carbonation.It was noted that no excessive heat was required in aqueous carbonation for enhancing the reaction; however, the solution can be moderately heated to 60-80°C to achieve a higher carbonation conversion of solid wastes compared to that at ambient temperature (Chang et al., 2012a(Chang et al., , 2013a)). On the other hand, indirect carbonation, as shown in Fig. 4(b), involves several steps: (1) extraction of metal ions from alkaline solid wastes, (2) liquid-solid separation, and (3) carbonation of the filter solution.Eq. ( 5) shows the first step of indirect carbonation (i.e., so-called extraction), where calcium ions are extracted for example using acetic acid (CH 3 COOH) from mineral crystals of CaSiO 3 .The extracted solution is filtered through a fiber membrane to separate the mother solution (i.e., rich in calcium ion) and extracted solids (i.e., calcium-depleted SiO 2 particles).After that, the filtered solution is carbonated with a CO 2 source where the precipitation reaction occurs, as shown in Eq. 
(6).Several effective extractants such as acetic acid and ammonium salts (e.g., CH 3 COONH 4 , NH 4 NO 3 , NH 4 SO 4 and NH 4 HSO 4 ) are commonly used in the extraction stage (Kakizawa et al., 2001;Dri et al., 2013;Jo et al., 2014;Santos et al., 2014).After carbonation, the end product is usually a pure carbonate (i.e., CaCO 3 and MgCO 3 ) because most of the oxides and hydroxides from the material have been extracted, and followed by direct carbonation of the oxides and hydroxides with the CO 2 .(5) Multistep indirect carbonation in the presence of additives can reach high carbonation efficiency under mild operating conditions within a short residence time (Jung et al., 2014a;Sanna et al., 2014).It was observed that high-purity spherical carbonates (e.g., vaterite) can be obtained by indirect carbonation (Jo et al., 2014).However, the requirement of an energy-intensive process for chemical regeneration is still a limiting factor for large-scale deployment.Research has indicated that energy and chemical costs could be reduced by carrying out the reaction between hydroxide and CO 2 at high pressure and temperature (i.e., 25 bar and 450°C), potentially making the hydroxide route technically achievable on an industrial scale (Nduagu et al., 2012). Table 1 presents the various approaches to ex-situ direct or indirect carbonation using alkaline solid wastes as reported in the literature.The overall CO 2 capture capacity of steelmaking slag such as BOFS, CODS and CCS was relatively higher than that of other solid wastes such as FA.However, it was noted that the energy requirement for crushing, heating, and stirring needs to be offset by the carbonation process exotherm to make the process economically viable in an industrial context (Huijgen et al., 2006, 2007, International Energy Agency (IEA), 2013a; Pan et al., 2013b).Since the reaction temperature would affect not only the leaching rate of calcium ions from solid wastes but also the rates of CO 2 dissolution and carbonate precipitation, it was observed that the appropriate temperature for overall CO 2 capture performance should be set at around 60-80°C in the cases of aqueous carbonation using SS and FA (Chang et al., 2012a, b;Ukwattage et al., 2014). In addition, utilization of the reacted materials should be implemented to couple the CO 2 reduction and waste utilization in industries (Pan et al., 2012).Although the CO 2 capture efficiency using direct carbonation was found Table 1.Performance evaluation of ex-situ carbonation using alkaline solid wastes in the literature.to be superior to that using indirect carbonation, the purity of the produced CaCO 3 precipitate using indirect carbonation was higher than that using direct carbonation. 
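Equations (5) and (6) referenced earlier in this subsection did not survive text extraction. For the acetic-acid route of Kakizawa et al. (2001) described here, the extraction and carbonation steps are conventionally written as below; this is a reconstruction from the surrounding description and the cited literature, not a verbatim copy of the original equations.

```latex
% Extraction of calcium from a wollastonite-type feedstock with acetic acid (cf. Eq. 5):
\mathrm{CaSiO_{3(s)} + 2\,CH_3COOH_{(aq)} \longrightarrow Ca^{2+} + 2\,CH_3COO^{-} + SiO_{2(s)} + H_2O}
% Carbonation of the filtered, calcium-rich solution with regeneration of the acid (cf. Eq. 6):
\mathrm{Ca^{2+} + 2\,CH_3COO^{-} + CO_2 + H_2O \longrightarrow CaCO_{3(s)} + 2\,CH_3COOH}
```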
Thermal Heat Activation (Pre-treatment) Thermal heat activation is commonly used as a pretreatment for solid wastes to remove chemically bound water.Especially for serpentines containing up to 13% of chemically bound water, the crystalline features are changed to amorphous ones following the decomposition of hydroxyl groups after heating to 600-650°C (Park and Fan, 2004;Li et al., 2009).It was also observed that the porosity and surface area of solid wastes increase, and the structural instability was created after thermal heat treatment, thereby promoting the rate of carbonation afterward (Park and Fan, 2004;Li et al., 2009).The drawback is that huge amounts of energy are required to achieve the high temperatures, which makes it impractical for use as a largescale treatment.Another new approach is utilizing the exothermic reaction that comes from mineral carbonation, which has been proved to be self-sufficient in terms of energy (Moazzem et al., 2013). Chemical Activation Using Acids and Bases Chemical activation utilizes the solvent, acid or base agents to weaken the Mg-bonds in an Mg-silicate structure.This allows the improvement of dissolution kinetics, thereby increasing the carbonation efficiency.Many chemical solutions such as ammonia, acetone and HCl have been evaluated in the literature (Maroto-Valer et al., 2005;Power et al., 2013;Jung et al., 2014b).Another approach is the "pH-swing" process used in indirect carbonation (Park and Fan, 2004).This process allows silicates to be dissolved at a low pH and precipitates carbonates at a higher pH, resulting in more control of the carbonation process.However, it is much more expensive than traditional carbonation. Co-utilization with Wastewater or Brine Solution Another approach is utilizing the wastewater (or brine solution) to enhance the rate of calcium ion leaching from solid wastes or gaseous CO 2 dissolution into the liquid phase.The carbonation conversion was found to be higher in the wastewater (brine)/solid residue system than in the water/solid (Nyambura et al., 2011;Chang et al., 2013b;Pan et al., 2013b).The leaching rate and capacity of calcium ions from steelmaking slag in metal-working wastewater was greater than that in tap water.Both organic and inorganic (non-acid) ionic species in wastewater (e.g., sodium and/or chloride ions) can promote the dissolution of silicate minerals (Beard et al., 2004;O' Connor et al., 2005;Krevor and Lackner, 2011;Jo et al., 2012;Chang et al., 2013a;Pan et al., 2013b).The findings reported by Jo et al. (2012) show that a greater reactivity of the calcium-bearing complex towards chloride than other ions could thereby result in greater rates of mineral dissolution in the presence of chloride.The selection criteria of wastewater (or brine) for Fig. 5. Different approaches to process enhancement of ex-situ direct carbonation for alkaline solid wastes.carbonation were as follows: first, the pH of the solution should be over 9.0 where CO 3 2-dominates because the precipitation of carbonate is favored under a basic condition.Second, the selected solution should contain neither bicarbonate nor carbonate ions (Druckenmiller and Maroto-Valer, 2005;Liu and Maroto-Valer, 2012). 
Biological Enhancement (Bio-Carbonation)
Microorganisms, e.g., anaerobic and aerobic bacteria, can positively affect carbonation by directly or indirectly enhancing the solubility and dissolution of minerals (Huang and Tan, 2014). Another approach is to utilize algae and cyanobacteria to perform photosynthesis, using sunlight as the energy source, to convert gaseous CO2 (Huang and Tan, 2014; McCutcheon et al., 2014). The absorption of cations by the net-negative cell wall increases the cation concentration within the cell, which can be used to facilitate mineral precipitation. Specially designed "carbonation ponds" or "basins" with a high alkalinity would have to be built using a natural cyanobacteria-dominated consortium in order for the photoautotrophs to thrive and precipitate carbonates (McCutcheon et al., 2014). In addition, this process can serve as a silicon sink through the formation of mineralized cell walls (i.e., frustules) from [SiO2·nH2O] in the bioreactor.

Furthermore, "enzymatic carbonation" relying on carbonic anhydrase (CA) can be effective, especially when the supply of CO2 limits the rate of carbonation, whether in an industrial environment or in open-air use. In addition, the CA enzyme enhances the carbonation of Ca- and Mg-bearing materials (Li et al., 2013a, b). A CA enzyme-based system has been developed inside a membrane that was able to remove 90% of the CO2 supplied (Figueroa et al., 2008). However, the instability and very high cost of CA may be challenges for practical applications (Sanna et al., 2014).

Combination with Novel Processes
Since the aqueous carbonation of alkaline solid wastes is believed to be a diffusion-controlled reaction, intensification of the mass transfer among phases is essential to improve the CO2 capture capacity and reduce energy consumption and operating costs. A slurry reactor incorporating ultrasound vibration has been introduced to accelerate the precipitation rate of calcium carbonate via ultrasonic irradiation (Rao et al., 2007, 2008; Santos et al., 2012, 2013a). The results indicate that the efficiency of physical mixing, particle breakdown and removal of passivation layers increased with sound waves at frequencies in the range of 16-100 kHz. Therefore, a better conversion can be achieved in a shorter time compared to that without ultrasound; for instance, the carbonation conversion of combustion ashes increased from 27% to 83% with ultrasound for 40 min (Rao et al., 2007).

Another approach is the rotating packed bed (RPB) reactor, which has been successfully introduced for the carbonation of solid wastes (Chang et al., 2012a, 2013a; Pan et al., 2013a, b, 2014) in a so-called "high-gravity carbonation" process. An RPB can provide a mean acceleration of up to 1,000 times the force of gravity, thereby leading to the formation of thin liquid films and micro- or nano-droplets (Chen et al., 2004, 2010; Cheng and Tan, 2011; Kelleher and Fair, 1996; Lin and Chen, 2008; Wang, 2004; Yu et al., 2012). Both the CO2 removal efficiency and the carbonation conversion of steelmaking slag in an RPB were observed to be greater than those in an autoclave or a slurry reactor (Chang et al., 2012b; Pan et al., 2013a, b). High CO2 removal efficiency can be achieved with a retention time of less than 1 min under ambient temperature and pressure conditions (Pan et al., 2013a).
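The "high gravity" level quoted above is simply the centrifugal acceleration a = ω²r expressed in multiples of g. A minimal sketch; the rotor speeds and radius below are illustrative and not taken from the cited studies:

```python
# Sketch: mean centrifugal acceleration in a rotating packed bed (RPB),
# a = omega^2 * r, in units of g. Speeds and radius are assumptions.
import math

G = 9.81  # m/s^2, standard gravity

def high_gravity_factor(rpm: float, radius_m: float) -> float:
    """Centrifugal acceleration at radius r for a rotor speed in rpm, in g."""
    omega = 2 * math.pi * rpm / 60  # angular speed, rad/s
    return omega**2 * radius_m / G

for rpm in (600, 1200, 2400):
    print(f"{rpm:>5} rpm, r = 0.20 m -> {high_gravity_factor(rpm, 0.20):7.0f} g")
```

The sweep shows that accelerations of the order of 1,000 g are reached at a few thousand rpm for a rotor of modest radius, which is what drives the thin-film mass transfer described above.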
Summary
In this section, the principles and applications of several novel processes were illustrated, as summarized in Table 2. Alkaline wastes, including wastewater (e.g., cold-rolling mill wastewater) and solid wastes (e.g., steelmaking slag) from steel manufacturing plants, could be used to sequester meaningful quantities of CO2, especially if the wastes are generated near a point source of CO2 emission, although the CO2 sequestration potential remains marginal on the global scale of CO2 emissions. In other words, integrated waste treatment, i.e., of CO2, wastewater, and steelmaking slag, should be promoted and implemented in industry for sustainable development. It was observed that mineralization of CO2 by accelerated carbonation of alkaline wastes has the potential not only to sequester CO2 but also to upgrade the physicochemical properties of the waste streams.

Cement in Concrete
Concrete is made of roughly 80% aggregate (sand and gravel), 10-15% cement, and 5-10% additives, water, and air. The worldwide cement industry is increasingly turning to the use of alkaline solid wastes, such as blast furnace slag (BFS) and FA, as supplementary cementitious materials (SCM) because of increasing petroleum prices and government regulations, as well as a lack of raw materials and the increasing demand for concrete and cement. Processes using fresh fine BFS or fly ash as alternative binders in place of Portland cement in concrete have been developed and widely used, especially in the U.S., to reduce the energy consumption and CO2 emissions associated with concrete production. However, the challenges resulting from the negative effects of this substitution, which are mainly related to early strength development, remain.

Engineering experience shows that with 50% clinker replacement with fresh FA, the early strength of concrete is reduced dramatically (Crow, 2008). Although FA usually replaces no more than 25% of the Portland cement in concrete, research conducted at Montana State University successfully demonstrated the use of 100% fly ash concrete with glass aggregate to construct a building (Hasanbeigi et al., 2012). Furthermore, not every alkaline solid waste can be utilized in concrete directly. Several types of alkaline solid wastes, such as BOFS, typically contain 3-10% free-CaO and 1-5% free-MgO, which would lead to destructive expansion of hardened cement-BOFS paste (Zhang et al., 2011); this has limited their application as aggregates or cements in the past. The carbonation process is capable of permanently mineralizing carbon in the form of either fine or coarse aggregates or an SCM to meet the growing green product market as a carbon-negative material. In other words, when alkaline solid wastes are carbonated via direct carbonation, the products can be used for a broad range of applications such as construction aggregate (large particles) and cement (fine particles), without the potential presence of free-CaO and free-MgO. Accelerated carbonation of alkaline wastes could be a potential commercialization route in which the produced carbonates are used as a substitute for components of cement (International Energy Agency (IEA), 2013b). The suitability of the calcareous material as a partial replacement for cement clinker has been documented in some non-structural applications in the U.S., but its suitability as a cement ingredient in structural concrete applications (Zaelke et al., 2011) has not yet been demonstrated publicly. It was also reported that CKD
and FA have been successfully used to produce a green Portland ash (Shah, 2004; Zaelke et al., 2011). Currently, the challenges in the application of this process include (1) the effect of impurities on performance, (2) acceptance of the product by the cement industry, (3) the ability to capture large amounts of CO2, (4) energy requirements, (5) finding an appropriate water source, (6) production of alkalinity, and (7) having sufficient demand for the end product.

Aggregate in Concrete
In addition to "green" cement, carbonated alkaline solid waste can function as construction aggregate to partially replace sand, gravel, and crushed stone. Many industrial waste materials can potentially be used as economical and environmentally friendly sand substitutes for cementitious building products. There are two types of aggregate: (1) coarse aggregate (generally ranging from 9.5 mm to 37.5 mm), including gravel and crushed stone, and (2) fine aggregate (usually smaller than 9.5 mm), including sand and crushed stone. Most construction aggregate is used to strengthen composite materials, such as concrete and asphalt concrete, for a myriad of uses ranging from railroad bases to housing foundations. Monkman et al. (2009) evaluated the use of carbonated ladle slag as a fine aggregate in zero-slump press-formed compact mortar samples and compared them to similar samples containing control river sand. The 28-day strengths of the mortars made with the carbonated slag sand were comparable to those of the normal river sand mortars (Monkman et al., 2009), which clearly indicates the successful use of carbonated ladle slag as a fine aggregate in mortar samples simulating applications in precast products, such as masonry units, paving stones, and hollow-core slabs, which could be further treated by carbonation curing. Moreover, the carbonated particles became coarser due to agglomeration, which should be beneficial for use in aggregate manufacturing (Fernandez Bertos et al., 2004b).

In addition, recycled concrete aggregate (RCA), collected from old roads and buildings, has shown promise as an alternative to natural aggregate (NA). Table 3 presents the various properties of RCA for the replacement of NA in construction concrete. While RCA and NA have similar gradation, RCA particles are more rounded in shape and have more fine particles broken off in L.A. abrasion and crushing tests (McNeil and Kang, 2013). This suggests that the use of RCA in structural concrete should be viable, because the performance of RCA concrete beams was still within standard specifications (Sagoe-Crentsil et al., 2001; Shayan and Xu, 2004; McNeil and Kang, 2013; Behera et al., 2014). Furthermore, creep can be minimized by incorporating FA as either an additive or a replacement in concrete in the case of RCA utilization (Behera et al., 2014).

Precipitated Calcium Carbonate (PCC)
Precipitated calcium carbonate (PCC) is a product of indirect carbonation. Different crystal morphologies and shapes of PCC can be produced and utilized in various applications in the construction, oil, plastics, paper, and pharmaceutical industries (Teir, 2008; Eloneva et al., 2010, 2012). Approximately 75% of the produced PCC is expected to be used in the paper industry (Teir et al., 2005; Zevenhoven et al., 2008), where PCC can serve as a replacement for more expensive pulp fiber and optical brightening agents to improve the quality and printing characteristics of paper (e.g., smoothness, gloss, whiteness, opacity, brightness, and color).
PCC can also be used to replace some of the cellulose as a filler and coating pigment in plastics, rubbers, paints, and papers (Eloneva et al., 2009, 2010). For these applications, several physicochemical properties of PCC play an important role, including particle size distribution, specific surface area, morphology, polymorphism and purity (Sanna et al., 2012b). In addition, PCC can be utilized as an additive and filler in construction materials. Companies in North America, the EU, and Australia are working on developing a similar process, which involves CO2 capture by bubbling flue gases through saline water to produce solid carbonates as an aggregate material in cement (International Energy Agency (IEA), 2013b).

Summary
The application of fresh steelmaking slag as an alternative to standard materials has been known for a number of years around the world. It has been used most often in asphalt mixtures, other layers of pavement structure, unbound base courses and embankments. Nonetheless, the use of alkaline solid wastes must comply with strict regulations, comprising civil-technical and environmental requirements. Several barriers have been encountered, including the volume expansion of blended materials (e.g., in the case of fresh BOFS) and concerns about environmental impacts and social acceptance. It has been shown that the free-CaO and Ca(OH)2 in steelmaking slags can be eliminated after carbonation, thereby preventing the expansion problem of the blended materials. In addition, several studies concluded that the mechanical properties of mortar blended with carbonated solid wastes were superior to those with fresh solid wastes. At the same time, the carbonated materials can meet the standards of construction engineering, providing positive benefits for practical applications.

CASE STUDIES
Pilot Study/Demonstration Project
Scale-up of the CO2 post-combustion process is possible without significant developments or costs (International Energy Agency (IEA), 2013b). However, process performance should be further improved by process integration, thereby reducing the energy requirement of the capture process. An integrated steelmaking process is composed of numerous facilities spanning the entire life-cycle from iron ore to steel products, including raw material preparation (e.g., coke production, ore agglomerating plant and lime production), ironmaking (e.g., blast furnace, hot metal desulphurization), steelmaking (e.g., basic oxygen furnace, ladle metallurgy), casting and finishing mills. Therefore, deployment of the CCS process in steelmaking mills is challenging, since the CO2 emissions come from multiple sources. The largest part of the direct CO2 emissions in steelmaking mills comes from power plants, which account for around 48% of total CO2 emissions, followed by blast furnaces at around 30% (Santos, 2013). In addition, the source of the CO2 may not be the emitter of the CO2; in other words, the emissions also depend strongly on the management of the use of by-product gases, as well as on the definition of the boundary limits.

Table 4 summarizes the pilot studies and demonstration projects of CCUM using alkaline wastes. At Point of Rocks, Wyoming, in the U.S., accelerated carbonation has been demonstrated in a 2,120 MW coal-fired power plant using FA since 2007 (Reynolds et al., 2014). Another pilot study in the U.S.
has been developed by Calera Corp. The Calera technology can capture CO2 (approximately 30,000 t/y) from fossil fuel power plants and other industrial sources and sequester it in geologically stable substances suitable for disposal, storage, and/or use as building materials. In the summer of 2009, Calera identified what was believed at the time to be an ideal demonstration site at a brown-coal power plant in the Latrobe Valley, Victoria, Australia (Zaelke et al., 2011).

In France, the Carmex project was initiated in 2007 to carbonate various materials such as harzburgite, wehrlite, lherzolite, slags, and olivine through direct carbonation with and without organic ligands or mechanical exfoliation. The highlight of this project is that accessible alkaline wastes are matched to large CO2 emitters through a dedicated geographical information system (GIS). A high carbonation conversion, 70-90%, can be achieved without heat activation of the feedstock. The Carmex experience indicates that the use of mineral carbonation is feasible for industry (Bodénan et al., 2014).

Recently in Australia, the MCi project has been carried out to transform CO2 into carbonates for use in future building products such as bricks, pavers and plasterboard replacements that are non-fired products (Mineral Carbonation International, 2013). Similarly, the China Steel Corporation (CSC) project in Taiwan was launched in 2013, introducing the high-gravity carbonation process (i.e., an RPB reactor) for small-scale carbonation of BOFS and alkaline wastewater. The CO2 removal efficiency of hot-stove gas from the blast furnace was greater than 95%, with total elimination of the free-CaO and Ca(OH)2 content in the BOFS. The carbonated BOFS was further used as green cement at different substitution ratios in mortar. As presented in Fig. 6, an integrated approach to applying the high-gravity carbonation process (the so-called HiGCarb process) was proposed for CO2 capture from flue gas and solid product utilization within a steelmaking plant. It was noted that the CO2 removal rate of the high-gravity process can meet the time scales of industrial plants.

The world's first commercialized carbon mineralization plant (Capitol SkyMine®) is under construction by Skyonic at the Capitol Aggregates, Ltd., cement plant in San Antonio, Texas, U.S., and is scheduled for completion by 2014. This plant is expected to directly remove CO2 (~75,000 t/y) from industrial waste streams through co-generation of carbonate and/or bicarbonate materials (~143,000 t/y) for use in bio-algae applications, making it a profitable process. In addition to capturing and mineralizing CO2, the SkyMine® process can remove SOx, NO2 and heavy metals such as mercury from existing power plants and industrial plants that can be retrofitted with SkyMine®.
Cost Evaluation and Energy Consumption
CCUM is an important route both to reducing CO2 emissions from the industrial sector and to using industrial wastes as cement replacements. Technology may not be the only barrier to the deployment of CCS in the industrial sector: market competitiveness and the global nature of some of these industries are important issues that should also be addressed. Energy and cost penalties are related to the plant scale, operating conditions, and operating modules such as pre-treatment (e.g., grinding and thermal activation) and/or post-treatment processes (e.g., product separation and disposal) (Pan et al., 2012; Sanna et al., 2012a). However, due to the lack of commercialized plant studies, cost estimations of accelerated carbonation are based roughly on pilot- or laboratory-scale operations.

In the case of indirect carbonation using chemical extraction (e.g., HCl, HNO3, CH3COOH and NaOH) without regeneration of the chemicals, a fairly high cost, US$600-4,500, would be required for capturing one ton of CO2 (Sanna et al., 2014). In addition, the regeneration of the chemicals would generate more than 2.5 times the amount of CO2 fixed in the carbonation process (Teir et al., 2009). The operating costs also depend largely on the purity of the PCC product. An average cost of US$80 per ton was required for PCC production from two-stage carbonation using cement wastes at 50°C and 30 bar, where the energy consumption, including pulverization, carbonation, CO2 separation, CO2 pressurization, and stirring for both extraction and carbonation, totaled 52.8 MW (Katsuyama et al., 2005). On the other hand, as presented in Table 5, the energy consumption and cost of direct carbonation were found to be relatively lower than those of indirect carbonation. In the case of direct carbonation, the energy requirement of the grinding process was the major cost in the overall process (Gerdemann et al., 2007; Pan et al., 2013b). It was observed that the cost of ex-situ direct carbonation was in the range of US$54-133 per t-CO2, depending on the type of feedstock and the operating module. Moreover, the handling of solids in the process has the potential to raise the O&M costs when compared to CO2 absorption using ammonia and amine technologies (Yu et al., 2012).

In contrast, the total cost of in-situ carbonation was estimated at US$72-129 per t-CO2 (considering an estimated transportation and storage cost of ~US$17 per t-CO2 in basaltic rocks (Gislason and Oelkers, 2014)), without taking into account long-term monitoring costs. However, all of these costs are by far greater than the recent European carbon market price of ~US$7 per t-CO2 in 2014 (Sanna et al., 2014). It was noted that the CO2 price may increase to US$35-90 per t-CO2 by 2040 (Wilson et al., 2012). In another scenario, estimated by the Intergovernmental Panel on Climate Change, a carbon credit price of ~US$55 per t-CO2 is given as a lower-bound estimate of what is necessary to meet Kyoto Protocol targets.

Fig. 6. An integrated approach to applying the high-gravity carbonation (HiGCarb) process for CO2 capture in flue gas and solid product utilization within a steelmaking plant.
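As an illustration of how such figures arise, a back-of-the-envelope levelized cost per tonne of CO2 can be assembled from grinding energy, electricity price and other O&M, divided by the specific CO2 uptake. All inputs in the sketch below are assumptions, chosen only to land inside the US$54-133 range quoted above; capital charges and product credits are ignored:

```python
# Sketch: rough levelized operating cost per tonne of CO2 for ex-situ direct
# carbonation. All numbers are illustrative assumptions, not literature data.

def cost_per_t_co2(grinding_kwh_per_t_feed: float,
                   electricity_usd_per_kwh: float,
                   other_opex_usd_per_t_feed: float,
                   co2_uptake_t_per_t_feed: float) -> float:
    """USD per tonne of CO2 fixed, ignoring capital charges and product credits."""
    cost_per_t_feed = (grinding_kwh_per_t_feed * electricity_usd_per_kwh
                       + other_opex_usd_per_t_feed)
    return cost_per_t_feed / co2_uptake_t_per_t_feed

# Hypothetical slag: 100 kWh/t grinding, $0.08/kWh, $10/t other O&M,
# and 0.25 t CO2 fixed per tonne of feedstock.
print(f"~US${cost_per_t_co2(100, 0.08, 10, 0.25):.0f} per t-CO2")
```

The structure also makes clear why fine feedstocks that need no grinding (such as FA, discussed below) sit at the cheap end of the range.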
It has been observed that carbonated SS can potentially be used as a partial cement replacement material (Liang et al., 2012; Salman et al., 2014). However, in order to make ex-situ carbonation more economically feasible, a breakthrough in the use of carbonated solid wastes or products should be sought in the areas of technology, regulation, institutions and finance. The global cement market is quite large, with roughly 3.5 billion metric tons used in 2011 and a processing cost of nearly US$100 per ton (International Energy Agency (IEA), 2013b). The benefits returned from product utilization should be taken into consideration in the fiscal evaluation of the overall carbonation process. From the viewpoint of energy consumption, fine FA is a good candidate for low-cost carbonation because no grinding process is required in advance. In addition, waste-heat integration from manufacturing processes could be implemented instead of electrical heating to reduce the overall energy requirement and operating cost (Balucan et al., 2013).

Life Cycle Assessment (LCA)
Although accelerated carbonation can potentially fix and store CO2 over a geological time scale, the environmental effects of the processes involved need to be considered from the perspective of life cycle assessment (LCA). The LCA of the CCUM process is particularly important, since energy is used in transporting slag, grinding, sieving, pressurizing, heating, and operating the reactor (Xiao et al., 2014). CCUM may increase other environmental impacts such as eutrophication or acidification due to the increase in the concentrations of other pollutants. Hence, the effects of CCUM should be weighed and compared carefully according to the changes in these environmental impacts.

Steelmaking slag is a solid waste of the steel manufacturing industry characterized by its strongly alkaline nature and significant levels of metal ions, especially calcium. After carbonation, the physicochemical properties of the slag are improved, making it suitable for use as an SCM to replace portions of the Portland cement in concrete. Typically, 1.5-1.7 tons of natural resources (e.g., limestone and clay) and 0.11-0.13 tons of coal are used per ton of cement clinker produced (Kumar et al., 2006). Moreover, concrete made from Portland cement carries a carbon footprint of approximately 0.73-0.99 tons of CO2 per ton of cement, corresponding to roughly 537 pounds of CO2 per cubic yard of concrete (Hasanbeigi et al., 2012). Therefore, both the natural resource consumption and the carbon footprint of concrete can be mitigated through the use of carbonated solid wastes as construction aggregate (large particles) or SCM (fine particles).

The annual world cement production is expected to grow by ~60% and reach 3.7-4.4 Gt by 2050, compared to approximately 2.5 Gt in 2006 (Hasanbeigi et al., 2012).
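A first-order estimate of the resource and CO2 savings from clinker substitution follows directly from the per-tonne factors just quoted (1.5-1.7 t limestone plus clay and 0.11-0.13 t coal per ton of clinker; 0.73-0.99 t CO2 per ton of cement). A minimal sketch, treating the cement and clinker factors interchangeably and with the 30% replacement level chosen purely for illustration:

```python
# Sketch: first-order resource and CO2 savings when a fraction of Portland
# cement clinker is replaced by carbonated slag, scaling the per-tonne factors
# quoted above. The replacement level is an assumption for illustration.

def savings_per_t_cement(replacement: float,
                         limestone=(1.5, 1.7),   # t per t clinker
                         coal=(0.11, 0.13),      # t per t clinker
                         co2=(0.73, 0.99)):      # t per t cement
    """Return (limestone, coal, CO2) savings ranges in tonnes per tonne of cement."""
    scale = lambda rng: tuple(round(replacement * x, 3) for x in rng)
    return scale(limestone), scale(coal), scale(co2)

lime, coal, co2 = savings_per_t_cement(0.30)  # hypothetical 30% replacement
print(f"per t cement at 30% replacement: limestone {lime} t, coal {coal} t, CO2 {co2} t")
```

Multiplying such per-tonne savings by the projected 3.7-4.4 Gt annual production gives savings on the gigatonne scale, consistent in order of magnitude with the figures quoted below.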
According to Zhang et al. (2011), blended cements with 30-60% residual slag product have properties comparable to those of Portland cement (Kumar et al., 2006). Therefore, it can be expected that huge environmental benefits can be obtained; e.g., approximately 1.45-3.89 Gt of limestone, 0.22-0.60 Gt of clay, and 0.12-0.34 Gt of coal would be avoided annually by 2050 with the use of BOFS in blended cement. Furthermore, to achieve an acceptable reaction rate for industrial applications, alkaline wastes are usually ground down to fine particles (~100 μm) prior to use. The toxicity of finely ground particles can cause significant health issues related to particulate formation, which is often believed to be a leading cause of respiratory disease (Koornneef and Nieuwlaar, 2009; International Energy Agency (IEA), 2013a; Giannoulakis et al., 2014).

Summary
Despite the great amount of alkaline waste available for CO2 capture, the costs of both direct and indirect carbonation are too high for large-scale industrial deployment, suggesting that it may not be a complete solution to carbon capture issues. Ex-situ carbonation of alkaline wastes, which combines the treatment of industrial wastes that are readily available near a CO2 emission point, could be part of an integrated approach to CO2 mitigation for industrial plants. Another interesting issue is the problem of cost allocation in the case of multiple emission reductions. Although the CO2 emissions of an industrial plant are lower than those of a power generation plant, the management of a variety of CO2 sources within a single industrial plant and the selection of appropriate processes and technologies for CO2 capture are the main barriers to lowering industrial capture costs. To promote the industrialization of CCUM, future work should concentrate on the selection of appropriate processes, the design and build-up of full-scale equipment, and the management of material recycling and residue treatment.

Conclusions
Since reducing CO2 emissions is critical to limiting global warming to 2°C, CCUS technologies need to be deployed in the steelmaking industry. Currently, the technology development is on track; however, the financial benefits are not great, since CCS projects need to demonstrate long-term viability and market support in the form of direct incentives. The proposed integrated CCUM process provides a solution for multiple waste treatments, i.e., reduction of CO2 in flue gas, neutralization of alkaline wastewater, and stabilization of alkaline solid wastes such as steelmaking slag. This could both lower the cost of wastewater treatment and CO2 capture and enhance the utilization potential of steelmaking slag. Since the CO2 emitted from industrial stacks is generally pressurized and purified, it could be applied to accelerated carbonation directly at the source point, thereby reducing the cost of CO2 capture. Although the use of waste steelmaking slag and metal-working wastewater for CO2 capture will not lead to significant carbon credits, these wastes are available in large amounts near the emission sites, hence eliminating the need to transport raw materials. The integrated CCUM process seems to be a viable option for industry because it is capable of producing a saleable product, providing multi-pollutant treatment for flue gas and wastewater, and offering potential process integration to lower costs.
Recommendations
Since industrial waste products are generally produced near places of CO2 emission, using the emitted CO2 to carbonate industrial waste offers an improvement over existing methods because it does not require the CO2 or the industrial waste to be transported, and it allows better monitoring of total emissions. Several recommendations for CCUM were proposed for further investigation:
1. The reduction of CO2 emissions in industry can be integrated with the treatment of wastes, including alkaline solid wastes and wastewater, and their utilization, such as green cement and aggregate in the construction engineering industry.
2. A cost-benefit analysis should be systematically carried out based on the consideration of (a) capital and operating costs for the carbonation process and (b) profits from CO2 reduction (e.g., carbon tax or carbon credits) and waste utilization (i.e., high-value products).
3. The carbon footprint and energy consumption of the developed novel processes should be calculated using LCA to determine the optimal particle size of the solid wastes from the viewpoint of both CO2 capture efficiency and product utilization.

Fig. 1. Standard molar free energy of formation for several carbon-related substances at 298 K.
Table 2. Process intensification of ex-situ carbonation using alkaline solid wastes in the literature.
Table 3. Various properties of recycled concrete aggregate for natural aggregate replacement.
Table 4. Pilot studies and demonstration projects of accelerated carbonation around the world.
Table 5. Cost evaluation and energy consumption of ex-situ direct carbonation using various processes.
9,717.4
2015-01-01T00:00:00.000
[ "Engineering" ]
Mechanical Properties and Durability of Concrete with Water Cooled Copper Slag Aggregate

Copper slag is a voluminous waste material obtained during the manufacturing of copper (matte smelting process). As its disposal becomes a concern for environmental protection agencies and governments, possible alternative outlets for this waste material are needed. The paper presents a laboratory study on CEM-II concrete mixes containing water-cooled copper slag waste material as a partial or full replacement of fine concrete aggregate. A series of tests were performed at two different water to cement ratios to determine the workability, cube compressive strength, indirect tensile strength, static modulus of elasticity, and a number of durability-related characteristics (water absorption, accelerated corrosion, carbonation, alkali-silica reaction). The results showed that water-cooled copper slag had variable effects on the resulting fresh or hardened concrete properties, depending on the sand replacement level and water to cement ratio. However, the measured strength values were likely to be linked to the usual variability of concrete batches rather than a significant effect of the copper slag aggregate. This hypothesis was further supported by statistical analysis. Concerning the durability-related characteristics, the overall performance of the concrete containing copper slag was in most cases similar to or better than that of normal concrete with natural sand aggregate. Based on the results, water-cooled copper slag can therefore be considered to be a suitable fine aggregate for concrete. This shows promise for developing an additional viable solution to tackle the issue of copper slag waste.

Introduction
Industrial operations produce large quantities of waste materials. An example is copper slag (CS), a voluminous waste material obtained during the manufacturing of copper. The extraction of copper from sulphide ores usually involves flotation followed by smelting and refining operations to produce pure copper metal. The major constituents of a copper smelting charge are sulphides and oxides of iron and copper; other constituents include SiO2, Al2O3, CaO and MgO (either present in the original concentrate or added as flux) [1]. Two distinct liquid phases are thus formed: copper-rich matte (a sulphide-rich phase) and copper slag (an oxide-rich phase, also known as ferro-sand). While the copper-matte liquid settles down in the smelter due to its higher density, smears of copper slag segregate and remain on the surface (also in liquid form). These are then removed and cooled. Slow air-cooling creates a hard and crystalline product, while fast cooling in water produces amorphous, glassy granulates [1]. The utilization and disposal of this waste CS becomes a concern for environmental protection agencies and governments, as 2.2-3 tons of CS are produced to obtain 1 ton of copper [1], with the annual worldwide production of copper recently reported to be approximately 35 million tonnes [2]. Due to this increasing problem, a number of studies showed many possibilities of how this material could
be reused or recycled. For slags containing appreciable amounts of metals, a first viable option is metal recovery by various processes, including electric arc furnace smelting, leaching and flotation. However, the metals contained in copper slag are often present in very small amounts and their recovery may not be economical. Instead, other uses of copper slag have been explored and adopted, for instance in abrasive or cutting tools, pavements, tiles, glass, roofing granules, etc. [3]. Due to its physical and chemical characteristics, an alternative possible application for this material could be its use as aggregate, and possibly also as cement replacement, in concrete production. This is a promising application, as concrete is the most widely used material in construction after water, thus providing an ideal opportunity for the recycling of waste materials in large quantities. In general, however, aggregate properties influence the freshly mixed and hardened concrete properties. Concrete aggregate must be clean and free of objectionable materials, which can affect the bonding of the cement paste to the aggregate or be corrosive to metal reinforcement. It must be strong, hard and durable, uniformly graded and falling within certain upper and lower bounds of grading. Extensive studies and evidence of good performance are therefore needed before a new material can be adopted with confidence for concrete production at an industrial scale. In this respect, copper slag as concrete aggregate appears to be less well researched compared to other waste materials. A further complication is that slag varies not only in form (crystalline vs. amorphous) according to the cooling process but also in its chemical composition (which can vary according to the furnace type, the metallurgical production process, the composition of the extracted ore, etc. [4]); this could potentially have varied effects on the resulting concrete properties. An important point to note is that CS has been excluded from the hazardous waste lists of the United States Environmental Protection Agency as well as the United Nations (UN) Basel Convention on the Transboundary Movement of Hazardous Waste and its Disposal [4].
Further studies confirmed that the heavy metals present in the slag are stable and are not likely to dissolve significantly even through repetitive leaching under acid rain in a natural environment, and that the highest concentration of all the elements was reported to be far below the prescribed limits in USEPA 40 CFR Part 261 (e.g. [5, 6]; the latter paper reported that the National Council for Cement and Building Materials in New Delhi, India, found that the leaching of heavy metals from copper slag samples was well below the toxicity limits even under aggressive conditions). There should therefore be no serious concerns regarding the leaching of toxic elements if copper slag were used in large-scale construction. A literature review on concrete containing copper slag was performed at the beginning of this research. A summary of this review is reported below, in terms of concrete sand replacement only (i.e. not coarse aggregate replacement as e.g. in [7, 8] or cement replacement as e.g. in [9, 10]; papers reporting results on mortars, e.g. [11], are also excluded). Only primary material is reported, which excludes recent review papers such as [12-15] and a review book on copper slag as a construction material, published while this manuscript was under review [16]. From the literature review, it transpired that most research to date studied one type of slag (either air-cooled CS or water-cooled CS). It was also noticed that in many papers the type of slag used was not explicitly mentioned (e.g. [17-22]), but that overall air-cooled slag appears to be the type of slag used in most publications; this is either mentioned explicitly (e.g. [6, 23-25]; the latter paper uses both air-cooled and water-cooled slag for the partial replacement of coarse and fine aggregate respectively) or can be inferred from the supplier's details and material characteristics (e.g. specific gravity). On the other hand, water-cooled slag appears to have been studied to a lesser extent: of the reviewed papers, only one implied that the slag was perhaps water-cooled. Namely, Kharade et al. [26], describing the industrial process producing copper slag (without specifying that this was the process followed by their slag supplier), mentioned that slag 'is transported to a water basin with a low temperature for solidification' (note however that Poovizhi and Khatirvel [24], using slag from the same supplier, mentioned this explicitly to be air-cooled). The slag used in Al-Jabri et al.'s publications [17-20] was possibly water-cooled, based on information found directly on the suppliers' current website (assuming that the current industrial processes described on the website were the same at the time of Al-Jabri et al.'s research). Some papers used copper tailings to replace sand aggregate [27, 28]; these are not directly comparable to the water-cooled aggregate used in the present study. In the literature the slag was used to replace either fine or coarse concrete aggregates, thus further reducing the amount of available experimental evidence for each type of slag and aggregate respectively. Moreover, a number of related publications were published in national society journals and were not written in English (e.g. [29, 30]). A number of the existing studies assessed concrete strength with copper slag as partial sand replacement of up to 40-50% (e.g. [21, 24], etc.); some other papers proceeded to full sand replacement (e.g. [17-20, 26, 31]).
In some of the latter papers (i.e. [17-19, 31]) special types of concrete were tested, involving the use of additional materials (e.g. silica fume and superplasticisers) compared to normal concrete. Based on these papers, copper-slag concrete was reported to have similar or even higher strengths than normal concrete, at least for modest sand replacements of about 20-40%; however, this increase was usually followed by a loss in strength, so that for high slag contents the strength of the concrete with copper slag is lower than that of the control (no copper slag) mixes [17-20, 26, 31]. Very few papers assessed the workability of copper-slag concrete. Generally it was found that copper slag may increase the workability of concrete mixes (e.g. [17-20, 31]), but most of these papers used superplasticizers to enhance the workability of high strength concrete mixes; also, in one paper [26] the slump was found to increase slightly with copper slag percentage but overall it remained very low despite the fairly high water/cement ratio (w/c = 0.48). No measured data on moduli of elasticity were found in the literature, except in a Japanese paper as reported by [2]. Finally, there is little information widely available on various durability aspects of concrete with copper-slag fine aggregate, with few exceptions (e.g. [6, 23, 25]). In the former two papers, concrete sand was partially replaced by copper slag and the water absorption, corrosion of reinforcement, chloride permeability and acid and sulphate resistance of the resulting concrete were studied. It was found that concrete with copper slag had a lower chloride permeability (which the authors linked to a decreased porosity of the concrete). However, a slightly lower acid resistance and slightly higher corrosion and water absorption rates were observed. The latter paper used CEM-I cement and considered both coarse aggregate replacement by air-cooled copper slag at 20 and 40% volume replacements as well as fine aggregate replacement by copper slag at 20 and 40% levels per volume. It studied the gas permeability of copper slag concrete (which was significantly lower than or comparable to that of the control mix), chloride ingress (which was found to be significantly lower for copper slag mixes), freeze-thaw resistance to de-icing agents (which was significantly reduced compared to that of the control mix), and accelerated carbonation, which occasionally showed only a slight (local) increase of the carbonation depth when using copper slag but was still acceptable. Durability was also linked to the study of the porosity of copper-slag concrete; the capillary porosity of copper slag concrete was found to be similar to that of normal concrete, whereas the effect of copper slag on the open porosity was found to be variable but still acceptable. Due to the lack of extensive experimental evidence concerning in particular the durability aspects of this type of concrete, it has been reported that even in countries where copper slag aggregate has in principle been approved for construction projects, the use of copper slag in concrete production has been hindered [30]. Further studies are thus required to assess whether concrete with copper slag would satisfy the quality requirements for industrial production. To address this research need, a comprehensive experimental programme was set out to study salient properties of concrete with water-cooled copper slag used as fine concrete aggregate at various concrete sand replacement levels, including full sand replacement.
The first part of the experimental results, using cement type CEM-I, was recently presented at the CEST2015 international conference [32]. This paper presents a second series of results, also produced at London South Bank University, based on a different type of cement. These include data on properties such as the modulus of elasticity that are lacking in the literature, and a study of various durability aspects of concrete containing water-cooled copper slag, which were not discussed in Mavroulidou and Liya [32]. A statistical analysis of the compressive strength characteristics of the resulting concrete is then presented at the end of this paper to assess whether sand replacement by copper slag significantly affects the resulting concrete strength.

Experimental Procedures, Materials and Mixes
For this study the water-cooled copper slag was supplied by ScanGrit (brand name ScanGrit Iron Silicate Grade 5, i.e. of particle size ranging 0.2-2.5 mm). According to the supplier's description it is an inert synthetic mineral manufactured by granulation in water of the slag arising from unique fumed copper smelting processes. It is an iron silicate with trace metals bound in an amorphous glass form as complex silicates and oxides and contains no free silica; it also has a chloride content of <15 ppm. Its detailed chemical composition according to the supplier is presented in Table 1. The particle density and water absorption characteristics of the copper slag compared to those of river sand were determined using BS 812-2: 1995, Method 5 [33]. These are shown in Table 2 together with other physical characteristics of the two materials. As opposed to Mavroulidou and Liya, who used CEM-I [32], in this study the cement used was limestone Portland cement (CEM-II/A-L 32,5R) obtained from Lafarge-UK (note that in the UK CEM-I has been increasingly replaced with less CO2-intensive CEM-II products). The typical chemical composition of this cement provided by the supplier is shown in Table 3. The fine aggregate was concrete sand of a maximum size of 5 mm; the coarse aggregate was crushed mixed gravel of a maximum size of 10 mm (brand name Tarmac Trupak). The particle size distribution of the aggregates (coarse concrete aggregate and fine concrete aggregate or copper slag) used in the mixes was determined using the dry sieving test (see Fig. 1). It can be seen that the sand and copper slag materials had a very similar grading, with D90 of 1.2 and 1.9 mm and D50 of 0.47 and 0.54 mm for the slag and sand respectively. This confirms the suitability of the slag for use as fine concrete aggregate from the point of view of gradation. It also implies that the grading would in principle be maintained consistent when mixing the two materials, hence giving adequate mixes without the need to take extra precautions. Concrete was made with a mix design of 1:1.5:3 (1 part cement, 1.5 parts sand and 3 parts coarse aggregate) according to British Standards guidelines for RC40 [34]. Two different sets of mixes were made with water/cement (w/c) ratios of 0.55 and 0.45 respectively, with the mix compositions shown in Table 4. Mixes without copper slag were made to serve as control mixes. Copper slag was then used at increasing percentages per total sand mass to replace regular concrete sand (fine aggregate).
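For readers wishing to reproduce the mixes, the batch quantities follow mechanically from the 1:1.5:3 proportions, the chosen w/c ratio and the copper slag replacement level (expressed per total sand mass, as above). A minimal sketch; the 25 kg cement batch size is an assumption, not a figure from the paper:

```python
# Sketch: batch quantities for a 1:1.5:3 (cement:sand:coarse) mix by mass,
# with part of the sand mass replaced by copper slag at a constant w/c ratio.

def batch(cement_kg: float, w_c: float, cs_replacement: float) -> dict:
    """Return masses (kg) of all constituents for one batch.
    cs_replacement is the copper slag fraction of the total sand mass (0-1)."""
    sand_total = 1.5 * cement_kg
    return {
        "cement": cement_kg,
        "water": w_c * cement_kg,
        "copper_slag": cs_replacement * sand_total,
        "sand": (1 - cs_replacement) * sand_total,
        "coarse": 3.0 * cement_kg,
    }

# Hypothetical 25 kg cement batch, w/c = 0.45, 60% sand replaced by slag.
print(batch(cement_kg=25.0, w_c=0.45, cs_replacement=0.60))
```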
For consistent comparisons when assessing the effect of the copper slag on the different concrete properties, the w/c ratio was kept constant when the sand was partly or fully replaced by copper slag (i.e. w/c of 0.55 and 0.45 respectively for the two sets of mixes). The mixing was in accordance with BS EN 12390-2:2009 [35] using a rotating mixer. First the aggregates and cement were mixed. Water was then gradually added during the first 30 s of mixing; after all the materials had been added, mixing continued for at least 2 min and not more than 3 min, until a homogeneous paste was obtained. Specimens were then cast in moulds in three separate layers; each layer was compacted on a vibrating table for approximately 15 s to remove any entrapped air bubbles from the mix (care was taken to avoid over-vibrating the mix, as this could lead to segregation). Specimens were demoulded 24 h after casting and water cured at a temperature of 20 °C (±2 °C) until testing. The workability of the fresh concrete mixes was then assessed based on the measured slump of the mixes immediately after mixing, according to BS EN 12350-2:2009 [36]. The flexural strength was determined [39] from two-point flexural strength tests on selected mix beams of 500 mm length and a section of 100 mm × 100 mm. The static modulus of elasticity of selected mixes was determined on cylinders prior to splitting cylinder testing, according to BS 1881-121:1983 [40]. A series of tests were performed to assess the durability of the concrete mixes, namely water absorption by immersion, accelerated corrosion, carbonation and alkali-silica reaction testing. Water absorption testing shows how easily concrete will allow the penetration of liquids, which is undesirable, as it allows the ingress of aggressive chemicals, leading to premature corrosion of reinforcing steel, spalling and deterioration of the concrete. The water absorption tests were performed following the BS 1881-122 guidelines [41] on 72-h oven-dried 100 mm concrete cubes (cured for 28 days), which were subsequently left to cool in a dry airtight vessel for 24 h. The cubes were then completely immersed in water for 30 min; the moisture absorption was calculated as the increase in the mass of the cube after immersion, expressed as a percentage of the mass of the dry cube. Due to the presence of a high amount of iron oxides in the slag, it was also considered relevant to assess the corrosion of the resulting reinforced concrete. Corrosion of steel reinforcement is one of the major causes affecting the long-term performance of reinforced-concrete structures. As the process of steel corrosion is long, accelerated corrosion testing was performed using an impressed current density methodology (see e.g. [42]). The apparatus consisted of a 10 V DC power supply, a data logger, two stainless steel plates (marine grade) and a container with 3.5% NaCl solution. 100 mm cube samples were cast for the two studied w/c ratios of 0.45 and 0.55 respectively. The copper slag contents tested were CS 0%, CS 60% and CS 100%. During casting, a pre-weighed metal bar of a diameter of 8 mm was fixed at the centre of the concrete cube specimens. The samples were cured in a water tank for 7 days, consistently with the curing of the other hardened concrete specimens. They were subsequently subjected to constant immersion in the NaCl solution for 21 days. During the test the steel bar was connected to the positive terminal, whereas the negative terminal was connected to the stainless steel plates.
After 21 days the specimens were removed and the metal bars were separated from the concrete. The metal bars were cleaned according to the ASTM G1-90 standard [43], using a solution of 500 ml of 5% hydrochloric acid with 3.5 g of hexamethylenetetramine, made up with water to a solution volume of 1000 ml. The percentage of the actual amount of steel lost to corrosion was then calculated gravimetrically by weighing the cleaned bars and assessing their mass loss compared to the original bar mass before testing.

Carbonation testing was performed using the phenolphthalein indicator method according to BS EN 14630: 2006 [44]. Carbonation can destroy the alkaline environment of concrete, which protects embedded steel reinforcement from corrosion. To assess possible carbonation problems, freshly cut cores of specimens were sprayed with a phenolphthalein pH indicator solution (1 g phenolphthalein in 70 ml ethyl alcohol, diluted to 100 ml with distilled water) and measurements were taken within 30 s after spraying [44]. These were specimens left for 10 months in an outdoor environment, partly protected by a roof (i.e. natural carbonation conditions). Finally, accelerated alkali-silica reaction testing was performed on mortar bars according to ASTM C1260 [45] guidelines. The only difference was that, for health and safety reasons, the prescribed temperature of the NaOH solution bath was lowered during the night time; to compensate for this, the test was run for longer than the specified 14 days. Quadruple specimens per mix per curing time were cast for the cube compressive strength (the most important property of concrete, on which common concrete element design is based), whereas indirect tensile strength and durability testing samples were cast in duplicate.

Fresh Concrete Workability
A possible anticipated difficulty when using aggregate other than that recommended for concrete could be the effect on the workability of the fresh concrete. As this research is a parametric study to assess the effects of copper slag on various properties of concrete, we chose not to adjust the mix in order to obtain a fixed slump (e.g. by adjusting the water content and using superplasticisers). Instead, we systematically observed the changes in the slump of the mixes upon replacement of sand by copper slag at constant steps. Figure 2 presents the results of the slump tests for each copper slag percentage for the two different w/c ratios used. These showed that all 0.55 w/c mixes had high (100-175 mm) to very high slumps (above 175 mm) and hence workability; however, in most cases the slumps of mixes including copper slag were lower than those of the control mixes, in particular for a water/cement ratio of 0.45, which for high percentages of copper slag showed low to very low workabilities (slumps below 50 mm) that would be inadequate for a number of practical uses. This finding goes against the original assumption that the lower water absorption of the copper slag aggregate would cause higher slumps than those of the control mixes. It is possible that the slumps were affected to some extent by the angular shape of the copper slag aggregate, as discussed later. It is interesting that [25] also found a decrease in workability when fine aggregate was replaced by copper slag, against initial expectations. Overall, however, the workability results show that workable mixes can be achieved for all sand replacement levels if the w/c ratio is carefully controlled.
Hardened Concrete Properties
Cube Compressive Strength Tests
Figure 3a and b show respectively the 7 and 28 day average cube compressive strengths for different regular sand replacement levels for the mixes of w/c = 0.55 and w/c = 0.45. It can be seen that the effect of the copper slag is variable, as strength may either increase (this is the case for the w/c = 0.45 mixes and most 7 day curing mixes with w/c = 0.55) or decrease (see most 28 day strengths of mixes with w/c = 0.55). However, most differences in strength compared to the respective control mixes for the 0.55 and 0.45 w/c ratios are generally small for most slag percentages and could be due to the usual variability in the concrete batches around a mean value, rather than any significant trends. This observation will be further investigated statistically in the "Statistical Analysis" section of the paper. Table 5 presents the evolution of compressive strength between 7 and 28 days, based on the ratio of the 7 day curing strength f_c7 to that of the 28 day curing strength f_c28 of the tested mixes. It can be seen that the evolution follows the usual expected pattern for normal concrete, for which, as a rule of thumb, the 7 day strength is often considered to be about 75% of the 28 day strength value (for instance, Neville [46] shows a seven-day strength of approximately 75% and 80% of the 28 day strength for w/c of 0.53 and 0.4 respectively, which is consistent with the results in Table 5).

Indirect Tensile Strength Tests
From Fig. 4a it can be seen that the splitting tensile strength (f_t) results are as expected for normal concrete, i.e. overall consistent with the compressive strength results, in particular for the w/c = 0.55 mixes, which were all around 7% of the compressive strength (f_c) results; those of the w/c = 0.45 mixes were all around 5.5-6% of the compressive strength, with a slightly higher scatter. Using both sets of data, a very strong correlation with the compressive strength was found, as shown in Fig. 4c. The flexural strengths of the mixes (Fig. 4b) were also overall consistent with the cube compressive strength trends, as in the case of normal concrete, again showing a very strong correlation with the compressive strength (see Fig. 4d); more particularly, the 0.45 mixes had flexural strengths of about 10% of the respective compressive strengths, a value consistent with common empirical rules of thumb regarding the relationship between the tensile and compressive strength of normal concrete. For the w/c = 0.55 mixes the ratio of flexural to compressive strength was found in this instance to be higher, i.e. approximately 11-12% of the compressive strength, which is still within expected flexural strength values as a proportion of the compressive strength. As expected, there is a very strong correlation between the splitting and flexural strengths, as they both constitute different indirect measures of the tensile concrete strength (see Fig. 4e).

Static Modulus of Elasticity
The results shown in Fig. 5a, b refer to static moduli of elasticity, E_c (a property crucial to the long-term serviceability of concrete). They were obtained from cylinders subsequently used to determine the splitting tensile strength (presented earlier). It can be seen that the copper slag concrete has similar or higher moduli of elasticity in comparison to the control mixes (see Fig. 5a), and that the E_c values show a very strong correlation with the compressive strength results of this study (see Fig. 5b), as expected for regular concrete.
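The strength correlations reported above (f_t of roughly 5.5-7% of f_c, and E_c tracking f_c) are straightforward to check with a linear regression once the strength pairs are tabulated. A minimal sketch using scipy; the strength values below are made-up placeholders, not the measured data of this study:

```python
# Sketch: checking the proportionality between splitting tensile strength and
# cube compressive strength with a simple linear regression.
from scipy.stats import linregress

f_c = [42.0, 45.5, 48.0, 51.5, 55.0, 58.5]  # cube compressive strengths, MPa
f_t = [2.80, 3.05, 3.20, 3.30, 3.50, 3.70]  # splitting tensile strengths, MPa

fit = linregress(f_c, f_t)
ratios = [t / c for c, t in zip(f_c, f_t)]
print(f"slope = {fit.slope:.4f} MPa/MPa, r = {fit.rvalue:.3f}")
print(f"mean f_t/f_c = {sum(ratios) / len(ratios):.3f}")  # ~0.065 for these values
```

The same pattern (replacing f_t with flexural strength or E_c) would reproduce the other correlations shown in Figs. 4c-e and 5b.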
The measured values of E_c are also fairly consistent with empirical expressions where E_c is given as a function of the compressive strength (see e.g. [46, 47]).

Durability Testing
Water Absorption Tests
Figure 6 shows the average absorption results of duplicate specimens. It can be seen that all 0.55 w/c ratio mixes with copper slag had an increased water absorption compared to the control mix, with the exception of the 20% CS mix. On the other hand, the w/c = 0.45 mixes had a lower water absorption compared to the respective control mix, with the exception of the two mixes with the highest CS percentages (80 and 100% respectively). It should be noted, however, that all mixes maintained low water absorptions of less than 3%, which is within expected limits for good durability with respect to liquid ingress in concrete [46]. Other researchers (e.g. Al-Jabri et al. [20], using a w/c = 0.5) also found an initially lower water absorption (in [20] this occurred for up to 40% CS content), which gradually increased above that of the control mix for higher copper slag percentages. It is possible that the increase in water absorption, which was observed mostly in the w/c = 0.55 mixes, was due to some excess water (the copper slag particles having a lower surface water absorption than sand); this could lead to a higher porosity and could explain the higher absorption.

Accelerated Corrosion Testing Results
Figure 7 presents the accelerated corrosion test results in terms of the mass loss of the embedded steel reinforcement bars. It can be observed that both control mixes (of w/c ratios of 0.45 and 0.55 respectively) had a higher corrosion (based on the higher mass loss of the bars) than the mixes with copper slag. The highest corrosion was found for the 0.45 control mix, which also showed some surface cracking of the concrete adjacent to the reinforcing bar. At the other extreme, the w/c 0.55 mix with 100% CS showed a minute mass loss. The results indicate that using water-cooled copper slag as a replacement for fine aggregate increased the corrosion resistance of the resulting concrete compared to concrete with natural fine aggregate. Note that the opposite trend was shown in Brindha et al. [23] for air-cooled copper slag used at up to 50% sand replacement levels, where increasing CS percentages showed a small increase in the corrosion rate compared to the control mix.

Carbonation Testing Results
According to the phenolphthalein test, if there is no carbonation and the concrete is highly alkaline, a vivid purple colour is obtained after spraying the concrete with the solution. Most mixes showed almost no signs of carbonation except for some local carbonation, mostly around microcracks or aggregates, which is consistent with concurrent research findings by [25]. The results of the average carbonation depths d_kmean (recorded to the nearest 0.5 mm) of the tested copper slag mixes for duplicate specimens per mix are presented in Table 6. These show a similar performance to that of normal concrete and increasing carbonation depths as w/c increases. For instance, results reported for normal concrete can commonly range between 1 mm and over 3 mm per year [46]. For limestone cement concrete in particular (i.e. the cement type used in this study), depths of carbonation of 3.2 and 4.6 mm for w/c of 0.45 and 0.55 respectively were recently reported in the literature [48] (after 7 days of accelerated carbonation).
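The quantities reported in Figs. 6 and 7 are simple gravimetric ratios, computed as described in the test procedures above. The sketch below restates them; the masses are made-up placeholders, and the Faraday's-law bound is added here only as a cross-check on the impressed-current test (it is not part of the procedure described in this paper):

```python
# Sketch: the gravimetric calculations used in the durability tests, plus a
# theoretical Faraday's-law upper bound on steel loss under impressed current.

F = 96485.0             # C/mol, Faraday constant
M_FE, Z_FE = 55.85, 2   # iron molar mass (g/mol) and assumed valency

def water_absorption_pct(dry_g: float, wet_g: float) -> float:
    """Mass gain after 30 min immersion as % of the oven-dry cube mass."""
    return 100 * (wet_g - dry_g) / dry_g

def corrosion_mass_loss_pct(initial_g: float, cleaned_g: float) -> float:
    """Gravimetric steel loss as % of the original bar mass."""
    return 100 * (initial_g - cleaned_g) / initial_g

def faradaic_loss_g(current_a: float, days: float) -> float:
    """Theoretical max steel loss for a constant impressed current (Faraday)."""
    return M_FE * current_a * days * 86400 / (Z_FE * F)

print(f"{water_absorption_pct(2390.0, 2443.0):.2f} % water absorption")
print(f"{corrosion_mass_loss_pct(316.0, 301.5):.2f} % bar mass loss")
print(f"{faradaic_loss_g(0.05, 21):.1f} g max loss at 50 mA over 21 days")
```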
Alkali-Silica Reaction Testing The test assessed the possibility of a reaction of the alkalis in the cement, as well as compounds in the copper slag such as K2O and Na2O, with silica from the amorphous water-cooled copper slag aggregate. This reaction between alkalis contained in Portland cement and aggregate containing a reactive form of silica (i.e. any kind of silica that can dissolve in cement paste) can lead to a serious degradation of the concrete; after dissolution, silica can react with sodium/potassium ions contained in the cement paste to form a gel which appears on the surface of the aggregate; this gel attracts water, which leads to an expansion causing the concrete to crack. The test was conducted on duplicate mortar bar specimens of CEM-II containing respectively 0% (control mix) and 100% copper slag (considering that 100% CS would constitute a worst-case scenario if the copper slag aggregate was reactive). Very minute expansions, all well below the ASTM limit of 0.1% [45], were recorded throughout the testing period, implying that the slag is innocuous. Statistical Analysis Compressive strength is an important factor in designing and optimising concrete mixes and in designing structural elements. Moreover, as discussed earlier, other mechanical properties can be correlated with compressive strength. For this reason, some further statistical investigation using R software was performed on the 28-day compressive strength results in order to assess any possible significant trends; the 28-day compressive strength was chosen for analysis as this is the usual industry-wide basis for consistent comparisons of the compressive strength of concrete products. In the analyses, datasets of 28-day compressive strengths from Mavroulidou and Liya [32] were also included, to provide an additional factor to consider in the analysis, i.e. the cement type; the results in [32] were based on a different type of cement (CEM-I) but considered the same two w/c ratios, i.e. 0.45 and 0.55. The analysis took the form of n^k factorial experiments (with replication) with k = 3 factors at n = 2 levels each, i.e. copper slag content (the two levels assigned were −1 (low) and +1 (high)), water/cement (w/c) ratio (0.45 and 0.55 factor levels) and the categorical factor "cement type" (CEM-I and CEM-II factor levels). Boxplots, histograms and a normal probability Quantile-Quantile (QQ) plot of the dataset (not shown here for brevity) showed that the normality assumption would be appropriate; this was confirmed by the Shapiro-Wilk normality test (p-value = 0.9334, i.e. the null hypothesis that the data came from a normally distributed population was not rejected). However, the Fligner-Killeen test of homogeneity of variances rejected the null hypothesis that the variances are equal (a second assumption of ANOVA), as it returned a p-value of 0.03656 (this remained the case even after different transformations of the data). With this reservation in mind, ANOVA tests were performed considering main effects and two-way interactions between factors, but the outcomes were cross-checked with Kruskal-Wallis non-parametric analysis. The ANOVA results showed that the w/c ratio is the most statistically significant factor (p = 2.493 × 10⁻⁶), followed by the cement type (p = 1.139 × 10⁻⁴). On the other hand, the level of copper slag was found to have no statistically significant effect (p = 0.2087304).
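Before examining the interaction terms, a rough illustration of the workflow just described is given below. The authors carried out this analysis in R; the Python sketch uses scipy and statsmodels instead and runs on synthetic strength values, so the column names, the synthetic cell means and the package choice are all assumptions made purely for illustration.

```python
import itertools
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
# Hypothetical 2^3 factorial with 2 replicates per cell:
# copper slag level, w/c ratio and cement type as factors.
for cs, wc, cem in itertools.product(["low", "high"], [0.45, 0.55], ["CEM-I", "CEM-II"]):
    base = 55.0 - 40.0 * (wc - 0.45) + (2.0 if cem == "CEM-II" else 0.0)
    for _ in range(2):
        rows.append({"cs": cs, "wc": wc, "cement": cem,
                     "strength": base + rng.normal(0.0, 1.5)})
df = pd.DataFrame(rows)

# Normality and homogeneity-of-variance checks (Shapiro-Wilk, Fligner-Killeen).
print(stats.shapiro(df["strength"]))
print(stats.fligner(*[g["strength"].values
                      for _, g in df.groupby(["cs", "wc", "cement"])]))

# ANOVA with main effects and all two-way interactions.
model = smf.ols("strength ~ C(cs)*C(wc) + C(cs)*C(cement) + C(wc)*C(cement)",
                data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Non-parametric cross-check of the copper slag factor (Kruskal-Wallis).
print(stats.kruskal(*[g["strength"].values for _, g in df.groupby("cs")]))
```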
No significant interaction effects were found (p = 0.2208567, p = 0.9216146 and p = 0.7191864 respectively for the copper slag-w/c ratio, copper slag-cement type and w/c ratio-cement type interactions). As there are no interaction effects, the main effects discussed above are meaningful. Quality control of the ANOVA model (boxplot and normal Quantile-Quantile (QQ) plots, not shown here for brevity) found the residuals to reasonably follow a normal distribution (as required); this was confirmed by the Shapiro-Wilk normality test on the residuals (p-value = 0.3587). Finally, the plot of residuals vs. fitted values showed a rather random pattern, with residuals plotting on both sides of 0 and without any obvious relationship between the size of the fitted value and the variance of the residuals, which is the desired outcome. The non-parametric Kruskal-Wallis analysis of the main effects confirmed that copper slag content is not statistically significant (p-value = 0.7927) and that the w/c ratio was the most significant factor, with a p-value of 0.001122 at the 95% confidence level; on the other hand, the Kruskal-Wallis analysis found cement type to be significant only at a lower confidence level (p-value = 0.07399). Whereas the significant effect of the w/c and cement type factors was expected, the lack of a significant effect of the copper slag content (which was of particular interest in this study) was further investigated and supported by one-way analysis of variance using the two entire datasets of 28 day cube compressive strength results of CEM-II for the 0.55 and 0.45 w/c ratios respectively (with four replications). Kruskal-Wallis one-way analysis of variance confirmed the copper slag level to be of no statistical significance (p = 0.19 and p = 0.2838 for the 0.55 and 0.45 w/c ratio datasets respectively). Discussion The above tests showed that most concrete properties are insensitive to the incorporation of copper slag at 0 to 100% replacement levels. Some justification of these findings is offered here as follows: firstly, the consistence of concrete is controlled by a large number of factors such as particle size and distribution, particle shape and texture, as well as particle water absorption. In this case the two aggregates (copper slag and natural river sand) have essentially similar Particle Size Distributions (PSD), hence similar sizes, uniformities and fineness moduli, which are related to their packing. The particle shape and surface texture of an aggregate influence the properties of freshly mixed concrete more than the properties of hardened concrete. In this case the natural aggregate and the copper slag were both of a smooth texture; however, the copper slag particles were angular as opposed to the subrounded river sand particles. The angularity, which usually reduces workability and makes concrete harsh, and the increased frictional resistance of the particles may therefore partly counterbalance the effect of the lower water absorption of the slag (which would imply more free water in the mix and hence a potentially higher workability) compared to the natural aggregate. In addition, the bonding between cement paste and a given aggregate generally increases as particles change from subrounded to angular. This increase in bonding could explain the good strengths (compressive and tensile) of the copper slag concrete mixes (as noted earlier, strength changes linked to the addition of copper slag were not found to be statistically significant).
This is despite the reduced water absorption of the copper slag aggregate, which would imply more free water in the mix (and which would negatively affect strengths), and its glassy surface, which would be expected to have a negative effect on the bonding. It is possible that the results were partly affected by some segregation and bleeding which was noticed; this is consistent with the somewhat increased water absorption of the resulting concrete (still, however, of low values) at higher copper slag contents, as excess water and bleeding can increase the porosity. For the lower w/c ratio mixes, the relatively higher water absorption of the resulting concrete could also be due to the drier and hence more porous mix (consistent with the lower slump) when high percentages of slag were used. To optimise the mixes and reduce this effect, further study on the water demand of the mixes is required. Nevertheless, the overall good durability performance of the copper slag concrete observed in these tests is consistent with the generally adequate physico-chemical characteristics of the copper slag. From the above discussion, the use of copper slag as fine aggregate replacement can therefore be considered as promising overall. Conclusions The current focus on sustainable construction has led to an intensive search for alternative materials to be used as aggregates in concrete production. Copper slag, the waste material produced in the extraction of copper metal in refinery plants, has a number of physical characteristics similar to those of natural sand; it can thus be a potentially good candidate for use as a fine aggregate in concrete production, with many environmental benefits, such as waste recycling and the avoidance of landfilling, as well as a reduction in the need for non-renewable natural aggregate. The paper presented a comprehensive parametric study on the effects of water-cooled copper slag, used as fine concrete aggregate replacement, on a wide range of fresh and hardened concrete properties and durability characteristics. The results showed that: • Overall, mixes with the tested copper slag percentages had strengths and measured moduli of elasticity (for which there is a lack of data in the literature) comparable to those of normal concrete and showing the expected trends (i.e. good correlations of compressive strengths with indirect tensile strengths and moduli of elasticity); these complied fairly well with suggested empirical correlations of such properties with compressive strength for regular concrete, included in a number of standards or guidelines. • Statistical analysis confirmed that the copper slag content had no significant effect on the compressive strength of the concrete (the fundamental property for concrete design) and that there were no interaction effects between w/c ratio, cement type and copper slag level regarding the compressive strengths obtained; as with regular concrete, the most influential factor for concrete strength was found to be the w/c ratio. • Durability of the concrete in terms of reinforcement corrosion, depth of carbonation and the possibility of deleterious alkali-silica reactions was also found, in general, to be similar to or better than that of normal concrete; the water absorption performance of the copper slag concrete was overall also fairly similar to that of the control mixes; the values fluctuated around the respective control mix value, increasing slightly at the highest copper slag replacement levels.
• On the other hand, the effects of copper slag on fresh concrete workability were variable, and the lower w/c mixes in particular were adversely affected by the increase in copper slag content; such issues could, however, be addressed by optimising the mixes, i.e. by adjusting the w/c ratio and/or using superplasticisers. This was beyond the scope of this study. Based on the above findings it can therefore be concluded that, upon further mix optimisation, this type of aggregate can be used as a suitable substitute for natural sand. Ultimately, as in any other case where waste materials are suggested for use in concrete, the viability of using this aggregate in concrete will depend on local economics, i.e. the cost and availability of the material in sufficiently large quantities and the availability and costs of similar natural aggregates in the respective regions where concrete production plants operate. Nonetheless, with the depletion of natural aggregates, the use of suitable alternative aggregates based on waste materials should be encouraged as a potentially more sustainable option overall.
Predictive Modeling of VO2max Based on 20 m Shuttle Run Test for Young Healthy People: This study presents mathematical models for predicting VO2max based on a 20 m shuttle run and anthropometric parameters. The research was conducted with data provided by 308 young healthy people (aged 20.6 ± 1.6). The research group includes 154 females (aged 20.3 ± 1.2) and 154 males (aged 20.8 ± 1.8). Twenty-four variables were used to build the models, including one dependent variable and 23 independent variables. The predictive methods of analysis include the classical ordinary least squares (OLS) regression model, regularized methods such as ridge regression and Lasso regression, and artificial neural networks such as the multilayer perceptron (MLP) and the radial basis function (RBF) network. All models were calculated in R software (version 3.5.0, R Foundation for Statistical Computing, Vienna, Austria). The study also involved variable selection methods (Lasso and stepwise regressions) to identify optimum predictors for the analysed study group. In order to compare and choose the best model, leave-one-out cross-validation (LOOCV) was used. The paper presents three types of models: for females, males and the whole group. The analysis revealed that the models for females (RMSE_CV = 4.07 mL·kg⁻¹·min⁻¹) are characterised by a smaller error than the male models (RMSE_CV = 5.30 mL·kg⁻¹·min⁻¹). The model accounting for sex generated an error level of RMSE_CV = 4.78 mL·kg⁻¹·min⁻¹. Introduction Cardiorespiratory fitness (CRF) is a strong health indicator [1][2][3][4][5][6]. A low CRF level entails the risk of adverse cardiovascular events [2]. In adults, CRF is regarded as a predictor of general mortality and it is negatively correlated with hypertension and diabetes [7][8][9][10][11][12][13]. Low CRF in young people is related, e.g., to obesity, which increases the risk of cardiovascular diseases [14] and metabolic syndrome characteristics [15,16]. Studies have also revealed that improved CRF leads to better health outcomes regardless of body mass index (BMI) [9]. A higher physical fitness index in adults significantly reduces the metabolic risk for a particular body fat level [17]. Preventive measures against cardiovascular diseases which are intended to optimise CRF are also implemented in low-risk groups (10% risk of ischaemic heart disease occurrence within the next 10 years) [18][19][20]. The authors of many studies emphasise the significance of lifelong physical activity in improving or maintaining an appropriate level of CRF [19,21,22]. In early adulthood, a high CRF level provides the greatest survival-related benefit. The maximum oxygen consumption (VO2max) is the main criterion measure of CRF [15]. A direct measurement of this factor is regarded as the best indicator of aerobic fitness [23,24]. Since the direct measurement of VO2max is quite complex to implement, maximal oxygen uptake tends to be estimated from indirect measurements using predictive models [23,25,26]. A 20 m shuttle run test (20 m SRT) is often used to measure cardiovascular fitness [22,23,27,28]. Therefore, it seems reasonable to model the index using various mathematical methods. A predictive model may help to identify the level of CRF without costly equipment and a specialised research team. The prediction of the VO2max parameter may become an important element of health monitoring and fitness improvement in young healthy people.
In the literature, there are many papers that focus on the prediction of this indicator using, among other factors, the results of endurance tests, anthropometric parameters or physical activity indicators [15,23,29]. Therefore, the VO2max predictive models may be classified as both exercise and non-exercise models. Exercise models make predictions based on the result of an endurance test with maximal or submaximal intensity [15,23,[30][31][32][33][34]. In the literature, there are also prediction models based on ergometric tests [35,36]. The most popular predictors include age, sex and BMI [25]. Most of the published models are based on multiple regression or machine learning methods. In this paper, the authors present different models to predict VO2max based on the result of the 20 m shuttle run test and anthropometric parameters. The aim of the work is to determine the optimal predictive model to estimate VO2max for young healthy people (students). The models may be used, e.g., by a Physical Education teacher for CRF monitoring. To the best of our knowledge, these are the first studies for such a large group (n = 308) using the direct measurement of oxygen uptake. Data was collected at five large universities in various parts of Poland. Participants The study was cross-sectional in nature. It involved a group of 1097 healthy students (aged 19.7 ± 1.4; 487 males aged 20.0 ± 1.6 and 610 females aged 19.5 ± 1.3) studying in five large academic centres in Poland (Krosno State College, University of Rzeszow, Maria Curie Skłodowska University in Lublin, Cracow University of Technology and Poznań University of Life Sciences). The study group was selected in two stages. In the first stage, randomly selected students at the bachelor's level participated in a 20 m SRT. Based on the median of the distance covered (Me = 880 m, i.e. 44 sections, for females and Me = 1520 m, i.e. 76 sections, for males), people who completed the 20 m SRT below and above the median were selected for the second stage to ensure the highest possible differentiation of results. Finally, 154 women (aged 20.3 ± 1.2) and 154 men (aged 20.8 ± 1.8) participated in the last stage. They performed a 20 m SRT during which VO2max was measured using a portable gas analyser (Cosmed K4b2). Before the cardiac stress test, a skilled study team performed anthropometric measurements. Body height was measured with a stadiometer (SECA 213, Hamburg, Germany) to the nearest 1 mm. Measurements of the waist circumference were carried out with non-elastic flexible tape according to two protocols: the WHO (World Health Organization) STEPS protocol (at the approximate midpoint between the lower margin of the last palpable rib and the top of the iliac crest) and the US NIH (United States National Institutes of Health) protocol (measurement at the top of the iliac crest) [37]. The hip circumference was measured around the widest portion of the buttocks. Body weight and body weight components (Fat-body fat percentage, FFM-fat-free mass, TBW-total body water) were measured by means of a Tanita TBF 300 body composition analyser (Tokyo, Japan). Table 1 presents the somatic indexes used, together with their equations.
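Since the equations of Table 1 are not reproduced in the text, the sketch below computes the listed somatic indexes from their standard literature definitions (e.g. the Du Bois formula for BSA and Bergman's definition of BAI); the exact forms used by the authors in Table 1 may differ in detail.

```python
def somatic_indexes(height_m, weight_kg, waist_cm, hip_cm, fat_pct):
    """Commonly used somatic indexes, computed from standard literature definitions."""
    fat_mass = weight_kg * fat_pct / 100.0          # body fat mass, kg
    ffm = weight_kg - fat_mass                      # fat-free mass, kg
    return {
        "BMI":  weight_kg / height_m ** 2,                          # body mass index
        "WHtR": waist_cm / (height_m * 100.0),                      # waist-to-height ratio
        "WHR":  waist_cm / hip_cm,                                  # waist-hip ratio
        "FMI":  fat_mass / height_m ** 2,                           # fat mass index
        "FFMI": ffm / height_m ** 2,                                # fat-free mass index
        "BAI":  hip_cm / height_m ** 1.5 - 18.0,                    # body adiposity index
        "BSA":  0.007184 * weight_kg ** 0.425 * (height_m * 100) ** 0.725,  # Du Bois, m^2
    }

# Hypothetical participant
print(somatic_indexes(height_m=1.75, weight_kg=70.0, waist_cm=78.0, hip_cm=95.0, fat_pct=18.0))
```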
Generally, the measured anthropometric parameters (height and body weight, waist circumference, WHR-waist-hip ratio, WHtR-waist-to-height ratio, BMI-body mass index, BAI-body adiposity index) are consistent with the mean values of these parameters reported for the population of Polish students in studies by other authors [38,39]. A description of the variables (e.g. VO2max level, age, heart rate and anthropometric parameters) together with the basic statistics (mean value and standard deviation) is given in Table 2. The models were calculated on the basis of 23 independent variables (x0-x22) and one dependent variable (y). The independent variables include: gender (x0), parameters of the 20 m shuttle run (x1-x4), age (x5), anthropometric features (x6-x10), somatic indexes (x11-x19), and body components (x20-x22). The VO2max result (y) obtained during the 20 m shuttle run test with a telemetry gas exchange system (Cosmed K4b2) was the dependent variable. WHtR-waist-to-height ratio, WHR-waist-hip ratio, BMI-body mass index, FMI-fat mass index, FFMI-fat-free mass index, BAI-body adiposity index, BSA-body surface area. HR-maximal heart rate, HRR-recovery heart rate, WHtR-waist-to-height ratio, WHR-waist-hip ratio, BMI-body mass index, FMI-fat mass index, FFMI-fat-free mass index, BAI-body adiposity index, BSA-body surface area, Fat-body fat percentage, FFM-fat-free mass, TBW-total body water. Field Assessment of VO2max The 20 m SRT was conducted according to established procedures [40]. The speed in the first minute was 8.5 km/h and increased by 0.5 km/h every minute. The measurement of VO2max during the 20 m shuttle run test was made using a portable gas analyzer K4b2 (Cosmed, Rome, Italy). During the graded test, inspired and expired gases were continuously monitored, breath by breath. Prior to testing, the Cosmed K4b2 was warmed up for a minimum of 20 min. Following the warm-up period, the K4b2 was calibrated with standard gases in accordance with the manufacturer's specifications. Heart rate (HR, beats/min) was continuously monitored (Polar coded transmitter), with a focus on the maximal heart rate (HRmax) and the recovery heart rate in the first (HRR1) and fourth (HRR4) minutes after the test. The criteria for attaining VO2max included any two of the following: volitional exhaustion; attainment of at least 90% of the age-predicted HRmax (220 beats/min minus the age of the subject in years); a respiratory exchange ratio (RER) equal to or greater than 1.10; and a levelling-off of VO2 despite an increase in intensity [41,42]. Predictive Methods Multiple input single output (MISO) model types were used in the study. Classic regression models, regularised regression models and artificial neural networks were used in the calculation of the models. All predictive models were calculated in R software [43]. The implemented methods included: • Ordinary least squares (OLS) regression, which uses the popular method of least squares, in which weights are calculated by minimizing the sum of the squared errors. The function lm was used to calculate the OLS model. The criterion of performance J(w) takes the form J(w) = Σ_{i=1}^{n} (y_i − Σ_{j=0}^{p} w_j x_j,i)², where n is the total number of data points, p the total number of inputs, x_j the input variables, y_i the output variable, and w_j the unknown weights (parameters) of the model. • The Ridge model was calculated using the function lm.ridge from the "MASS" package.
In Ridge regression [44], the criterion of performance includes a penalty for large weights and takes the form J(w) = Σ_{i=1}^{n} (y_i − Σ_{j=0}^{p} w_j x_j,i)² + λ Σ_{j=1}^{p} w_j². The parameter λ ≥ 0 decides the size of the penalty: the greater the value of λ, the bigger the penalty. • Multilayer perceptron (MLP)-to implement this method, the RSNNS package was used [45]. Apart from the linear models, artificial neural networks were used. Two types of ANNs were applied, the first being the multilayer perceptron (MLP). MLPs are fully connected feed-forward networks and the most popular architecture used in applications. Training was performed by error backpropagation, and the logistic function was used as the activation function of the hidden layers. • Artificial neural network with a radial basis function (RBF)-the application of the RBF network is similar to the MLP and also used the RSNNS package [45]. RBF networks are also feed-forward networks, but they have only one hidden layer. The network output is a linear combination of radial basis functions. The presented methods were used to calculate models from all variables (Table 2) and for subsets of variables obtained after applying selected procedures for reducing the set of input variables [46,47]. Lasso regression is the first such method analysed. Its mechanism assigns a penalty to variables, and in this way they can be eliminated from the equations. The Lasso regression was obtained with the function enet from the "elasticnet" package. In Lasso regression [48], the L1 norm is used, i.e. the sum of the absolute values of the weights: J(w) = Σ_{i=1}^{n} (y_i − Σ_{j=0}^{p} w_j x_j,i)² + λ Σ_{j=1}^{p} |w_j|. Classic "variable selection" procedures are used when there is a high number of potential predictors of the dependent variable Y, which means a high number of potentially calculable equations (subsets of the input set) [47]. The procedures introduce or eliminate individual variables from the equation and involve studying only a subset of all possible equations. The following selection procedures were used in the analysis: • Stepwise Forward Regression-The forward selection procedure begins with an equation which contains only a free term. The first variable entered into the equation is the one which has the highest correlation with the Y variable. If the regression coefficient of the variable differs significantly from zero, the variable remains in the equation and another variable is added. The second variable introduced into the equation is the one which has the highest correlation with Y (with Y adjusted for the effect of the first variable). If the regression coefficient is significant, the next variable is added in the same way. • Stepwise Backward Regression-The backward elimination procedure begins with an equation containing all variables. One variable is removed in each step. Variables are discarded depending on their contribution to the reduction in the total sum of squared errors. The variable with the lowest contribution to this reduction, i.e. the one with the smallest t-statistic in the equation, is removed first. As long as there are one or more variables with non-significant t-statistics, the procedure consists of removing the variable with the lowest non-significant t-statistic. The procedure is completed when all t-statistics are significant or when all variables have been removed. • Stepwise Regression (bidirectional)-The stepwise method combines the action mechanisms of both abovementioned methods.
Generally, it is a forward selection procedure which contains an extra mechanism that enables the removal of variables at any stage, similarly to backward selection. In this procedure, a variable which was added to the equation earlier can be removed later. The calculations made to add and remove variables are the same as in the forward and backward procedures. All models determined in the study were tested by means of leave-one-out cross-validation (LOOCV). During the LOOCV, RMSE_CV was calculated, which has the form RMSE_CV = sqrt((1/n) Σ_{i=1}^{n} (y_i − ŷ_{−i})²), where n is the total number of data points and ŷ_{−i} is the output of a model calculated after removing the pair (x_i, y_i). Results All models were evaluated using the prediction error RMSE_CV calculated during cross-validation (Tables 3 and 4). Figure 1 presents validation errors in relation to the parameters of the models. For Ridge models, this parameter is the lambda parameter λ ∈ [0, 20] with a step of 1; in the case of the Lasso regression, the parameter s ∈ [0, 1] has a step of 0.1. Artificial neural networks (MLP and RBF) were evaluated for the number of neurons in the hidden layer m ∈ {2, 3, . . . , 11}. By analysing the results for the models identified based on all variables (Table 3), it may be observed that the most accurate model for women was obtained using the RBF type neural network with eight neurons in the hidden layer (RMSE_CV = 4.31 mL·kg⁻¹·min⁻¹). The best model for men based on the whole set of variables generated an error of RMSE_CV = 5.50 mL·kg⁻¹·min⁻¹; it was calculated with Ridge regression for λ = 20. A similar observation may be made for the most accurate model for all data, where the Ridge regression (λ = 12) is also the one which generates the smallest error (RMSE_CV = 4.89 mL·kg⁻¹·min⁻¹). The following stage of the analysis involved variable selection methods to improve the predictive capacity of the presented models, determine optimal input sets, and consequently identify the factors which determine the VO2max prediction in the analysed group. The models determined by means of Lasso and stepwise regression were evaluated using LOOCV (Table 4). The application of variable selection methods improved the predictive capacity of the OLS model. The error level for the female group was RMSE_CV = 4.11 mL·kg⁻¹·min⁻¹ when the bidirectional method was used, while, for males, the model obtained with forward regression turned out to be more precise (RMSE_CV = 5.35 mL·kg⁻¹·min⁻¹). For all data, the forward and bidirectional methods determined the same input set, for which the error generated by the model amounted to RMSE_CV = 4.78 mL·kg⁻¹·min⁻¹. The identified input sets were used to calculate new neural networks (Table 4). Neural networks for the selected variables are characterised by a smaller prediction error than the networks identified based on all variables. The RBF models generate RMSE_CV = 4.07 mL·kg⁻¹·min⁻¹ for women, RMSE_CV = 5.30 mL·kg⁻¹·min⁻¹ for men and RMSE_CV = 4.80 mL·kg⁻¹·min⁻¹ for all participants. The models obtained for females and males thus demonstrate the best fit among all the analysed models, while the model for all data is slightly worse than the OLS model (forward, bidirectional). The equations for optimal linear models are presented in Table 5.
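The models above were fitted in R (lm, MASS::lm.ridge, elasticnet::enet and the RSNNS networks). Purely as an illustration of the LOOCV-based comparison, the following Python/scikit-learn sketch evaluates analogous model families on synthetic data; the data, the hyperparameter values, and the fact that scikit-learn parameterises the regularisation differently from the R packages are all assumptions of this sketch.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
# Synthetic stand-in for the study data: 308 subjects, 23 predictors, one outcome.
X = rng.normal(size=(308, 23))
y = 45.0 + 5.0 * X[:, 1] - 3.0 * X[:, 20] + rng.normal(scale=4.0, size=308)

models = {
    "OLS":   make_pipeline(StandardScaler(), LinearRegression()),
    "Ridge": make_pipeline(StandardScaler(), Ridge(alpha=10.0)),
    "Lasso": make_pipeline(StandardScaler(), Lasso(alpha=0.1)),
    "MLP":   make_pipeline(StandardScaler(),
                           MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                                        random_state=0)),
}

loo = LeaveOneOut()   # 308 fits per model; slow for the MLP, but mirrors RMSE_CV above
for name, model in models.items():
    mse = -cross_val_score(model, X, y, cv=loo, scoring="neg_mean_squared_error")
    print(f"{name:5s}  RMSE_CV = {np.sqrt(mse.mean()):.2f} mL·kg^-1·min^-1")
```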
All-all participants, F-female, M-male, Lasso-least absolute shrinkage and selection operator regression, OLS-ordinary least squares regression, MLP-multilayer perceptron, RBF-artificial neural network with radial basis function. Table 5. VO2max linear predictive equations. Discussion The paper presents mathematical models for the prediction of VO2max based on a 20 m SRT and anthropometric parameters. All models were developed and cross-validated using a sample of 308 healthy young people (aged 19-27). The models obtained are classified as "maximal models" [49], and that is why the errors of this group of models were compared with errors presented by other authors. The majority of papers with maximal VO2max predictive models present a common model for females and males [15,30,31]. An analysis of the errors of predictive models in which sex was an input variable revealed that the error RMSE_CV = 4.78 mL·kg⁻¹·min⁻¹ is a result which does not deviate from the errors presented in other papers (Table 6). The calculated model is more accurate than the models proposed by Mahar [30] and Silva [15]. It is, however, less accurate than the models proposed in the papers by Akay [31,50]. The optimal model for the whole group used the following factors for prediction: sex (x0), distance (20 m SRT) (x1), body height (x7) and content of adipose tissue (x20). In young males and females, regular physical exercise definitely improves CRF by increasing VO2max and decreasing body fat percentage, leading to a better quality of life [51][52][53][54]. The VO2max level varies significantly among individuals and mainly depends on genetic aspects, sex, age, anthropometric properties, health, lifestyle and training status [51,52,[55][56][57][58][59]. Reference values may change over time and should be regularly updated/validated [60]. Such data may be found in papers analysing the results of cardiac stress tests carried out on large groups of healthy people [8,60,61]. Typical values of VO2max in young healthy male students amount to about 50 mL·kg⁻¹·min⁻¹, and in women to about 40 mL·kg⁻¹·min⁻¹ [62]. CRF in terms of aerobic capacity is affected by the composition of the body. Low CRF in young adults with increased body fat could be a factor in the development of cardiovascular comorbidities later in middle age and old age [63]. Bioimpedance is an alternative method of estimating the percentage of body fat; when compared to DXA (dual-energy X-ray absorptiometry), the gold standard method, it shows a high level of agreement [64]. Its simple use, lack of radiation and relatively low cost suggest that bioelectric impedance is a feasible analysis for body fat measurement, especially in large populations [65]. Besides a common model for females and males, this paper determines separate models for each group. It may be observed that the models for females are characterised by a significantly smaller error as compared to those for men. The RBF neural network with eight neurons in the hidden layer turned out to be the most accurate; it generates an error of RMSE_CV = 4.07 mL·kg⁻¹·min⁻¹. Bandyopadhyay [66] presented similar studies in his paper. He described a multiple regression model whose prediction error amounted to 1.27 mL·kg⁻¹·min⁻¹ (Table 6). The model used speed and body height as predictors. Other studies concerning prediction among females were published by Chatterjee [23], who calculated a model generating an error of 0.53 mL·kg⁻¹·min⁻¹.
The model used only the maximum speed. When the errors obtained in this paper are compared with the errors obtained by those authors, it may be concluded that their errors were significantly smaller. It should be emphasized, however, that the population analysed in the present study was much larger and more diversified in terms of the VO2max level (44.9 ± 7.0 mL·kg⁻¹·min⁻¹). Moreover, cross-validation was used in this paper for evaluation, while in the papers discussed the model quality was evaluated by means of standard errors. That is why direct comparisons are impossible. The model calculated for women has the simplest structure and includes: distance (20 m SRT) (x1), body height (x7) and body fat (x20). The model contains the same variables as the model for all participants except for sex, which is constant within this group. Similarly to the female group, the RBF network turned out to be the best model for males, although in the male group the optimal network has only two neurons in the hidden layer. The network generates an error level of RMSE_CV = 5.30 mL·kg⁻¹·min⁻¹. Comparing the obtained results with other papers where models were calculated only for males, it may be observed that the parameter value does not deviate significantly from the published results (Table 6). The optimal model for males calculated in this study generated greater errors than the models presented by Machado and Denadai [67] (4.10 mL·kg⁻¹·min⁻¹) and Bandyopadhyay [68] (1.41 mL·kg⁻¹·min⁻¹). The error obtained is still smaller than the error generated using the Costa model [69] (7.20 mL·kg⁻¹·min⁻¹). When comparing the errors of the calculated models, it should be emphasized that a direct comparison of the errors obtained in different studies is valid only when the same maximal test is used and when a similar study sample is employed [70]. Therefore, it seems that the results obtained may only be compared with models which use the 20 m SRT, are developed based on a population of healthy people aged 19-27 years, and are cross-validated. Cross-validation does not evaluate the fitting error but the generalisation error. The model determined for males consisted of the following variables: distance (20 m SRT) (x1), waist circumference (NIH protocol) (x9), WHR (NIH protocol) (x14), FFMI (x17) and BSA (x19). The obtained set of predictors is not incidental, and the relevance of each variable to VO2max is reflected in the literature. Waist circumference and WHR are popular indicators used for the evaluation of cardiovascular diseases [71]. Waist circumference is also a predictor of visceral fat and an indicator of central obesity [72]. There remains no uniformly accepted measurement protocol, resulting in a variety of techniques employed throughout the published literature [73,74]. The most commonly used are four measurements of waist circumference defined by specific anatomic landmarks: (1) immediately below the lowest ribs, (2) at the narrowest waist, (3) at the midpoint between the lowest rib and the iliac crest, and (4) immediately above the iliac crest [75,76]. Measurements made at the umbilicus are also commonly used in clinical and research settings [74]. Besides DXA and bioimpedance, different indicators based on anthropometric measurements are used for the evaluation of body fat content. The indicator most often used in obesity studies is BMI [39,71,77]. However, it presents some limitations.
It neither evaluates the content of adipose tissue and its distribution in the body, nor does it differentiate body fat content depending on age, sex and ethnic origin [71,78]. The identification of adipose tissue location is necessary to detect visceral (central) obesity, which increases the risk of cardiovascular diseases [79]. That is why supplementing the classical concept of BMI with qualitative information may help to create reference values for FFMI for a given age category [80]. BSA, a measure describing body size, is commonly used in medicine as a biometric unit [81,82]. The use of a predictive model makes it possible to determine the level of CRF without using expensive equipment and specialized research teams. The prediction of the VO2max parameter may be useful as an element of monitoring the health and physical performance of young healthy people (students). OLS-ordinary least squares regression, MLP-multilayer perceptron, SVM-support vector machines, RBF-artificial neural network with radial basis function. The strongest points of the presented paper are: • a large research group representing the population, • VO2max measurements made using direct methods, • the use of a large set of variables to determine the optimal predictors for VO2max estimation, • and obtaining a relatively small VO2max estimation error. The limitations of the study are related to using the models in practice: VO2max estimation with the proposed models should be performed for healthy people of the same age range as the research group.
Full-Vectorial Light Propagation Simulation of Optimized Beams in Scattering Media Volumetric scattering prevents imaging modalities in biomedical optics from imaging deep inside tissue. The optimization of the incident wavefront has the potential to improve these imaging modalities. To investigate the optimization and light propagation of such beams inside scattering media rigorously, full-vectorial simulations based on solutions of Maxwell's equations are necessary. In this publication, we present a versatile two-step beam synthesis method to efficiently simulate the scanning and phase optimization of a focused beam inside a static scattering medium. We present four different approaches to the phase optimization of the energy density and the absolute value of the Poynting vector. We find that these quantities have two regions with different, almost exponential decays over depth for a non-optimized beam. Optimization by conjugating the phase of the projected electric field in various directions at the focus shows an improvement below a certain penetration depth. Seeking global solutions to the optimization problems reveals an even better enhancement in the energy density and the absolute value of the Poynting vector in the focus. For Poynting vector optimization, the differences between the presented optimization approaches are more significant than for the energy density. With the presented method, it is possible to efficiently simulate different imaging methods improved by wavefront shaping to investigate their possible penetration depths. Introduction The scattering of light is one of the greatest obstacles preventing optical imaging modalities from imaging deep inside scattering media such as biological tissue. To reduce the amount of unwanted scattered light in the resulting image, different methods have been developed, such as confocal scanning microscopy and optical coherence tomography. Most of these techniques try to effectively reject the multiply scattered light. However, the resulting sole reliance on ballistic light for imaging limits the penetration depth according to the Beer-Lambert law. Another way to reduce the scattering of light is optical clearing, which modifies the optical properties of the specimen to an almost homogeneous refractive index distribution [1]. This approach is impractical in many applications. Common approaches which also use scattered light for imaging in the optical spectrum, such as diffuse tomography [2], have a reduced resolution compared to common microscopy techniques [3]. With the advent of spatial light modulators, the manipulation of the incident light to regain a focus inside or behind a scattering medium became possible [4,5]. Spatial light modulators are high-resolution modulators that use liquid crystal display technology, deformable mirrors, or microelectromechanical systems [6][7][8]. All of these technologies have successfully been applied to correct aberrations or to focus behind a scattering medium [9][10][11]. Wavefront shaping methods have since been shown to be able to improve imaging in multiply scattering media with imaging resolutions similar to state-of-the-art microscopy techniques [12]. Frequently used methods to optimize the incident wavefront are via feedback [13], phase conjugation [14], and using the transmission matrix [15,16].
To find the optimal wavefront, one needs access to the electromagnetic field at the point of interest, which is feasible outside the scattering medium but inside the medium requires, for example, a guide star [17]. If the refractive index distribution of a specimen is known, it is theoretically possible to calculate the scattering of the incident light and optimize it via simulation. To investigate how the electromagnetic field behaves when optimizing the incident wavefront, full-vectorial electromagnetic field simulations are an invaluable tool. Common methods to model light propagation inside diffuse media based on the radiative transfer equation are not applicable in this case because they do not describe interference effects [18]. Therefore, methods to investigate light optimization by manipulating the incident wavefront must describe interference effects. Often, these types of simulations are carried out in a waveguide geometry [19] or in two-dimensional scattering media, where only out-of-plane polarization is considered [20,21]. Other simulation approaches for wavefront shaping scenarios have used approximations of wave propagation, such as the beam propagation method [22,23]. To cover all aspects of the electromagnetic field, such as polarization and phase, fullvectorial electromagnetic field simulations based on Maxwell's equations are needed, which is a computationally demanding task for a large medium. In this publication, we describe in Section 2 how to efficiently simulate the full-vectorial light propagation of focused beams inside static scattering media by a two-step beam synthesis method. This approach makes it possible to optimize the incident beam to enhance the energy density or the absolute value of the Poynting vector in the focus. Additionally, average values of the mentioned quantities can be obtained with our proposed method. Results for a focused beam scanned into a scattering medium and optimized via phase modulation are presented in Section 3. We conclude this study with Section 4, where we also discuss the main results. Two-Step Beam Synthesis Method We use the Debye-Wolf theory [24,25] for an aplanatic optical system to simulate the scanning and optimization of a monochromatic focused beam inside a scattering static medium. A sketch of the focusing system is depicted in Figure 1. As we are interested in the simulation of a spatial light modulator in the back focal plane of the imaging system, it is convenient to model the incident beams with their angular spectrum representation. The incident geometrical ray with an electric field E inc (s) propagating parallel to the optical axis (z-axis) refracts at the Gaussian reference sphere, where s = (s x , s y , s z ) denotes the normalized wavevector k/k, which is fully characterized by the two coordinates s x and s y in k-space. The wavevector is denoted by k and the absolute value of the wavevector by k = 2πn m /λ with the vacuum wavelength λ and the refractive index of the surrounding medium n m . For a monochromatic beam, the third component of the normalized wavevector can be calculated by s z = (1 − s 2 x − s 2 y ) 1/2 . The refracted ray has an electric field E ∞ (s), which is the electrical field vector of a plane wave propagating in the direction s to the focus region. To obtain the electric field distribution E(r) in the focal region, we must integrate over the whole numerical aperture, each point in k-space. 
This results in the representation of the focused field by the so-called Debye-Wolf integral (Equation (1)) [24], where r denotes the position vector and f is the focal length of the imaging system. To obtain E_∞(s) from E_inc(s), we follow the approach presented in [26]. The incident electric field for a specific ray is decomposed into a component that is perpendicular to the meridional plane, E_inc(s) · n_φ, and a component that is parallel to the meridional plane, E_inc(s) · n_ρ. Figure 1. Illustration of the focusing system with the optical axis along the z direction. The incident light is characterized in k-space by the normalized coordinates s_x and s_y with its angular spectrum E_inc(s). The in- and out-of-plane polarizations of E_inc(s) with respect to the meridional plane (the plane rotated by φ around the optical axis) can be calculated with the normal vectors n_φ and n_ρ. The aplanatic lens refracts the incident light at the Gaussian reference sphere. The resulting electric field vector E_∞(s) for the plane wave propagating in direction s is calculated with the normal vectors n_φ and n_θ. The refracted light then propagates at angle θ relative to the optical axis to the focus region. The unit vector n_φ = (−sin φ, cos φ, 0) is perpendicular to the meridional plane and n_ρ = (cos φ, sin φ, 0) is the corresponding parallel unit vector. The angle φ describes the rotation of the meridional plane around the z-axis. The electric field components are also perpendicular to the propagation direction, which is parallel to the optical axis. After the geometrical ray is refracted at the Gaussian reference sphere, the propagation direction changes to s. The perpendicular part of the electric field vector remains the same after the Gaussian reference sphere, but the parallel part has to change as the electric field has to be perpendicular to s. Therefore, the electric field vector for a plane wave in the angular spectrum representation can be calculated by Equation (2) [26]. The unit vector n_θ = (cos θ cos φ, cos θ sin φ, −sin θ) is perpendicular to the propagation direction s, and the factor √s_z in Equation (2) is due to energy conservation. The angle θ describes the angle between the refracted ray direction s and the z-axis, as shown in Figure 1. We further assume that the aplanatic optical system has a polarization-independent transmission of unity. Therefore, using Equation (1), we can write the electric field distribution at the focus in terms of the angular spectrum of the incident field generated by a fully illuminated aperture as Equation (3), where we omit the constant factors outside the integral. For a given incident field distribution in k-space, Equation (3) can be calculated numerically to obtain the field distribution around the focus in free space. The field incident on the focus region can be used as the input for a numerical Maxwell's equations solver, such as the FDTD method [27], to calculate the scattering of a beam incident on a turbid medium. Scanning the incident beam to many positions relative to the scattering medium is computationally expensive, as Maxwell's equations have to be solved for each position. In this study, we use a two-step beam synthesis method, which we have already applied to beam propagation in two-dimensional scattering media [21]. We first discretize the double integral in Equation (3) as a sum over equidistant points in k-space (Equation (4)), where Δs_x and Δs_y denote the spacing between the equidistant points in k-space.
Each term in the sum corresponds to a plane wave propagating in the direction s_i,j to the focus region, with a polarization vector calculated by Equation (2). For a given scattering medium, Maxwell's equations are solved for each plane wave with s_i,j, which results in a set of N near-field solutions. In this publication, we are interested in monochromatic beams that are scattered by a dielectric medium. Therefore, we solve Maxwell's equations on a grid with a modified Born series approach similar to the one described in [28]. Because we only examine phase modulation in this study, we restrict ourselves to incident fields with the polarization E_inc = (E_0, 0, 0) and constant amplitude E_0 in the back focal plane. The calculation of the set of N near fields is the most time-consuming part, with the individual plane wave simulations being the bottleneck. Discretization of the double integral in Equation (3) leads to a repetition of the fields in the x- and y-directions. To avoid aliasing, we have to consider the Whittaker-Shannon sampling theorem [29]. Therefore, the spacing between neighboring k-vectors has to satisfy Equation (5), where L_i is the lateral height and width of the simulation region. In the second step of the two-step beam synthesis method, all N near fields are added according to the angular spectrum in k-space to numerically approximate the integral in Equation (3). Thus, the plane wave part in Equation (4) is replaced by the field distribution over the simulation grid E_i,j(r) to obtain the scattered electric field distribution inside the medium (Equation (6)). The summation is also performed for the magnetic field distribution and, due to the linearity of Maxwell's equations, results in the corresponding total magnetic field H(r). The magnetic field has to be computed in order to calculate the energy density distribution w(r) and the complex Poynting vector S(r) inside the medium. Scanning the incident beam by Δr only requires a phase shift of exp(−ik s · Δr) for each plane wave in the angular spectrum. Therefore, the term exp(−ik s_i,j · Δr) needs to be added under the double sum in Equation (6). In addition, the angular spectrum can also be altered to optimize the focused incident beam so as to increase the energy density or the Poynting vector at a specific position. In this publication, we consider only phase modulation of the angular spectrum, although polarization and amplitude modulation are also possible with this method. For each optimization channel n, a phase factor exp(−iφ_opt,n) can be applied to spatially modulate the phase of the incident beam in the back focal plane. There are different ways to optimize a certain observable, such as the energy density or the Poynting vector, with the knowledge of the electromagnetic field distribution from the plane wave set, but, in general, this is a global optimization problem. Phase Optimization Techniques The optimization of the incident electromagnetic field to obtain, for example, a focus inside a scattering medium requires either a feedback signal [13] or information about the electromagnetic field [30] at the region of interest. The total number of optimization channels N_ch is usually given by the experimental setup. Often, a spatial light modulator is used, where each pixel or a group of pixels equals one optimization channel. In a full-vectorial light simulation, it is possible to access the electromagnetic field anywhere in the simulation region. Therefore, the information for optimization is easily accessible.
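Before turning to the optimization schemes themselves, the following NumPy sketch illustrates the second, inexpensive step of the beam synthesis method described above: the stored per-plane-wave near fields are superposed with the scan phase factor exp(−ik s·Δr). The array shapes and function name are assumptions of this sketch, and the expensive first step (one Maxwell solve per plane wave) is not shown.

```python
import numpy as np

def synthesize_field(E_pw, s_dirs, weights, k, delta_r):
    """
    Second step of the two-step beam synthesis: superpose precomputed
    plane-wave near fields, shifting the focus by delta_r.

    E_pw    : (N, 3, nx, ny, nz) complex near fields, one per incident plane wave
    s_dirs  : (N, 3) normalized propagation directions s_ij
    weights : (N,) complex angular-spectrum weights (amplitude/phase per k-point)
    k       : wavenumber 2*pi*n_m/lambda
    delta_r : (3,) scan shift of the focus position
    """
    scan_phase = np.exp(-1j * k * (s_dirs @ np.asarray(delta_r)))   # (N,)
    coeff = weights * scan_phase
    # Linear superposition of the stored solutions of Maxwell's equations.
    return np.tensordot(coeff, E_pw, axes=(0, 0))                   # (3, nx, ny, nz)
```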
In general, the incident light can be optimized with respect to any objective function. Here, we focus on the energy density w(r) and the absolute value of the real part of the Poynting vector |Re(S(r))|, where Re(·) denotes the real part. The energy density is calculated as w(r) = 1/4 (ε(r) E(r)·E*(r) + µ0 H(r)·H*(r)) and the complex Poynting vector as S(r) = 1/2 E(r) × H*(r). Each channel has an electromagnetic field denoted by Ẽ_n(r), H̃_n(r), which can be calculated by summing up the electromagnetic fields E_i,j(r) and H_i,j(r) over the corresponding region of the n-th channel in k-space. One of the most common approaches to optimizing the energy density or intensity of the electromagnetic field inside or behind a scattering medium is to use phase conjugation. This technique results in the maximum constructive interference of the electric fields, but only if light can be modeled as a scalar field or all incident channels have the same polarization state at the focus position. For phase conjugation, looking only at the x component of the electric field Ẽ_n,x(r_foc) at the optimization position r_foc, we apply a phase change of exp(−i arg{Ẽ_n,x(r_foc)}) to each channel. This phase shift is also applied to the magnetic fields. The energy density and the Poynting vector are then calculated with these fields and are further denoted by "x-pc". Scattering randomizes the phase and polarization state of light for each channel. Thus, it can be expected that phase conjugation with respect to a random direction n_rand inside strongly scattering media can also enhance the energy density or the absolute value of the Poynting vector. Phase conjugation along a random direction is obtained by projecting the electric field at r_foc of each optimization channel onto n_rand and applying the phase shift exp(−i arg{Ẽ_n(r_foc) · n_rand}) to the electromagnetic field of each channel. We further denote the optimized quantities obtained by this method as "n-rand-pc". It is known that the electric field vector can only oscillate in a plane [31], even inside a strongly scattering medium. Therefore, the optimized electric field vector also oscillates in a plane. One option for optimizing the incident field is to find the best direction n_opti(φ, θ) = (cos φ sin θ, sin φ sin θ, cos θ) onto which to project the electric fields of each channel in order to find the optimal phase shifts. To find the optimal direction, we must solve the optimization problem given in Equation (7) for the energy density w_foc = w_t(r_foc) at the focus r_foc. Here, it should be noted that we project the electric field along the optimal direction n_opti to find the optimal angles φ and θ; this could also be done with the magnetic field. In the same way, we can write the optimization problem for the average value of the Poynting vector Re(S(r_foc)) as Equation (8). Solving Equations (7) and (8) involves only the variables φ and θ. We further denote this optimization scheme as "n-opti". The maximal number of variables that can be optimized is the number of channels N_ch, if each channel can be phase modulated. Therefore, the optimization problem for the energy density at the focus point can be written as Equation (9), and the optimization problem for the average value of the Poynting vector at the focus position as Equation (10). In further sections, we denote quantities optimized with all the possible N_ch channels as "g-opti". To find optimal solutions for Equations (7)-(10), we use the Julia packages Optim.jl [32] and BlackBoxOptim.jl [33].
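As an illustration of the simplest of these schemes, the sketch below implements the "x-pc" phase conjugation described above for a set of precomputed channel fields; the array layout and names are assumptions, and the global "g-opti" search over all channel phases is not shown.

```python
import numpy as np

def xpc_phases(E_ch_foc):
    """
    "x-pc" phase-conjugation phases for each optimization channel.
    E_ch_foc : (N_ch, 3) complex electric field of each channel at the focus r_foc.
    Each channel is later multiplied by exp(-1j * phi_n), so that the
    x components of all channels add up in phase at the focus.
    """
    return np.angle(E_ch_foc[:, 0])                  # arg{E_n,x(r_foc)}

def superpose_with_phases(E_ch, H_ch, phases):
    """Apply the per-channel phase factors to E and H and sum over channels."""
    factor = np.exp(-1j * phases).reshape(-1, *([1] * (E_ch.ndim - 1)))
    return (factor * E_ch).sum(axis=0), (factor * H_ch).sum(axis=0)
```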
Scattering Medium and System Specifications In Figure 2, the turbid medium used in this study is shown. The scattering medium is built up of cubic scatterers randomly positioned in a host medium with refractive index n_m = 1. We chose cubic scatterers with the same orientation because they are easy to position randomly at a large concentration without overlap. Each cubic scatterer has a side length of 0.7λ and a real refractive index of 1.5. Hence, the medium is non-absorbing. The scatterers are randomly distributed inside a rectangular volume with side lengths L_x = 41.7λ, L_y = 41.7λ, and L_z = 20λ. The volume fraction occupied inside the rectangular volume is f_V = 0.3. The concentration and refractive index of the scatterers were determined by test simulations in such a way that the medium scatters relatively strongly. This medium is similar, for example, to a thin powder layer of silica particles of similar size and concentration. The refractive index distribution of the randomly positioned cubic scatterers was transferred onto a simulation grid with a resolution of λ/6. To quantify the scattering mean free path of the turbid medium, we performed a simulation in which a plane wave was scattered by the turbid medium. The plane wave propagated in the z-direction, and we applied periodic boundary conditions in the x- and y-directions to avoid artifacts from the finite size of the medium. Furthermore, the polarization of the incident plane wave was in the x-direction. The coherent intensity, often defined as |⟨E_x⟩_x,y(z)|², can be used as an estimator for the scattering mean free path ℓ [35]. The averaging of E_x(r) was performed over the x- and y-directions of the simulation volume. The normalized coherent intensity |⟨E_x⟩_x,y(z)|²/|E_0|² is shown in Figure 2b, where |E_0|² is the absolute square of the amplitude of the incident plane wave. A different approach to obtaining an estimate of ℓ is to mimic a collimated transmission experiment by performing the plane wave simulation, as described before, for different widths L_z of the medium. Calculating the two-dimensional Fourier transform of E_x at a transverse plane behind the medium gives the far field. The normalized intensity from the collimated transmission simulations is shown with orange markers in Figure 2b. Both approaches give similar estimates of the coherent (unscattered) intensity. A fit of an exponential function gives a scattering mean free path of ℓ = 1.05λ. The scattering mean free path is often estimated from the scattering efficiency of a single scatterer and the concentration of the scatterers [36]. If we assume a medium of spherical scatterers with the same volume as the single scatterer used in this study, we find that this approach would give an estimate of ℓ = 0.62λ, which is a ≈40% relative difference from the numerical approaches. A concentration of f_V = 0.1 would give a relative difference of ≈4.6% between the different approaches. It is known that this linear dependency between the inverse of the scattering mean free path and the scatterer concentration is only valid for small concentrations. Therefore, we used the decay of the coherent intensity and the collimated transmission simulations to estimate the scattering mean free path. The simulated incident focused beam is defined by the imaging system described in the previous section. In this study, we choose a numerical aperture of NA = 0.45, which results in a maximum angle of θ_max = 26.74° relative to the optical axis for the allowed k-vectors.
The simulated incident focused beam is defined by the imaging system described in the previous section. In this study, we choose a numerical aperture of NA = 0.45, which results in a maximum angle of θ_max = 26.74° relative to the optical axis for the allowed k-vectors. The numerical aperture limits the area in k-space as depicted in Figure 1. In order to use the two-step beam synthesis method, we have to simulate each k-vector in a separate simulation; therefore, we have to sample the aperture. We choose an equidistant spacing of Δs_x = Δs_y = 0.025 according to Equation (5) to avoid artifacts coming from the repetition of the incident beams in the lateral direction. The sampling of the aperture in normalized k-space is shown in Figure 3. In total, we simulated N = 1009 plane waves for the uniform illumination of the back focal plane and a polarization of E_inc = (E_0, 0, 0).

Results
The described vectorial two-step beam synthesis method is capable of simulating the scanning and optimization of arbitrary electromagnetic beams inside turbid media. In this section, we present results obtained with this simulation approach to scan inside a scattering medium and visualize the energy density distribution and the absolute value of the Poynting vector. Afterward, we investigate the impact of phase optimization on these quantities for different numbers of optimization channels.

Scattering of a Focused Beam by the Turbid Medium
Scanning a focused beam into a scattering medium results in the redistribution of the incident light. If the scattering mean free path is much smaller than the scanning depth, the focus completely deteriorates, which makes the focused beam unusable for imaging modalities such as confocal microscopy. To illustrate the scattering of a focused beam inside a highly scattering slab, we scanned the focus to a depth of z = 15λ inside the scattering medium. The normalized energy density distribution of the beam in vacuum, w(r)/w_0, is shown in Figure 4a. A clear focus at the position r_foc = (0, 0, 15λ) with w(r)/w_0 = 1 can be seen. Here, w_0 denotes the energy density at the focus position in vacuum. The full width at half maximum of the focus in vacuum is ≈8.5λ in the axial direction and ≈1.1λ in the lateral direction. Scanning the focused beam to the same position in the presence of the scattering medium results in the energy density distribution shown in Figure 4b. Scattering reduces the energy density inside the medium because less light enters the scattering medium and the energy is distributed over a larger volume. The maximum normalized energy density inside the scattering medium is 0.35 and it is located about a distance of λ from the entrance surface. At the original focus, the energy density drops by around two orders of magnitude to w(r_foc)/w_0 = 0.003. The Poynting vector can be investigated in the same way. In Figure 5a, the absolute value of the Poynting vector is shown, normalized by its peak value in vacuum at the focus position, further denoted by |Re(S_0)|. The largest value is found in the focus, as expected. The full width at half maximum of the focus for |Re(S(r))|/|Re(S_0)| is ≈8.5λ in the axial direction and ≈1.1λ in the lateral direction. These are the same values as for the focus size of the energy density. Introducing the scattering medium leads to a redirection of the flow of light, as can be seen in Figure 5b. Furthermore, the magnitude of the Poynting vector is additionally reduced by the backscattered light, which produces a counterpropagating flow that lowers the overall Poynting vector.
Phase Optimization to Enhance the Focus Energy Density
By scanning a focused beam into a scattering medium, the focus inevitably deteriorates, as shown in the previous Section 3.1. Phase optimization can lead to an increase in the energy density at the desired focus location. In Figure 6, two simulations of phase-optimized beams are shown. The non-optimized focused light beams were first scanned to a depth of z = 15λ and afterward optimized with N_ch = 137 (Figure 6a) and N_ch = 1009 (Figure 6b) equally sized channels. Equally sized channels refers to the channels with the largest rectangular area in k-space that is modulated. There are additional channels with smaller areas at the edge of the circular region in k-space that are also modulated, but we do not count them here as full optimization channels. For these examples, an optimal solution according to Equation (9) was sought. The phase distributions applied in k-space are shown in the left plots and the resulting energy density distributions of the optimized incident beams are shown in the right plots. After phase optimization with N_ch = 137, a distinct speckle-sized focus is regained at z = 15λ. It can also be noticed that in the volume close to the incident surface, the energy density has a larger value than in the desired focus. Optimization with N_ch = 1009 also results in a speckle-sized focus, but with a greater value of the energy density in the focus than in any other region of the scattering medium. The reason for this is that, as the number of channels increases, the electromagnetic fields interfere constructively only in the focus, while in other regions the field forms a random interference pattern. The axial and lateral full width at half maximum of the focus is ≈0.6λ for both N_ch = 137 and N_ch = 1009. To obtain the quantitative behavior of the energy density in the focus, we scanned the non-optimized and optimized beams into the medium and recorded the energy density in the focus over depth. Different optimization schemes and different numbers of optimization channels were used to obtain an enhancement of the energy density in the focus. Here, we define the enhancement as the quotient of the optimized and the non-optimized quantity. The non-optimized quantity is denoted by "scan". In Figure 7 (top), the averaged energy density ⟨w_foc(z)⟩ over depth, normalized by the energy density in the focus in vacuum w_0, is shown for non-optimized ("scan") and optimized ("x-pc", "n-rand-pc", "n-opti", and "g-opti") beams. The results from the optimized beams are shown with different line styles, and the colors denote the different numbers of optimization channels. The averaging was performed over 49 lateral positions within the area between −6λ and 6λ in the x- and y-directions, where neighboring lateral positions were separated by at least a distance of 2λ. By using only a small, limited lateral area, we aimed to avoid boundary effects due to the finite size of the medium. The enhancement over depth is shown in Figure 7 (bottom). The shaded areas show the expected enhancement, where the maximum value for a specific N_ch is obtained from the scalar theory [4] via the formula η(N_ch) = (π/4)(N_ch − 1) + 1. The minimum value is obtained as η(N_ch)/3. This can be justified by the consideration that the polarization of a general electromagnetic field has three spatial degrees of freedom and these orthogonal polarization states cannot interfere.
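The scalar-theory bounds quoted here are easy to reproduce; a short Python check for the channel numbers used in this study (the formula is taken directly from the text above):

```python
import numpy as np

def eta(n_ch):
    """Scalar-theory enhancement expected for n_ch phase-only channels."""
    return np.pi / 4 * (n_ch - 1) + 1

for n_ch in (9, 137, 1009):
    print(f"N_ch = {n_ch:4d}: expected enhancement between "
          f"{eta(n_ch) / 3:.1f} (vectorial lower bound) and {eta(n_ch):.1f}")
```

For N_ch = 9 this reproduces the bounds of 2.4 and 7.3 referred to in the following paragraph.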
For the scanned non-optimized beam, the normalized energy density in the focus first increases at shallow depths, up to a distance of 2λ inside the medium. This can be explained by backscattered light from larger depths enhancing the focus energy density on average. For depths z > 2λ, ⟨w_foc(z)⟩/w_0 decreases exponentially with a decay constant of ≈1/(1.5λ), which is slower than predicted by the Beer-Lambert law. This deviation can be explained by the presence of scattered light in the focus. For depths larger than 10λ, the behavior of the energy density changes to an exponential decay with ≈1/(4.8λ). This phenomenon is also present in two-dimensional, strongly scattering media [21]. Behind the medium, ⟨w_foc(z)⟩/w_0 drops abruptly due to the absence of backscattered light from larger depths. In Figure 7 (top), we see that applying the conjugate phase of the electric field of each channel at the focus in the x-direction ("x-pc") or in a random direction ("n-rand-pc") leads to no enhancement, or even a decrease, of the energy density in the focus for depths up to 2λ. As the electric field of the incident beam is predominantly polarized in the x-direction, conjugating the phase in this direction yields almost no enhancement before the scattering medium and at shallow depths, because the electromagnetic fields of all channels are already in phase at the focus. For larger depths, modulating the incident beam with the two different phase conjugation approaches also leads to a significant enhancement. The approaches "n-opti" and "g-opti", which utilize global optimization algorithms to find an optimal solution for the phase patterns, show similar behavior before the scattering medium and at shallow depths, with small or no enhancement compared to the non-optimized beam. For larger depths, the enhancement is always greater for these types of optimization approaches. Furthermore, optimization according to the "g-opti" approach always gives a greater energy density in the focus than "n-opti", but one has to keep in mind that "g-opti" uses N_ch variables in the optimization problem, whereas "n-opti" only uses two. The enhancement with all optimization approaches shows a steep increase up to a depth of z = 8λ. This can be explained by the fact that the focused incident beam is itself optimized for vacuum and deteriorates when scanned into the scattering medium. Beyond a depth of z = 8λ, the focus is practically lost, and therefore the enhancements for different numbers of incident channels stagnate or increase with a reduced slope. For the energy density enhancement, only the results for N_ch = 9 reach the theoretically expected enhancement between η(9)/3 = 2.4 and η(9) = 7.3. For larger numbers of optimization channels, the different electromagnetic fields might not be completely randomized for the scanned non-optimized beam, and the scalar theory invoked here might not be fully applicable to vectorial light propagation.

Phase Optimization to Enhance the Absolute Value of the Focus Poynting Vector
In addition, it is often of interest to investigate the flow of light described by the Poynting vector and how it can be increased by phase modulation of the incident beam. As seen in Figure 5, a scattering medium reduces the absolute value of the Poynting vector in the focus compared to its value in vacuum, S_0. Phase optimization according to Equation (10) enhances the absolute value of the Poynting vector at the desired focus.
This can be seen for a scanning position of z = 15λ in Figure 8 for two different numbers of optimization channels. We find that in the case of N_ch = 137, a distinct focus is regained at the desired focus position, although |Re(S(r_foc))|/|Re(S_0)| can take greater values in the scattering medium at shallow depths. For N_ch = 1009, the maximum absolute value of the Poynting vector is at the desired focus and is also larger than for the simulation with N_ch = 137. In both cases, the full width at half maximum is ≈0.7λ in the lateral direction and ≈0.3λ in the axial direction. Examining the average absolute value of the Poynting vector for a non-optimized beam over the scanning depth shows a decrease with penetration depth, as can be seen in Figure 9. Averaging was performed in the same way as for the energy density. Furthermore, the absolute value of the Poynting vector does not show an increase at shallow depths, in contrast to the energy density in the previous section. This can be explained by the counter-propagating waves from the scattering medium, which lead to an average decrease in the Poynting vector. We also find two regions within the medium with different exponential decays, first with a decay constant of ≈1/(1.4λ) until z = 8λ and then with ≈1/(4.5λ) until the end of the medium. Both exponential behaviors have a slightly lower decay rate than the Beer-Lambert law. Phase optimization to enhance the absolute value of the Poynting vector was performed for three different numbers of optimization channels, N_ch ∈ {9, 137, 1009}. The four different optimization approaches described in Section 2.2 were used. The results are shown in Figure 9 in different colors and line styles. For scanning positions in front of the medium, all optimization techniques besides "g-opti" deteriorate the focus compared to the non-optimized beam and result in a lower value of ⟨|Re(S_foc(z))|⟩/|Re(S_0)|. For scanning depths greater than 4λ, all optimization techniques lead to an increase in the absolute value of the Poynting vector in the focus. At scanning positions inside the medium, we find that the difference between the global optimization technique "g-opti" and the other approaches increases with the number of optimization channels. Behind the medium, the "n-opti" and "g-opti" approaches lead to similar values of the average absolute value of the Poynting vector in the focus. The enhancement increases with the scanning depth for all approaches and is shown in the bottom plot of Figure 9.

Discussion and Conclusions
In this study, we extended the two-step beam synthesis approach already used to investigate light propagation and optimization in two-dimensional media [21,37] to model full-vectorial light propagation and optimization in three dimensions. The presented method is split into two main steps. First, a set of plane wave near-field solutions for a static medium is obtained by numerically solving Maxwell's equations, where the incident polarization and the propagation direction of the plane waves are determined through the Debye-Wolf theory. In the second step, the near fields of arbitrary incident beams scattered by the medium are computed by adding the plane wave near-field solutions according to the angular spectrum of the incident beam. The separation of the calculation of the plane wave solutions and the synthesis of a complex beam makes it possible to reuse the plane wave solutions.
Thus, with this method, the scattering of different incident beams can be calculated efficiently for the static medium under consideration. This includes scanning and optimizing the incident beam. Common methods would have to repeat the complete numerical calculation for each change of the incident beam. We used this approach to simulate the scanning and phase optimization of a focused beam with a numerical aperture of NA = 0.45 inside a strongly scattering medium with a scattering mean free path of ℓ_s = 1.05λ and a thickness of 20λ. We examined the energy density and the absolute value of the Poynting vector at the intended focus position over depths ranging from −2.5λ to 22λ, where 0λ was the location of the front surface of the scattering medium. The optimization of the incident beam was investigated for different numbers of optimization channels N_ch ∈ {9, 137, 1009} and different optimization schemes. To find an optimal phase pattern to regain a focus, we used the knowledge of the electromagnetic field at the intended focus position. The energy density and the absolute value of the Poynting vector in the focus showed two regions over depth, with first a steep and afterward a flatter, exponential-like decay. This can also be found in two-dimensional scattering media [21] but is not present in thin media or in scattering media with a larger ℓ_s. For example, the absence of the two regions can be found in simulations for a scattering medium with ℓ_s = 1.95λ (not presented in this publication) and in two-dimensional media [38]. Phase optimization by modulating the phase in k-space and applying, for each optimization channel, the conjugate phase of the electric field in the x or a random direction at the focus can lead to a significant enhancement inside the scattering medium. An increased enhancement can be obtained by solving an optimization problem for the best direction onto which to project the electric field to obtain the phase for phase conjugation; this requires two optimization variables. The best performance was always achieved when a solution to the "g-opti" problem was sought, with the drawback that N_ch variables have to be optimized for the given objective function. An additional drawback of the presented simulation approach is that one near-field simulation takes around 30 min, and, for a larger volume and a larger numerical aperture, the number of required near-field solutions increases drastically. However, with increasing computing power and a larger number of parallel simulations, this approach can be accelerated. The versatility of the presented method makes it possible to model arbitrary vectorial light beams. Possible applications of the approach are the investigation of different electromagnetic quantities inside turbid media, as shown in this study for the energy density and the Poynting vector. This can be important for quantifying the performance of imaging modalities. Another possible use case is the computation of optimal phase patterns for real turbid media with a measured refractive index distribution to improve imaging modalities [23]. For large turbid media, numerical solutions of Maxwell's equations are often too time-consuming to obtain. Therefore, it is also possible to use our proposed method with plane wave solutions obtained with approximations such as the beam propagation method.
Data Availability Statement: Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
Patient-specific stopping power calibration for proton therapy planning based on single-detector proton radiography
A simple robust optimizer has been developed that can produce patient-specific calibration curves to convert x-ray computed tomography (CT) numbers into relative stopping powers (the HU-RSP calibration curve) for proton therapy treatment planning. The difference between a digitally reconstructed radiograph water-equivalent path length (DRR_WEPL) map through the x-ray CT dataset and a proton radiograph (set as the ground truth) is minimized by optimizing the HU-RSP calibration curve. The function of the optimizer is validated with synthetic datasets that contain no noise and its robustness is shown against CT noise. Application of the procedure is then demonstrated on a plastic and a real tissue phantom, with proton radiographs produced using a single detector. The mean errors between the DRR_WEPL map and the proton radiograph using generic/optimized calibration curves were 1.8/0.4% for a plastic phantom and −2.1/−0.2% for a real tissue phantom. It was then demonstrated that these optimized calibration curves offer a better prediction of the water-equivalent path length at a therapeutic depth. We believe that these promising results suggest that a single proton radiograph could be used to generate a patient-specific calibration curve as part of the current proton treatment planning workflow. Keywords: proton radiography, proton therapy, treatment planning, stoichiometric, calibration

Introduction
Proton therapy has the potential to deliver better dose distributions than radiotherapy with photon/electron beams, but there are a large number of uncertainties that make predicting the beam range in the patient difficult. Currently, these uncertainties are accounted for with increased planning margins, resulting in the irradiation of more tissue than is necessary. One of the largest uncertainties comes from the conversion of the patient anatomical information from one of photon attenuation to one of proton energy loss, requiring x-ray computed tomography (CT) numbers to be converted into relative stopping powers (RSPs). This process is thought to contribute 0.5-1.8% of the total proton beam range uncertainty, depending on whether the uncertainty in the mean ionization energy is considered (Schaffner and Pedroni 1998, Paganetti 2012). Currently most centres use a single function (referred to in the remainder of this document as the HU-RSP curve) for this conversion, although it is known that there is not a one-to-one correspondence between CT numbers and RSPs (Yang et al 2012). The stoichiometric approach first proposed by Schneider et al (1996) is widely accepted as the most accurate method for producing the calibration curve for this conversion. The final step of this procedure involves calculation of the theoretical RSPs of human biological tissues, using composition data for average, healthy adults (Woodard and White 1986, White et al 1987, ICRU 1989). As such, the stoichiometric calibration is not specific to the patient being imaged. Yang et al (2010) showed that neglecting typical patient-to-patient variations in density (4%), calcium content (2%) or hydrogen content (1%) causes inaccuracies in the RSP prediction of 2.2%, 2.0% and 1.3%, respectively. Proton radiography offers a potential solution to this discrepancy as it produces maps in units of water-equivalent path length (WEPL).
Reconstruction of individual proton radiographic projections allows a 3D map of the patient RSP to be generated, in what is referred to as 'proton CT'. Proton radiography is not used routinely, for this or any other purpose, however, largely because the most common approach requires high-speed tracking of the protons before and after the patient and measurement of the residual energy of the protons exiting the patient (Schneider and Pedroni 1995, Pemler et al 1999, Schneider et al 2004, Schulte et al 2004, Shinoda et al 2006, Talamonti et al 2010, Sipala et al 2011). The majority of these prototype designs are bulky, expensive and difficult to incorporate into the clinical environment. Using only a single detector positioned beyond the patient offers a more applicable solution for proton radiography and as such there has been much research into this field recently (Gelover-Reyes et al 2011, Seco and Depauw 2011, Telsemeyer et al 2012, Testa et al 2013, Poludniowski et al 2014). The general concept is to measure a bulk dose. However, while convenient, not tracking protons means the individual paths cannot be reconstructed and the bulk dose becomes mixed with protons that have taken different paths through the patient. Methods therefore have to be introduced to detect and handle this effect, known as 'range mixing' (Bentefour et al 2012). Optimizing the calibration curve with proton radiography offers the potential to combine the mapping of RSPs from radiography with the high spatial resolution of the x-ray CT. In a single previous effort to produce patient-specific calibration curves using proton tracking radiography, Schneider et al (2005) split the calibration curve in the soft tissue region (between −200 and 135 HU) into five parts and randomly varied the curve 20 000 times until the difference between the proton radiograph and a digitally reconstructed radiograph WEPL map (DRR_WEPL, generated from the x-ray CT dataset) was minimized. Only protons that travelled straight through the proton tracking modules were considered for analysis. Their work showed promising results, with an improvement in the mean range prediction from 3.6 to 0.4 mm for a dog patient. However, no further investigation of the technique and its introduction to routine clinical practice has since been reported by this or any other group. In this work we present a method for optimizing the calibration curve using single-detector proton radiography. Specifically, we: (a) devise and implement an optimization scheme that minimizes the difference between the measured proton radiograph (known WEPL map) and the DRR_WEPL map generated from the x-ray CT dataset, by varying the HU-RSP calibration curve; (b) validate the optimizer against a set of synthetic datasets, in which the CT number and RSP are controlled and known; (c) demonstrate the full approach on phantoms, using WEPL maps measured by single-detector time-resolved proton radiography; and (d) confirm that the estimate of the WEPL map at the treatment depth is improved using the optimized calibration curve, by comparing with a WEPL map measured by the method of dose extinction.

Optimization scheme
The structure of the optimization procedure is shown schematically in figure 1. The DRR_WEPL map is created using the Siddon transform implemented in Plastimatch (plastimatch.org), and the length of intersection of each ray trace with every voxel, together with the voxel CT HU value, is stored in a matrix file for future computations.
A single ray corresponds to a single pixel in the DRR_WEPL map, so the density of rays matches the resolution of the DRR_WEPL. The CT HU values are then converted into RSPs using the given HU-RSP calibration curve. For a single ray trace i, the individual intersection lengths l_j multiplied by the RSP of the voxel, summed across all the voxels j the ray passes through, gives the calculated WEPL, w_i^c, of the ray trace:

w_i^c = Σ_j l_j F(HU_j).    (1)

Combining all the ray traces together gives the DRR_WEPL map. In equation (1), F(HU_j) is a function that describes the calibration conversion, and it is typically a single piecewise linear function,

F(HU) = a·HU + b, a′·HU + b′, …, over successive HU intervals.    (2)

The division into linear sections is a choice of the user. In this work all results are compared to a 'generic' calibration curve used for general treatment planning at our institution, formed of three linear portions (−1000 to 40 HU with a gradient of 0.001, 40 to 2990 HU with a gradient of 0.0005, and a line with zero gradient between 2995 and 9960 for titanium). For optimization, this curve was split into 25 linear spline sections, based on the material binning suggested for Monte Carlo simulations by Schneider et al (2000), with an additional point at water. The 26 points were [−1000; −950; −120; −83; −53; −23; 0; 7; 18; 80; 120; 200; 300; 400; 500; 600; 700; 800; 900; 1000; 1100; 1200; 1300; 1400; 1500; 2990] (the titanium correction was excluded from the optimization procedure as this would ordinarily be overridden as part of treatment planning). For the optimization, the parameters a, b, a′, b′, … in equation (2) were varied until the difference between the DRR_WEPL and the proton radiographic WEPL reached a minimum. The cost function Δ describing this difference, which must be minimized, is defined in equation (3) in terms of the pixel-by-pixel differences between w_i^m, the proton radiography (measured) WEPL value of the i-th pixel, and w_i^c, the calculated WEPL value of the i-th pixel. The optimization was performed using Matlab's built-in Nelder-Mead optimization function, 'fminsearch'. In addition to the convenience of a Matlab built-in function, Nelder-Mead optimizations are more robust to local minima and do not require an equation to be provided for the derivative of the cost function (Lagarias et al 1998). Details of the input variables used in the optimization scheme can be found in table 1. When optimizing, 24 spline points (two points were fixed, see below) were allowed to vary in the y-direction only (HU stays fixed but RSP changes). The number of variables, n, was therefore equal to 24. We also imposed other restrictions, namely that: (i) the curve must be monotonic; and (ii) the curve must pass through the points of air (HU = −1000, RSP = 0.001) and water (HU = 0, RSP = 1); these constraints were enforced by introducing a high cost penalty in the optimization if they were not met. For the synthetic datasets, for which the desired solution was known, repeats ran on a while loop until the solution converged. For the real measured datasets we ran two repeats, each of 2400 iterations (100 × n), using the output of the first as the input for the second. This allows the optimizer to form another simplex, which has been shown to ensure the Nelder-Mead algorithm can produce a locally optimal solution (Simon 2011).
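The optimization loop described above can be mirrored in a few lines. The sketch below is a Python approximation using SciPy's Nelder-Mead in place of Matlab's fminsearch; it assumes the ray-tracing outputs (intersection lengths and HU values per ray) are precomputed, uses a summed squared difference as the cost (an assumption; the paper's exact form is given by equation (3)), and enforces the fixed air and water points through penalties rather than removing them from the variable set. All identifiers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Fixed HU spline points from the text; the RSP values at these points are optimized.
HU_KNOTS = np.array([-1000, -950, -120, -83, -53, -23, 0, 7, 18, 80, 120, 200,
                     300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300,
                     1400, 1500, 2990], dtype=float)
AIR, WATER = 0, 6            # indices of the fixed points (HU = -1000 and HU = 0)

def drr_wepl(rsp_at_knots, lengths, hu):
    """w_i^c = sum_j l_ij * F(HU_j) for every ray i (equation (1)).

    lengths, hu: (n_rays, n_voxels) intersection lengths and CT numbers
    from the stored ray-tracing matrix (assumed precomputed, e.g. with Siddon).
    """
    rsp = np.interp(hu, HU_KNOTS, rsp_at_knots)   # piecewise-linear F(HU)
    return (lengths * rsp).sum(axis=1)

def cost(rsp_at_knots, lengths, hu, wepl_measured):
    penalty = 0.0
    if np.any(np.diff(rsp_at_knots) < 0):                 # curve must be monotonic
        penalty += 1e6
    penalty += 1e6 * abs(rsp_at_knots[AIR] - 0.001)       # must pass through air
    penalty += 1e6 * abs(rsp_at_knots[WATER] - 1.0)       # must pass through water
    diff = wepl_measured - drr_wepl(rsp_at_knots, lengths, hu)
    return np.sum(diff**2) + penalty                      # assumed cost form

def optimise_curve(rsp_start, lengths, hu, wepl_measured, n_iter=2400):
    """Two Nelder-Mead restarts, mirroring the 2 x 2400-iteration scheme."""
    res = np.asarray(rsp_start, dtype=float)
    for _ in range(2):
        res = minimize(cost, res, args=(lengths, hu, wepl_measured),
                       method="Nelder-Mead",
                       options={"maxiter": n_iter}).x
    return res
```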
Optimizer validation against simulated datasets
To validate the function of the optimizer we worked with synthetic data. Real CT datasets contain noise and real proton radiographs are subject to multiple Coulomb scattering (MCS), so it would otherwise be difficult to discern whether the optimizer had reduced the errors to the theoretical minimum. Using literature tissue compositions from Woodard and White (1986), White et al (1987) and ICRU (1989), theoretical CT numbers were calculated using the parameterization of our scanner and the standard procedure described by Yohannes et al (2012), while the RSPs were computed using the approximation of the Bethe-Bloch formula given by Schneider et al (1996). Simulated phantoms were constructed and perfect, known WEPL maps were generated based on the geometry. DRR_WEPL maps were created with no divergence in the ray tracing to isolate the performance of the optimizer. The calibration curves were optimized against these known WEPL maps, for the set of synthetic datasets shown in figure 2.

2.2.1. Single materials, multiple materials and heterogeneous phantom. The simplest task for the optimizer is to alter the calibration curve at a single point only, which is the case for homogeneous objects composed of one material. To test this, 30 individual spherical phantoms were simulated, each with a radius of 15 cm and composed of a single homogeneous material (tissues from table 2 of Yohannes et al (2012)). An example for the breast tissue is shown in figure 2(a). To test the ability of the optimizer to simultaneously reduce the error for multiple materials at once, a synthetic phantom was created (figure 2(b)) of 5 × 5 × 5 cm³ blocks of 28 different materials across the whole HU range (the same tissues as for the homogeneous spheres, excluding 'blood' and 'cell nucleus'). To verify that the optimizer works through heterogeneous materials, a synthetic phantom was composed of two of the multiple-material phantoms (figure 2(b)) in succession, with the rows of the second phantom offset. In both the multiple-material and heterogeneous phantoms the voxels were 5 × 5 × 5 mm³ in size, so each material block contained 10 × 10 × 10 voxels.

Anthropomorphic phantom and real patient CT data. An anthropomorphic pelvic phantom (custom made by The Phantom Laboratory, Greenwich, NY) was CT scanned (figures 2(c), (e) and (f)). To avoid discrepancies due to CT artifacts and to allow a known WEPL map to be constructed, the CT numbers of each organ were overridden to a single value and assigned a single RSP. Although anterior proton fields are currently not used clinically for prostate treatments because of the potential dose to the rectum, better proton range prediction could allow their use, which would offer a significant dose sparing advantage over photon treatments (Tang et al 2011). As such, optimization was applied over an anterior field. As a final test, the CT numbers of each organ in a lung patient were overridden to a single value (figure 2(d)). For real patients we only need to know how the curve should be optimized for the field to be treated, so for speed the optimization was performed on a smaller cropped region, as shown in the inset of figure 2(d).

Impact of CT noise
In all previous validation tests, optimization involved minimizing the difference between two perfect images: (i) a DRR_WEPL formed through a CT with no noise; and (ii) a simulated known WEPL map. In real patient cases the CT will contain noise, which is a stochastic phenomenon usually assumed to have a Gaussian distribution (Chvetsov and Paige 2010).
Therefore, to investigate the impact on the optimization, the CT numbers of each organ in the pelvic phantom (figures 2(c), (e) and (f)) were overridden to a Gaussian distribution of values. Optimization was performed on datasets in which a range of noise levels (standard deviations of 1, 2, 3, 5 and 10%) was applied to the scaled CT numbers (+1000 HU compared to usual values) of each organ.

Proton radiographic and dose extinction measurements
All radiographic measurements were made in the proton gantry at Massachusetts General Hospital, using a spread-out Bragg peak with a range of 20 cm and a modulation width of 18 cm (allowing us to capture all WEPLs within the range 2-20 cm). Proton radiographs were acquired using the time-resolved proton radiographic method, the full details of which can be found elsewhere (Lu 2008, Gottschalk et al 2011, Testa et al 2013). In summary, rotation of the range modulator (RM) wheel, at a typical well-defined 600 rpm, delivers a series of Bragg peaks spread out in depth. Using the RM wheel essentially as a precise clock, at each depth there is a unique time-varying dose pattern which can be converted to a water-equivalent depth based on a calibration in water. Our calibration was based on the root-mean-square (RMS) deviation of each modulator cycle (Gottschalk et al 2011) and the detector used was an array of 249 semiconductor diodes arranged on an octagonal matrix (see figure 5(b)), with a diagonal pitch of 7.07 mm and a linear pitch of 10 mm separating diodes on the same row. The data acquisition module has a sampling time of 2 ms, which is sufficiently small to capture the 100 ms cycle of the RM wheel. Optimization was conducted as detailed in section 2.1, with DRR_WEPL maps replicating the geometry of the proton radiographic measurements. In our gantry, protons are assumed to originate from a single point 227 cm upstream of the isocentre, so the beams are divergent in the ray tracing. The DRR_WEPL map was produced at the same location as our diode array in the measurements, which was placed directly below the phantom (with no air gap). The setup for the proton radiographic measurements is shown in figure 3(c). To determine whether the optimization procedure results in a more accurate WEPL map at the therapeutic depth, we compared it with the simple, robust method of dose extinction. In this method a controlled dose (10 MU) was delivered and increasing thicknesses of water-equivalent material were placed between the proton beam and the phantom until the dose to all of the detectors was zero. Depth dose curves for each diode were then formed, from which the thickness that stops 50% of our protons, also known as the R80, was computed. These R80 values were subtracted from the beam range (20 cm) to compute the WEPL as measured by dose extinction (DE_WEPL).
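A sketch of how a DE_WEPL value could be extracted from a single diode's extinction curve is given below (Python). The exact analysis used for the measurements may differ; the 50% level follows the description above, the synthetic curve is made-up illustrative data, and all names are assumptions.

```python
import numpy as np

def wepl_from_dose_extinction(thickness_mm, dose, beam_range_mm=200.0, level=0.5):
    """Estimate DE_WEPL for one diode from its dose-extinction curve.

    thickness_mm: added water-equivalent thicknesses upstream (mm), increasing.
    dose:         integrated dose recorded by the diode at each thickness.
    The extinction thickness is taken where the normalized dose falls to
    'level'; subtracting it from the beam range gives the WEPL estimate.
    """
    t = np.asarray(thickness_mm, dtype=float)
    d = np.asarray(dose, dtype=float) / np.max(dose)
    i = np.argmax(d <= level)                  # first point below the chosen level
    # linear interpolation across the falling edge between points i-1 and i
    t_cross = np.interp(level, [d[i], d[i - 1]], [t[i], t[i - 1]])
    return beam_range_mm - t_cross

# Illustrative use: a dose plateau followed by a sharp fall-off near 75 mm.
t = np.arange(0, 120, 5.0)                     # mm of solid water added
d = 1.0 / (1.0 + np.exp((t - 75.0) / 3.0))     # synthetic extinction curve
print(f"DE_WEPL ~ {wepl_from_dose_extinction(t, d):.1f} mm")
```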
Plastic and real tissue phantoms
To test the optimization procedure we imaged the phantoms shown in figure 3 (described in the following paragraphs). Both are composed of two halves. The reason for this is that radiographs are made through the entire patient, but the desired quantity for treatment is the WEPL to a therapeutic depth. Assuming the therapeutic depth is the distal edge of the top half of the phantom, the procedure consists of the following three steps: (i) acquire a time-resolved proton radiograph through the entire phantom; (ii) optimize the calibration curve by minimizing the difference between the DRR_WEPL and the proton radiographic WEPL; and (iii) predict the WEPL to the distal edge of the top half using the optimized calibration curve. To validate that the optimization improves the WEPL estimate, we split apart the phantoms and conducted dose extinction measurements (section 2.4) on the top halves only. In the top half of the plastic phantom (figure 3(a)), there are eight tightly packed inserts from the Gammex 467 phantom (Gammex Inc., Middleton, WI), arranged in a 12 cm PMMA box (1 cm thick walls) with a central divider. Water was used to fill in the gaps, up to the insert height (7 cm). The insert materials ranged in RSP from 0.94 (Adipose) to 1.60 (Cortical bone), which together with the thickness of the PMMA box gave a range of WEPLs from 7.6 to 12.2 cm. The bottom half of the plastic phantom was a piece of homogeneous solid water, equivalent to 5.12 cm of water. For the real tissue phantom, the top half (the same 12 cm PMMA box as for the plastic phantom) was filled with a single piece of boneless beef chuck (fat left on), a non-smoothed pork bone and a strip of veal liver along the base (figure 3(b)). The bottom half was composed of a simple commercial lunchbox filled with another single piece of boneless beef chuck and another non-smoothed pork bone. The bones in the top and bottom halves were oriented 90° to each other, with some overlap. Given that the real tissue was not frozen or fixed during the experiment, CT scans were acquired approximately 1 h before and 30 min after the proton radiographic measurements to allow for quantification of any changes in tissue shape. The latter volume was warped to the former volume using a b-spline deformable registration (with the software Plastimatch) and tissue changes were computed along each direction to assess the stability of the tissue during the experiment.

Handling range mixing
The inability to reconstruct individual proton tracks is one of the major obstacles that needs to be addressed for the routine use of single-detector proton radiography. Since protons undergo MCS, they take a curved trajectory through the body and thus there can be multiple paths to a given point. If all of the most probable paths pass through the same amount of water-equivalent material, then all the protons at the given point will have a well-defined energy distribution (assuming the protons entering are mono-energetic) and it is possible to define the WEPL. If, however, protons can get to the point of interest via two (or more) equally probable paths with different energy losses, then the energy distribution is not well defined and the final signal becomes mixed. Since single-detector proton radiography does not allow us to reconstruct the proton paths, it is imperative that instances of range mixing are identified, as the WEPL value is then no longer reliable and the particular diode should not be used as part of the optimization. For our purposes we applied a simple method to remove potentially range-mixed diodes, by assuming that range mixing occurs at the boundaries of two materials.
To identify the boundaries, we converted a 1 mm resolution DRR_WEPL map into a map of local standard deviation by assessing the standard deviation of all surrounding pixels (on a 3 × 3 square). Any pixels having a standard deviation of >1% were defined to be close to a boundary and could thus be subject to range mixing. Any diode locations that overlapped with these pixels were then masked out during optimization.

Optimizer validation against simulated datasets
The results for all the synthetic dataset validation tests are detailed in table 2. Tests 1 and 2 were completed on a 32 bit 4 GB CPU architecture, while tests 3, 4 and 5 were completed on a 64 bit 8 GB CPU architecture. The optimizer was able to either completely eradicate or significantly reduce the root-mean-square error (RMSE) across the WEPL map in all tests, with a wide range of optimization times (18 s to 4.1 h) depending on the complexity of the phantom and the CPU architecture. Figure 4 shows an example result for the anthropomorphic pelvic phantom: (a) shows the known total WEPL map; (b) shows the calibration curves in the soft tissue region before and after optimization. Before optimization, the DRR_WEPL difference map and histogram of differences compared to the known WEPL map are shown in (c) and (d) respectively. The same results using the optimized calibration curve are shown in (e) and (f).

Impact of CT noise
The optimizer was shown to handle noise in the CT number well, with results before and after optimization detailed in table 3.

Optimization against time-resolved proton radiographs
Example proton radiographs of the real tissue phantom, acquired with the time-resolved technique, are shown in figure 5: a 3D map is shown in (a), a 2D map (showing the diode locations) is shown in (b), and the 2D map after masking of the pixels potentially subject to range mixing is shown in (c). As explained in the methods, optimization was only conducted on the unmasked diodes. The results following 120 and 125 min of optimization for the plastic and real tissue phantoms, respectively, can be seen in figure 6. The relevant statistics before and after optimization are detailed in table 4. From the registration it was found that the tissue remained quite static during the course of the 3 h experiment. With reference to the directions in figure 3, the mean/max deformations were: −0.53/−1.48 mm in the LR direction; −0.46/−1.99 mm in the AP direction; and 0.07/−1.51 mm in the SI direction.

Estimation of the WEPL at the therapeutic depth
To validate that the optimized calibration curve offers an improvement in the WEPL estimate at a therapeutic depth, independent measurements of the top half of each phantom were made using dose extinction. Integrated doses were recorded by the imager behind the top halves of the phantom, for a number of thicknesses of solid water on top. A comparison of the DRR_WEPL maps at the therapeutic depth (constructed using the generic and optimized calibration curves) with the DE_WEPL maps is shown in figure 7, with the relevant statistics detailed in table 4.

Validation against synthetic datasets
Synthetic datasets were used to validate the optimizer. Based on the assigned RSPs for the HU values in the synthetic dataset, perfect WEPL maps could be created against which to optimize. In all cases the error defining the cost function was either completely eradicated or significantly reduced.
The only cases where the optimizer failed to completely minimize the error were the multiple-material and heterogeneous phantoms. In these phantoms there were 28 different materials, but the optimized calibration curve was only split into 25 linear sections. As such, it would have been very unlikely for the optimizer to completely eradicate the errors. The fact that there was a significant reduction demonstrates the optimizer's robustness to potentially conflicting requirements. The results also showed that the complexity of the dataset does not impact the results, with complete eradication of the error for the anthropomorphic and patient geometries. The perfect datasets were critical in constructing the optimizer, but it was also important to validate that the optimizer was robust in more realistic cases. It was found to function as expected with varying CT noise levels, with an almost identical final error following optimization. As expected, there was an intrinsic spread in the errors that the optimizer could not remove due to the noise.

Optimization against time-resolved proton radiographs
The results in figure 6 demonstrate that the procedure works when optimizing against real proton radiographs. For both the plastic and real tissue phantoms all computed statistics were improved. Following optimization on the plastic phantom, the RMSE difference was reduced from 3.4 to 2.6% and the mean error was reduced from 1.8 to 0.4%. For the real tissue phantom the RMSE was reduced from 3.6 to 2.9% and the mean error was reduced from −2.1 to −0.2%. As expected, the improvements are more modest than for the synthetic datasets that had a single solution, but they are sufficiently large to suggest the technique has clinical value. For the real tissue phantom the changes in tissue shape were small during the course of the experiment. Maximum changes of −1.48/−1.99/−1.51 mm in the LR/AP/SI directions were found between the pre- and post-radiographic CT images, which were acquired 4 h apart. The time difference between the pre-radiographic CT image, upon which the optimization was performed, and the proton radiographic measurement was around 1 h. Assuming a constant rate of shape change, differences in the real tissue between the CT image and the proton radiograph would be less than 0.5 mm in all directions. Given that the pitch between adjacent diodes was never less than 7 mm, we assume that such changes did not impact our results.

Estimation of the WEPL at a therapeutic depth
Splitting apart the phantoms, dose extinction measurements allowed for the generation of WEPL maps at therapeutic depths. Comparing the DE_WEPL maps with DRR_WEPL maps at the same depth, it was clear that for both phantoms the optimized curve matched the DE_WEPL better than the generic curve. The RMSEs for the plastic phantom were 3.4/2.5% for the generic/optimized calibration curves. The same statistics for the real tissue phantom were 2.0/1.4%. These improvements are similar to those of the previous section, suggesting that optimization of the calibration curve using proton radiography across the whole patient allows a more accurate estimate of the WEPL at a therapeutic depth.

Technique limitations
The technique proposed here relies on the ability to produce an accurate proton radiographic image. Optimizing against an inaccurate proton radiographic image would result in an inaccurate calibration curve being produced and an inaccurate estimate of the RSPs in the patient.
We are confident, however, that the single-detector time-resolved approach we utilized in this study was able to produce accurate WEPL values, as in a previous study the same equipment was able to achieve millimeter accuracy in a pure water phantom (Testa et al 2013). Another source of uncertainty is the applicability of straight-line DRR ray tracing to proton paths. It is well known that protons do not take straight-line paths due to the effect of MCS, and thus a straight-line approximation will underestimate the WEPL. One potential solution would be to use the optimized calibration curve as input for a Monte Carlo simulation (by binning into materials and assigning compositions and densities) so that the effects of MCS are accounted for. The optimized calibration curve could then be fine-tuned using the results of the simulation. We believe this underestimation to be a second-order effect in our results, however. The maximum WEPL was 17.32 cm for the plastic phantom and 13.31 cm for the real tissue phantom. The Highland (1975) formula for the root-mean-square projected scattering angle σ_s (radians) is given by

σ_s = (S_2 / (p β c)) √(X/X_0) [1 + ε log₁₀(X/X_0)],

where S_2 = 14.1 MeV and ε = 1/9 are empirical factors; X is the thickness of the scatterer (cm); X_0 is the radiation length (36.08 cm for water in our case, as we use WEPLs); p is the momentum (MeV/c); and β is the particle velocity as a fraction of the speed of light c. At the energy used (approximately 175 MeV) and at the largest thicknesses of our phantoms, σ_s = 0.0292 rad for the plastic phantom and σ_s = 0.0253 rad for the real tissue phantom. An angular deflection leads to a projected lateral deflection D, described by Leroy and Rancoita (2009) as

D = X σ_s / √3.

In our experiments the average lateral deflections were thus D = 0.29 cm for the plastic phantom and D = 0.19 cm for the real tissue phantom. Given that the closest pitch between adjacent diodes (0.7 cm) was larger than both of these values, we believe that a straight-line approximation was valid. If the protons travel through material with a composition different from water, such as bone, it is possible these deflections will increase. The deflections will also increase if the distance between the patient and the detector is increased. More accurate results would likely be produced by assuming a cubic spline path (Li et al 2006) or a most likely path (Williams 2004); however, this requires the tracking of individual particles (which is not possible with a single detector). Additionally, we are not aware of any software that can currently easily compute the length of intersection of such paths with the CT dataset. This is suggested as an area of future work.
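The quoted scattering angles and lateral deflections can be checked numerically. The short Python snippet below assumes a kinetic energy of exactly 175 MeV (the text says "approximately") and the relation D = Xσ_s/√3 for the projected lateral deflection, and it reproduces the values given above.

```python
import numpy as np

def highland_sigma(wepl_cm, pc_mev, beta, X0=36.08, S2=14.1, eps=1 / 9):
    """RMS projected scattering angle (rad) from the Highland formula.

    pc_mev is the proton momentum times c, in MeV; wepl_cm is the
    water-equivalent thickness; X0 is the radiation length of water.
    """
    x = wepl_cm / X0
    return S2 / (pc_mev * beta) * np.sqrt(x) * (1 + eps * np.log10(x))

# Proton kinematics for ~175 MeV kinetic energy (rest mass 938.27 MeV).
T, m = 175.0, 938.27
E = T + m
pc = np.sqrt(E**2 - m**2)          # momentum times c, in MeV
beta = pc / E

for name, wepl in (("plastic phantom", 17.32), ("real tissue phantom", 13.31)):
    sigma = highland_sigma(wepl, pc, beta)
    D = wepl * sigma / np.sqrt(3)  # projected lateral deflection (cm)
    print(f"{name}: sigma_s = {sigma:.4f} rad, D = {D:.2f} cm")
# Expected output: ~0.0292 rad / 0.29 cm and ~0.0253 rad / 0.19 cm.
```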
In the current implementation the optimization times are probably too long. The time depends on three main factors. (i) The voxel size and number of voxels in the CT dataset determine the number of intersections stored in the ray-tracing matrix file, which affects the time to calculate the DRR_WEPL map; this has to be recomputed for each iteration. (ii) The number of pixels in the two images, between which the difference is being minimized, affects the time to calculate the cost function. (iii) The stopping tolerance of the optimization directly affects how long the optimization takes to converge. The first of these, computation of the DRR_WEPL map using the ray-tracing details, was the major factor in the slow optimization times of our implementation. This could be significantly reduced, however, by producing field-specific calibration curves over much smaller volumes. The proton radiograph would be acquired in the same direction as the intended treatment beam, so only those tissues in the field of view would need to be optimized. It is also possible that an alternative optimization scheme, such as gradient descent, could speed up the minimization. Another limitation of our specific set of measurements was the spatial resolution (pitch between diodes up to 1 cm) of the proton imager. This could be improved by using one of the various promising technologies currently available, such as fluorescent screens coupled to charge-coupled device cameras (Ryu et al 2008, Muraishi et al 2009, Bentefour et al 2013); commercial flat-panel detectors (Telsemeyer et al 2012); or complementary metal oxide semiconductor (CMOS) active pixel sensors (CMOS APS) (Gelover-Reyes et al 2011, Seco and Depauw 2011, Poludniowski et al 2014). Additionally, production of the DRR_WEPL map relies on the accuracy of the CT. As such, removal or reduction of artifacts such as beam hardening or streaking (with improved reconstruction algorithms or dual-energy CT) must be considered before producing the DRR.

Clinical application
The optimization scheme developed in this work is insensitive to the method of production of the proton radiograph. Although we used the time-resolved single-detector proton radiographic method, the optimization scheme could, in theory, be used with proton-tracking radiographic devices. Provided the ray tracing of the DRR can be effectively modeled (i.e. the source-to-isocenter and source-to-imager distances are known), patient-specific curves could be generated. We envisage the proton treatment planning workflow being updated as follows: (i) acquire an x-ray CT for treatment planning; (ii) acquire a proton radiograph in the intended treatment beam direction; (iii) generate a patient-specific HU-RSP calibration curve by optimization; and (iv) create a treatment plan using the patient-specific calibration curve.

Summary and conclusions
In this work we have shown that it is possible to construct a robust optimizer that can produce patient-specific calibration curves using the information from simple single-detector proton radiographs. Following validation with a set of perfect (noise-free) synthetic datasets, the robustness of the optimizer was confirmed using CT datasets containing noise. We then demonstrated the technique with real proton radiographs produced using a simple diode array and the time-resolved radiographic technique. For a phantom composed of real tissues, the optimization procedure was able to reduce the mean error between a DRR_WEPL map and a proton radiograph from −2.1% using a generic calibration curve to −0.2% using an optimized calibration curve. Using this optimized calibration curve, produced using proton radiography through the entire patient, it was demonstrated that a more accurate estimate of the WEPL at a therapeutic depth could also be made. By splitting the phantom in half, the RMSE difference between the DRR_WEPL map and an independent DE_WEPL map decreased from 2.0 to 1.4% using the optimized calibration curve. With further development, we believe that this procedure could be introduced as part of the proton treatment planning workflow. A proton radiograph would be acquired at the time of the planning x-ray CT and calibration curves would be generated on a patient-by-patient basis.
Localised States of Fabry-Perot Type in Graphene Nano-Ribbons
This book collects recent progress in graphene research, from theoretical and experimental perspectives, across a variety of topics such as graphene nanoribbons, graphene quantum dots, and graphene-based resistive switching memory. The authors of each chapter give a unique insight into their specific, intensively researched area of graphene. This book is suitable as a reference for graduate students and researchers with a background in physics, chemistry, and materials.

Introduction
Graphene has been spoken of as a "wonder material" and described as paradigm shifting in the field of condensed matter physics [1]. The exceptional behavior of single-layer graphene is due to its charge carriers being massless, relativistic particles. The anomalous behavior of graphene and its low-energy excitation spectrum imply the emergence of novel electronic characteristics. For example, in graphene-superconductor-graphene junctions specular Andreev reflections occur [1], and in graphene p-n junctions a Veselago lens for electrons has been outlined [2]. It is clear that by incorporating graphene into new and old designs, new physics and applications almost always emerge. Here we investigate Fabry-Perot-like localized states in monolayer and bilayer graphene. As one will no doubt appreciate, there are many overlaps between the analysis of graphene and the studies of electron transport and light propagation. When we examine the ballistic regime we see that the scattering of electrons by potential barriers is also described in terms of transmission, reflection and refraction profiles, in analogy to any wave phenomenon, except that there is no counterpart in normal materials to the exceptional quality at which these occur: electrons are capable of tunneling through a potential barrier of height larger than their energy with a probability of one (Klein tunneling). So, normally incident electrons in graphene are perfectly transmitted, in analogy to the Klein paradox of relativistic quantum mechanics. A tunable graphene barrier is described in [3], where a local back-gate and a top-gate controlled the carrier density in the bulk of the graphene sheet. The graphene flake was covered in poly-methyl-methacrylate (PMMA) and the top-gate induced the potential barrier. In this work they describe junction configurations associated with the carrier types (p for holes and n for electrons) and found sharp steps in resistance as the boundaries between n-n-n and n-p-n or p-n-p configurations were crossed. Ballistic transport was examined in the limits of sharp and smooth potential steps. PMMA is a transparent thermoplastic that has also been used to great effect in proving that graphene retains its 2D properties when embedded in a polymer heterostructure [4]. The polymers can be made to be sensitive to a specific stimulus that leads to a change in the conductance of the underlying graphene [4], and it is entirely likely that future graphene-based devices will be hybrids including polymers that can control the carrier charge density. In [5] an experiment was performed to create an n-p-n junction to examine the ballistic regime. Oscillations in the conductance showed up as interferences between the two p-n interfaces, and a Fabry-Perot resonator in graphene was created. When no magnetic field was applied, two consecutive reflections on the p-n interfaces occurred with opposite angles, whereas for a small magnetic field the electronic trajectories bent.
Above about 0.3 Tesla the trajectories bent sufficiently to produce two consecutive reflections with the same incident angle and a π-shift in the phase of the electron. Thus, a half-period shift in the interference fringes was witnessed and evidence of perfect tunneling at normal incidence accrued. Quantum interference effects are one of the most pronounced displays of the power of wave quantum mechanics. As an example, the wave nature of light is usually clearly demonstrated with Fabry-Perot interferometers. Similar interferometers may be used in quantum mechanics to demonstrate the wave nature of electrons and other quantum mechanical particles. For electrons they were first demonstrated in graphene hetero-junctions formed by the application of a top-gate voltage [6]. These were simple devices consisting mainly of the resonant cavity, with transport channels attached. These devices exhibited quantum interference in the regular resistance oscillations that arose when the gate voltage changed. Within the conventional Fabry-Perot model [7,8], the resistance peaks correspond to minima in the overall transmission coefficient. The peak separation can be approximated by the condition 2k_F L = 2πn: the charge accumulates a phase shift of 2π after completing a single lap (the round trip 2L) in the resonant cavity, where k_F is the Fermi wave vector of the charges and L is the length of the Fabry-Perot cavity. This is the Fabry-Perot-like resonance condition: the fundamental resonance occurs when half the wavelength of the electron mode fits inside the p-n-p junction representing the Fabry-Perot cavity. The simplest electron cavity for the Fabry-Perot resonator, but still a very effective one, may be formed by two parallel metallic wire-like contacts deposited on graphene [9]. In such a simple two-terminal graphene structure there are clearly resolved Fabry-Perot oscillations, which have been observed in sub-100 nm devices. With a decrease of the size of the graphene region in these devices, the character of the electron transport changes: the channel-dominated diffusive regime gives way to the contact-dominated ballistic regime. This normally indicates that when the size of the cavity is about 100 nm or less the Fabry-Perot interference may be clearly resolved. A similar Fabry-Perot interferometer for Dirac electrons has recently been developed from carbon nanotubes [10]. Earlier work on the resistance oscillations as a function of the applied gate voltage led to their observation in p-n-p junctions [6,11]. This was first reported by Young and Kim [6], but the more pronounced observations of the Fabry-Perot oscillations were made in Ref. [11], where high-quality n-p-n junctions with suspended top gates were fabricated. These indeed display clear Fabry-Perot resistance oscillations within a small cavity formed by the p-n interfaces. The oscillations arise due to interference in the ballistic electron transport through the p-n-p junction, i.e. from Fabry-Perot interference of the electron and hole wave functions confined between the two p-n interfaces. Thus, the holes or electrons in the top-gated region are multiply reflected between the two interfaces, interfering to give rise to standing waves, similar to those observed in carbon nanotubes [12] or standard graphene devices [13].
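As a simple illustration of the resonance condition 2k_F L = 2πn, the Python snippet below lists the resonant Fermi wave vectors and the corresponding carrier densities for an assumed 100 nm cavity, using the standard graphene relation n_s = k_F²/π (which includes the fourfold spin and valley degeneracy). The cavity length is a nominal value for illustration, not taken from a specific device.

```python
import numpy as np

# Fabry-Perot condition 2 * k_F * L = 2*pi*n  ->  resonant k_F = n*pi/L.
L = 100e-9                        # cavity length (m), ~100 nm as discussed above
modes = np.arange(1, 6)
k_F = modes * np.pi / L           # resonant Fermi wave vectors (1/m)
n_s = k_F**2 / np.pi              # corresponding carrier densities (1/m^2)
for n, k, ns in zip(modes, k_F, n_s):
    print(f"n = {n}: k_F = {k:.2e} 1/m, n_s = {ns / 1e4:.2e} cm^-2")
```

The spacing between successive resonant densities grows with k_F (roughly 2k_F/L), which is why the resistance oscillations appear quasi-periodic as a function of gate voltage.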
Modulations in the charge density distribution change the Fermi wavelength of the charge carriers, which in turn alters the interference patterns and gives rise to the resistance oscillations. In the present work we consider the simplest model of the Fabry-Perot interferometer, which is in fact a p-n-p or n-p-n junction formed by a one-dimensional potential. We develop an exact quasi-classical theory of such a system and study the associated Fabry-Perot interference in the electron or hole transport. Although graphene is commonly referred to as the "carbon flatland", there has been a feeling of discontent amongst some that the Mermin-Wagner theorem appeared to be contradicted. However, recent work shows that the buckling of the lattice can give rise to a stable 3D structure that is consistent with this theorem [14]. In what follows we present the general methodology for the analysis of graphene nanoribbons using semiclassical techniques that maintain the assumption of a flat lattice. It should be mentioned that the effects found with these techniques are powerful in aiding our understanding of potential barriers and are an essential tool for the developing area of graphene barrier engineering. The natural ability of graphene to accommodate defects or charged impurities is important for applications. The p-n interfaces described above may be capable of guiding plasmons and of creating electrical analogues of optical devices with controllable indices of refraction [15]. In Part I of this chapter we investigate the use of powerful semiclassical methods to analyze the relativistic electron and hole tunneling in graphene through a smooth potential barrier. We make comparison to the rectangular barrier. In both cases the barrier is generated by an electrostatic potential in the ballistic regime. The transfer matrix method is employed in complement to the adiabatic WKB approximation for the Dirac system. Crucial to this method of approximation for the smooth-barrier problem, when the electron incidence is skew, is careful consideration of four turning points. These are denoted by x i , i = 1, 2, 3, 4 and lie in the domain of the barrier. The incident electron energy in this scattering problem belongs to the middle part of the segment [0, U 0 ], where U 0 is the height of the barrier, and essentially the incidence parameter p y should be large enough to exclude normal and quasi-normal incidence. Therefore, between the first two turning points, x 1 and x 2 , and also between the next two, x 3 and x 4 , there is no coalescence. Two strips of total internal reflection occur, in which the solutions grow and decay exponentially. Away from the close vicinity of the asymptotically small boundary layers at the x i , there exist five domains with WKB-type solutions (see Fig. 2): three with oscillatory behavior and two with asymptotics that grow and decay exponentially. Combining these five solutions is done by applying matched asymptotics techniques (see [16]) to the so-called effective Schrödinger equation that is equivalent to the Dirac system (see [17], [18]). This matching procedure generates the WKB formulas that give the elements of the transfer matrix. This transfer matrix defines all the transmission and reflection coefficients in the scattering problems discussed here. When the energies are positive, around one-half of the potential height 0.5U 0 , incident, reflected and transmitted electronic states occur outside the barrier. 
Underneath the barrier a hole state exists (an n-p-n junction). The symmetric nature of the problem means that we see incident, reflected and transmitted hole states outside the barrier when the energies are negative and close to one-half of the potential height (the case U 0 < 0); underneath the barrier there are then electronic states (a p-n-p junction). Incorporated into the semiclassical method is the assumption that all four turning points are spatially separated. Consequently, the transverse component of the momentum p y is finite and there is a finite width to the total internal reflection zone. This results in a 1-D Fabry-Perot resonator, which is of great physical importance and may aid understanding in creating plasmonic devices that operate in the range of terahertz to infrared frequencies [19]. Quantum confinement effects are crucial at the nano-scale, and plasmon waves can potentially be squeezed into much smaller volumes than in noble metals. The basic description of propagating plasma modes is essentially the same in the 2-D electron gas as in graphene, with the notable exception of the linear electronic dispersion and zero band-gap in graphene [20]. Thus, we anticipate that the methods applied here are also applicable to systems of 2-D electron gases, such as semiconductor superlattices. Due to the broad absorption range of graphene, nanoribbons as described here, or graphene islands of various geometries, may also be incorporated in opto-electronic structures. In our analysis, if p y → 0 then we have quasi-normal incidence, whereby the first two turning points, x 1 and x 2 , and the second two, x 3 and x 4 , coalesce. In the case of normal incidence, there is always total transmission through the barrier. The vital discovery in this form of analysis is the existence of modes that are localized in the bulk of the barrier. These modes decay exponentially away from the barrier. They form two discrete sets of energy eigenlevels, complex and real, determined by the Bohr-Sommerfeld quantization condition above and below the cut-off energy, respectively. It is shown that total transmission through the barrier takes place when the energy of an incident electron, which is above the cut-off energy, coincides with the real part of the complex energy eigenlevel of one among the first set of modes localized within the barrier. These facts have been confirmed by numerical simulations of the reflection and transmission coefficients using finite element methods (the COMSOL package). In Part II we examine the high-energy localized eigenstates in graphene monolayers and double layers. One of the most fundamental prerequisites for understanding electronic transport in quantum waveguide resonators is to be able to explain the nature of the conductance oscillations (see [25], [26], [27]). The inelastic scattering length of charge carriers is much larger than the size of modern electronic devices; consequently electronic motion is ballistic and resistance occurs due to scattering off geometric obstacles or features (e.g. the shape of a resonator micro- or nano-cavity, or the potential formed by a defect). It is an interesting area of development whereby defects are engineered deliberately into devices to generate a sought-after effect. In graphene, defects such as missing carbon atoms or the addition of adatoms can lead to interesting and novel effects, e.g. magnetism or proximity effects. 
In the ballistic regime, conductance is analyzed by the total transmission coefficient and the Landauer formula for the zero temperature conductance of a structure (see monographs [25], [26], [27], and papers [28], [29], [30], for example). The excitation of localized eigenmodes inside a quantum electronic waveguide has a massive effect on the conductance because these modes can create an internal resonator inside the waveguide. This is a very good reason to research the role of localised eigenmodes for quantum resonator systems and 2-D electronic transport in quantum waveguides. Excitation of some modes could result in the emergence of stop bands for electronic wave propagation in the dispersion characteristics of the system, whereby propagation through the waveguide is blocked entirely. Other modes will result in total transmission. In this review, the semiclassical analysis of resonator eigenstates that are localized near periodic orbits is developed for a resonator of Fabry-Perot type. These are examined inside graphene monolayer nanoribbons in static magnetic fields and electrostatic potentials. The first results for bilayer graphene are also presented in parallel to this. Graphene has generated a fervor throughout the scientific world and especially in the condensed matter physics community, with its unusual electronic properties in tunneling, charge carrier confinement and the appearance of the integer quantum Hall effect (see [31], [33], [34], [35], [17]). Its low energy excitations are massless chiral Dirac fermion quasi-particles. The Dirac spectrum, which is valid only at low energies when the chemical potential lies exactly at the Dirac point (see [31]), describes the physics of quantum electrodynamics for massless fermions, except that in graphene the Dirac electrons move with a Fermi velocity of v F = 10 6 m/s. This is 300 times smaller than the speed of light. Graphene is a material that is easy to work with; it has a high degree of flexibility and agreeable characteristics for lithography. The unusual electronic properties of graphene and its gapless spectrum provide us with the ideal system for investigation of many new and peculiar charge carrier dynamical effects. It is also conceivable, if its promise is fulfilled, that a new form of carbon economy could emerge based upon exploitation of graphene's novel characteristics. The enhancements in devices are not only found at the nano- and micron-sized levels, though these hold the most potential (e.g. the graphene transistor, metamaterials, etc.), but also in composites [36], electrical storage [37], solar harvesting [38] and many more applications. Following this train of thought, graphene is also a viable alternative to the materials normally used in plasmonics and nanophotonics. It absorbs light over the whole electromagnetic spectrum, including UV, visible and far-infrared wavelengths, and, as we have mentioned, it is capable of confining light and charge carriers into incredibly small volumes. Thus, there is a range of applications where band-gap engineering is not required and graphene nanoribbons can be used directly as opto-electronic devices. In the analysis of graphene one also expects unusual Dirac charge carrier properties in the eigenstates of a Fabry-Perot resonator in a magnetic field. 
For example, two parts of the semiclassical Maslov spectral series with positive and negative energies, for electrons and holes, correspondingly, with two different Hamiltonian dynamics and families of classical trajectories are apparent. Semiclassical analysis can provide insight into the aforementioned physical systems and good quantitative predictions on quantum observables using classical insights. Application of semiclassical analysis in studying the quantum mechanical behavior of electrons has been demonstrated in descriptions of different nano-structures, electronic transport mechanisms in mesoscopic systems and, as another example, the quantum chaotic dynamics of electronic resonators [25], [26], [27], [39], [40], [41], [42] and many others. However, it is important to state that the first semiclassical study on two-dimensional graphene systems only recently appeared in [43], [44], [45]. In [43] a semiclassical approximation for the Green's function in graphene monolayer and bilayers was discussed. In [44] and [45] bound states in inhomogeneous magnetic fields in graphene and graphene-based Andreev billiards were studied by semiclassical analysis, accordingly. This was carried out with one-dimensional WKB quantization due to total separation of variables. In the second half of this review, the semiclassical Maslov spectral series of the proliferation of high-energy eigenstates (see [48], [49] [50]) of the electrons and holes for a resonator formed inside graphene mono and bilayer nanoribbons with zigzag boundary conditions, is specified. These states are localized around a stable periodic orbit (PO) under the influence of a homogeneous magnetic field and electrostatic potential. The boundaries of the nanoribbon act with perfect reflection to confine the periodic orbit to isolation. This system is a quantum electron-hole Fabry-Perot resonator of a type analogous to the "bouncing ball" high-frequency optical resonators found in studies of electromagnetics and acoustics. The asymptotic analysis of the high-energy localized eigenstates presented here is similar to ones used for optical resonators (see [50], [51], [54], and [55]). In this review, the semiclassical methods presented focus upon the stability of POs and electron and hole eigenstates that depend on the applied magnetic field. We construct a solitary localized asymptotic solution to the Dirac system in the neighborhood of a classical trajectory called an electronic Gaussian beam (Gaussian wave package). In PO theory there are similarities between the asymptotic techniques used here and those used in the semiclassical analysis (see, for example, [27] (chapters 7, 8) or [39] and cited references). Further, the stability of a continuous family of closed trajectories in asymptotic proximity to a PO, confined between two reflecting interfaces, is studied. The classical theory of linear Hamiltonian systems with periodic coefficients gives the basis to study the stability using monodromy matrix analysis. The asymptotic eigenfunctions for electrons and holes are constructed only for the stable PO as a superposition of two Gaussian beams propagating in opposite directions between two reflecting points of the periodic orbit. A generalized Bohr-Sommerfeld quantization condition gives the asymptotic energy spectral series (see [46] and [47], [48], [49], [50], [51] and [55]). 
This work highlights that the single quantization condition derived herein for the quantum electron-hole graphene resonator fully agrees with the asymptotic quantization formula of a quite general type spectral problem in [51]. It is worth drawing attention to the fact that in a semiclassical approximation for the Green's function in a graphene monolayer and bilayer, the relationship between the semiclassical phase and the adiabatic Berry phase was discussed in the paper [43]. Our asymptotic solutions, for rays and Gaussian beams, possess the adiabatic phase introduced by Berry [64]. The importance of Berry-like and non-Berry-like phases in the WKB asymptotic theory of coupled differential equations and their roles in semiclassical quantization were discussed in [57], [58], [59]. Our results are a special class of POs that occur for graphene zigzag nanoribbons in a homogeneous magnetic field and piece-wise electrostatic potential that is embedded inside the nanoribbon. They are found by giving, to the leading order, a description of the general form of asymptotic solution of Gaussian beams in a graphene monolayer or bilayer. The key point in the asymptotic analysis is the quantization of the continuous one-parameter (energy) family of POs. For one subclass of lens-shaped POs, these localized eigenstates were evaluated against eigenvalues and eigenfunctions that have been computed by the finite element method using COMSOL. For a selectively chosen range of energy eigenvalues and eigenfunctions, agreement between the numerical results and those computed semiclassically is very good. In the graphene Fabry-Perot resonator, the electrostatic potential does not play a role of confinement, it behaves more like an inhomogeneity, but in some cases an electrostatic potential helps to make a family of POs stable. In this chapter, we describe the tunneling through smooth potential barriers and the asymptotic solutions for a Dirac system in a classically allowed domain. This is done using WKB methods. We then go on to investigate the classically disallowed domain and tunneling through the smooth barrier. The asymptotic WKB solutions are presented for scattering and for quasi-bound states localized within the smooth barrier. The second part of the chapter, goes into detail about high energy localized eigenstates in monolayer and bilayer graphene. The graphene resonator is described when it is subjected to a magnetic field and ray asymptotic solutions outlined. Finally, the construction of periodic orbits, stability analyses and quantization conditions are thoroughly examined. A numerical analysis is given that compares the analytical techniques and results with those found using finite element methods. The rectangular barrier In a conventional metal or semiconductor there are no propagating states connecting regions either side of the barrier (regions I and I I I). To get through the barrier an electron has to tunnel through the classically forbidden region and the tunneling amplitude depreciates exponentially as a function of the barrier width. Thus, transport between I and I I I is strongly suppressed. However, in each of the three regions of a barrier in a graphene system, the valence and conduction band touches, meaning that there are propagating states connecting I and I I I at all energies. There is no such suppression of the transport at energies incident and below the barrier. At normal incidence transmission is always perfect. 
Potential barriers for single quasiparticle tunneling in graphene can be introduced by designing a suitable underlying gate voltage or even as a result of local uniaxial strain [68]. In the following we denote the angle of incidence with respect to the barrier by θ 1 . We are interested in the dependence of the tunneling transmission on this incidence angle. To illustrate quantum mechanical tunneling one must extract the transmission coefficient from the solution to the graphene barrier problem. The transmission coefficient is the ratio of the flux of particles that penetrate the potential barrier to the flux of particles incident on the barrier. We treat the rectangular barrier as described in detail in the reviews by Castro Neto et al [60] and Pereira Jr et al [69]. The problem is described by the 2D Dirac system (see, for example, [60]), where x = (x, y) and u, v are the components of the spinor wave function describing electron localization on the sites of sublattice A or B of the honeycomb graphene structure, v F is the Fermi velocity, the symbol (, ) denotes the scalar product, ħ is the Planck constant, and σ = (σ 1 , σ 2 ) is the vector of Pauli matrices. If we assume that the potential representing the barrier does not depend on y, i.e. U = U(x), then we can look for a solution in the form ψ(x, y) = ψ(x) e i p y y/ħ , where p y is the conserved transverse momentum component, which sets the angle of incidence. We then obtain a Dirac system of two ODEs for the components of ψ(x). The particle incident with energy E < U 0 from the left of the barrier has wavevectors k 1 , q, and k 2 to the left of, inside, and to the right of the barrier, respectively. These regions are denoted I, I I and I I I, respectively. For the symmetric barrier k 1 = k 2 = k. Region I I lies between −L and L, where ±L define the edges of the barrier. The wave functions in each of the three regions are superpositions of right- and left-moving spinor plane waves, introduced as in [31]. The coefficients c 1 , c 2 and a 1 , a 2 are related by means of the transfer matrix, c = Ta. The transfer matrix has unique properties, which are demonstrated in Appendix B. In regions I and I I I the angle of incidence in momentum space is given by θ 1 = arctan(k y /k x ), and in region I I by θ 2 = arctan(k y /q x ). In regions I-I I I the valence and conduction bands touch. This allows propagating states to connect the regions at all energies, and there is no suppression of transport at energies below the height of the barrier. There is also perfect transmission at normal incidence. The graphene rectangular barrier can be thought of as a medium with a refractive index different from its surroundings. In an optical analogy, the refractive index of the barrier is 1 − U 0 /E [8]. At the interface of the barrier the incident wave splits into transmitted and reflected waves, with the transmitted wave propagating at angle θ 2 through the barrier. The wave inside the barrier is multiply reflected between −L and L. The wave vector component parallel to the barrier, k y = k sin θ 1 , is conserved; the longitudinal wave vector inside the barrier is q x = sqrt((E − U 0 ) 2 /(ħv F ) 2 − k y 2 ), and the longitudinal wave vector outside the barrier is k x = sqrt(E 2 /(ħv F ) 2 − k y 2 ). The wave functions in regions I and I I are matched at x = −L. Likewise, the wave functions in regions I I and I I I are matched at x = L. It is not necessary to match the derivatives, as is done in an analysis based on the Schrödinger equation. One only requires the wave functions to be continuous at the boundary of each region to generate relationships between the coefficients a 1,2 , b 1,2 and c 1,2 . 
We seek solutions satisfying these matching conditions. The elements of the transfer matrix for the rectangular barrier then follow, after making the substitutions α = e iθ 1 + e −iθ 2 and β = e −iθ 2 − e −iθ 1 , with complex conjugate forms denoted by ᾱ = e iθ 2 + e −iθ 1 and β̄ = e iθ 2 − e iθ 1 . If we assume that the incident wave approaches from the left, then a 1 = 1, a 2 = r 1 and c 1 = t 1 , where r 1 is the reflection coefficient and t 1 is the transmission coefficient. If the incident wave approaches from the right then c 1 = r 2 , c 2 = 1 and a 2 = t 2 . We find that t 1 = t 2 = t, with transmission amplitude t = 1/T 22 . The reflection coefficients are determined as r 1 = −T 21 /T 22 and r 2 = T 12 /T 22 . The transmission probability is as usual given by |t 1 | 2 and satisfies |t 1 | 2 + |r 1 | 2 = 1. At normal incidence the carriers in graphene are transmitted completely through the barrier (Klein tunneling). However, the carriers can be reflected by a potential step when the angle of incidence increases and a non-zero momentum component parallel to the barrier ensues. Thus, the transmission of charge carriers through the potential barrier is anisotropic. When a beam of electrons is fired at an angle into the barrier, it splits up into transmitted and reflected beams, with multiple reflections occurring at the edges of the barrier. As is usual in quantum mechanics, the transmission is found by stipulating that the wave functions must be continuous. In the above, this demand for continuity at the extremities of the barrier allowed us to find the coefficients of the wave functions. Thus, using these results and following the work of Castro Neto et al [60], the total transmission as a function of the incident angle is given by T(θ 1 ) = t t*. When the tunneling resonance condition 2Lq x = nπ is met, where n is an integer, T = 1. This means that an integer number of half-wavelengths fits into the length of the potential barrier. This perfect transmission is the manifestation of Klein tunneling, which is unique to relativistic electrons, and it occurs when an incoming electron penetrates a potential barrier of height U 0 far in excess of the electron's rest energy. The transport mechanism in a graphene tunneling structure is unique. The perfect transmission at incidence normal to the barrier is due to pseudo-spin conservation, which forbids backscattering. In order to attain an interference effect between the two interfaces an oblique incidence angle is required, and it is under this prerequisite that multiple interference effects emerge. Thus, the potential barrier is analogous to two interfaces at −L and L, and also to a Fabry-Perot interferometer [5]. The analogy of the graphene rectangular barrier to the Fabry-Perot resonator when θ 1 ≠ 0 extends to the potential barrier operating like an optical cavity. In region I I the incoming wave can interfere with itself and, with constructive interference, resonances occur where T(θ 1 ) = 1 [5]. The potential barriers for single quasiparticle tunneling in graphene are usually created by suitably changing the underlying gate voltage. In the next section we investigate the smooth barrier and expect that there will be similar scattering behavior to that through the rectangular barrier. We seek to explore the similarities and the differences between the two. 
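Before moving on, the following Python sketch makes the angular dependence of the rectangular-barrier transmission concrete. It solves the same problem numerically by imposing continuity of the two-component spinors at x = ±L and extracting t; the values of E, U 0 and L (in units with ħv F = 1) are illustrative assumptions, not parameters from the text. At normal incidence it reproduces T ≈ 1 (Klein tunneling), and T returns to unity whenever 2Lq x is an integer multiple of π.

```python
import numpy as np

# Transmission through a rectangular barrier in monolayer graphene, obtained by
# matching the two-component Dirac spinors at x = -L and x = +L.
# Units with hbar*v_F = 1; E, U0, L below are illustrative values only.
E, U0, L = 0.4, 1.0, 5.0

def transmission(theta1):
    k = abs(E)                            # wavevector magnitude outside the barrier
    ky, kx = k*np.sin(theta1), k*np.cos(theta1)
    s  = np.sign(E)                       # band index outside
    sp = np.sign(E - U0)                  # band index inside (hole-like if E < U0)
    q  = abs(E - U0)
    qx = np.sqrt(complex(q**2 - ky**2))   # may be imaginary (evanescent wave)

    def spinor(px, py, band):             # (1, band*e^{i theta})/sqrt(2)
        p = np.sqrt(px**2 + py**2)
        return np.array([1.0, band*(px + 1j*py)/p]) / np.sqrt(2)

    # unknowns: r (reflection), b1, b2 (inside the barrier), t (transmission)
    A = np.zeros((4, 4), dtype=complex)
    rhs = np.zeros(4, dtype=complex)
    for row, x in ((0, -L), (2, L)):
        phase_out, phase_in = np.exp(1j*kx*x), np.exp(1j*qx*x)
        out_p, out_m = spinor(kx, ky, s), spinor(-kx, ky, s)
        in_p, in_m  = spinor(qx, ky, sp), spinor(-qx, ky, sp)
        for c in range(2):
            if x == -L:    # continuity between regions I and II
                A[row+c] = [-out_m[c]/phase_out, in_p[c]*phase_in, in_m[c]/phase_in, 0]
                rhs[row+c] = out_p[c]*phase_out
            else:          # continuity between regions II and III
                A[row+c] = [0, in_p[c]*phase_in, in_m[c]/phase_in, -out_p[c]*phase_out]
    r, b1, b2, t = np.linalg.solve(A, rhs)
    return abs(t)**2

print("T(0 deg)      =", transmission(0.0))              # Klein tunneling: ~1
print("T(30 deg)     =", transmission(np.deg2rad(30)))
qx0 = abs(E - U0)                                          # q_x at normal incidence
print("2*L*qx / pi   =", 2*L*qx0/np.pi)                    # T = 1 whenever this is an integer
```

Sweeping theta1 over (−π/2, π/2) with this routine reproduces the familiar anisotropic transmission lobes of the rectangular barrier.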
The smooth barrier Consider a scattering problem for the Dirac operator describing an electron-hole in the presence of a scalar potential representing a smooth localized barrier of height U 0 (see Fig. 2; the figure shows a smooth potential barrier of Gaussian shape with p y > 0, the Dirac electron and hole states arising in resonant tunneling, the strip |E| < p y in which the bound states are located between x 2 and x 3 , and the quasibound (metastable) states confined by the two tunneling strips between x 1 , x 2 and x 3 , x 4 ). It is convenient to use a dimensionless form of the system, in which we omit the tilde signs for brevity. In physical dimensions the energy is U 0 E, the potential is U 0 U(x), the y-component of the momentum is p y U 0 /v F , and the dimensionless Planck constant (the small WKB parameter) is given by h = ħv F /(U 0 D), where U 0 is the height of the potential barrier (|U(x)| < 1 for x ∈ R) and D is a characteristic scale of the potential barrier with respect to the x-coordinate. Typical values of U 0 and D are within the ranges 10-100 meV and 100-500 nm. For example, for U 0 = 100 meV and D = 264 nm we have h = 0.025; we also assume that p y > 0. In Fig. 3, six zones (horizontal strips in Fig. 3b, separated by the four diagonal lines E = ±p y and E = ±p y + U 0 ) are shown that illustrate the different scattering regimes for the smooth barrier scattering problem. These six zones are exactly the same as for the rectangular barrier of height U 0 . In zone 1, E < −p y , we have total transmission and exponentially small reflection; the asymptotic solutions are of oscillatory type everywhere. In zone 2, −p y < E < p y (E = ±p y is the cut-off energy), there is no propagation outside the barrier, but there are oscillatory solutions within the barrier. In zone 3, p y < E < U 0 − p y , there are oscillatory solutions everywhere (the zone of Klein tunneling). In zone 4, U 0 − p y < E < U 0 + p y , there is no propagation through the barrier; we have total reflection and exponentially small transmission. In zone 5, E > U 0 + p y , we have total transmission and exponentially small reflection, and the asymptotic solutions are of oscillatory type everywhere. Finally, in zone 6, U 0 − p y < E < p y , there is no propagation, and everywhere the asymptotic solutions are of exponential type, decaying or growing. 
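The size of the small WKB parameter quoted above is easy to verify. The snippet below uses standard constants and the quoted U 0 = 100 meV, D = 264 nm, and reproduces h ≈ 0.025.

```python
# Quick check of the dimensionless WKB parameter h = hbar*v_F / (U0*D)
# quoted in the text (U0 = 100 meV, D = 264 nm should give h ~ 0.025).
hbar = 1.054571817e-34   # J s
vF   = 1.0e6             # m/s, graphene Fermi velocity
eV   = 1.602176634e-19   # J

U0 = 100e-3 * eV         # barrier height, 100 meV
D  = 264e-9              # barrier width scale, 264 nm

h = hbar * vF / (U0 * D)
print(f"h = {h:.4f}")    # ~0.025, matching the value used in the text
```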
Firstly, we study the scattering problem for zone 3 (see Fig.2). In this case, there are 5 domains with different WKB asymptotic solutions: The regions Ω 1 , Ω 3 and Ω 5 , in which (E − U(x)) 2 − p 2 y > 0, will be referred to as classically allowed domains, whereas Ω 2 and Ω 4 , in which (E − U(x)) 2 − p 2 y < 0, are classically disallowed domains. Note that as p y → 0 for fixed value of E, the turning points coalesce. We exclude this possibility in our analysis. It is worth to remark that for fixed p y , when E moves down from zone 3 to zone 2, the turning points x 1 and x 4 disappear (x 1 → −∞, x 4 → +∞) such that inside zone 2 we have only x 2 and x 3 . When we move down from zone 2 to zone 1, the turning points x 2 and x 3 disappear. When E moves up from zone 3 to zone 4, the turning points x 2 and x 3 coalesce and disappear such that inside zone 4 we have only x 1 and x 4 . When we move up from zone 4 to zone 5, the turning points x 1 and x 4 coalesce and disappear. WKB asymptotic solution for Dirac system in classically allowed domain The WKB oscillatory asymptotic solution to the Dirac system in the classically allowed domains is to be sought in the form (see [16]) with real S(x) Substituting this series into the Dirac system, and equating to zero corresponding coefficients of successive degrees of the small parameter h, we obtain a recurrent system of equations which determines the unknown S(x) (classical action) and ψ j (x), namely, where I is the identity matrix and S ′ = p x . The Hamiltonian H has two eigenvalues From now on we will omit the dependence on x of U, S, and quantities derived from them. It turns out to be convenient to use different e 1,2 instead with In this way we will be able to solve problems of electron and hole incidence on the barrier simultaneously. Note that, irrespective of whether E > U or E < U, The classical action S(x) is given by the sign indicating the direction of the wave, with + corresponding to a wave traveling to the right. For electrons and holes one can seek a solution to the Dirac system zero-order problem in the form with unknown amplitude σ (0) . The solvability of the problem (H − EI)ψ 1 = − Rψ 0 requires that the orthogonality condition < e 1 , R(σ (0) e 1 ) >= 0 must hold, written as a scalar product implied with complex conjugation, and from this one obtains the transport equation for σ (0) : It has a solution where a branch of the analytic function √ z is taken that satisfies the condition For higher-order terms, one can seek a solution to where from (15), σ (j) 2 is given by Then, from the orthogonality condition, where c j = const. Below we assume that p x > 0, corresponding to a wave traveling in the positive x-direction. Thus, to the leading order we have This asymptotic approximation is not valid near turning points where S ′ = 0 (see Fig. 1) at x = x j , j = 1, 2, 3, 4 where e iθ = ±i and cos θ = 0, while at x = a, b we have E = U. The WKB asymptotic solution, derived in this section, is valid for the domains Ω i , i = 1, 3, 5. WKB asymptotic solution for Dirac system in classically disallowed domain The WKB asymptotic solution to the Dirac system in the classically disallowed domain is to be sought in the form with S(x) real. As in section 4, we obtain a recurrent system of equations which determines the unknown S(x) and ψ j (x), namely, where S ′ = q x , and the matrix R is as in (14). The Hamiltonian H is not Hermitian. 
It has two eigenvalues and not orthogonal eigenvectors Hl 1,2 = h 1,2 l 1,2 , where Thus, the function S(x) in a classically disallowed domain is given by Again, for the sake of simplicity, we shall use different l 1,2 where For electrons and holes one can seek a solution to the Dirac system zero-order problem in the form with unknown amplitude σ (0) . Solvability of the problem requires that the orthogonality condition must hold The vector l 1 is the eigenvector of H, whereas l * 1 is the eigenvector of H * . From the orthogonality condition one obtains the transport equation for σ (0) It has a solution For higher-order terms, we have (H − h 1 I)ψ j = − Rψ j−1 and one should seek solution in the form where σ (j) 2 is given by Then, from the orthogonality condition, < l * 1 , R(σ (j) Below we assume that q x > 0. Thus, to the leading order in classically disallowed domains we have where This asymptotic approximation is not valid near turning points q x = 0. The WKB asymptotic solution, derived in this section, is valid for the domains Ω i , i = 2, 4. WKB asymptotic solution for scattering through the smooth barrier Consider a problem of scattering through the smooth barrier (see Fig. 2) under the assumption that |E| > |p y | and all four turning points x i , i = 1, 2, 3, 4 are separated. In this case we have again 5 domains Ω i , i = 1, 2, ..., 5 to describe 5 WKB forms of solution to the leading order. In considering a graphene system, if E > 0 we observe incident, reflected and transmitted electronic states at x < a and x > b, whereas under the barrier a < x < b we have a hole state (n-p-n junction, see Fig. 2). To formulate the scattering problem for transfer matrix T, here we present the WKB solutions in the domains 1 and 5 The barrier is represented by the combination of the left and right slopes. The total transfer matrix T, that is d = Ta, is given by with T R and T L the transfer matrices of the right and left slopes (see formulas (137), (143) in Appendix C), respectively, and The entries of the matrix T read (see formulas (121), (134), (144) in Appendix C) where s i = √ 1 − e −2Q i /h , i = 1, 2. They satisfy the classical properties of the transfer matrix and if a 1 = 1, a 2 = r 1 , d 1 = t 1 , d 2 = 0, then If a 1 = 0, a 2 = t 2 , d 1 = r 2 , d 2 = 1, then The transmission coefficient t = 1/T 22 , looks exactly like the formula (131) in [18]. Total transmission takes place only for a symmetric barrier when However, it is worth noting that r 1 (p y ) = r 2 (−p y ). It is clear that if P(E) + hθ = hπ(n + 1 2 ), n = 0, 1, 2, ... , then we have total transmission |t 1 | = 1. WKB asymptotic solution for complex resonant (quasibound) states localized within the smooth barrier Consider a problem of resonant states localized within the smooth barrier (see Fig. 2). In the first case when the energy of the electron-hole is greater than the cut-off energy (E > E c = |p y |), we have five domains Ω i , i = 1, 2, ..., 5 and five WKB forms of solution to the leading order. To determine the correct radiation conditions that are necessary for the localization, we present WKB solutions in the domains 1 and 5 If a 1 = 0, d 2 = 0, then and as a result we obtain Bohr-Sommerfeld quantization condition for complex energy eigen-levels (quasi-discrete) for |p y | < E < U 0 . Solutions to this equation are complex resonances E n = Re(E n ) − iΓ n , where Γ −1 n is the lifetime of the localized resonance. 
What is important is that the real part of these complex positive resonances is decreasing with n, thus showing off the anti-particle hole-like character of the localized modes. For these resonances we have Γ n > 0. From (45), we obtain the important estimate that is the equivalent of the formula (14) in [35]. Namely, w is the transmission probability through the tunneling strip, ∆t is the time interval between the turning points x 2 and x 3 , and P ′ is the first derivative of P with respect to energy. If p y → 0, then Q → 0, and Γ n → +∞, that is opposite to [35] (to be exact, the estimate for Γ n in [35] works only for a linear potential when p y is not small). For the second set of real resonances, when the energy of the electron-hole is smaller than the cut-off energy (E < |p y |), we have 2 turning points x 2 and x 3 . Between them there are oscillatory WKB solutions or and outside decaying solutions, By gluing these WKB solutions together through the two boundary layers near x 2 and x 3 , using the results in sections 5.1, 5.2 and the Appendix C, we eliminateā 1,2 andd 1,2 and obtain the homogeneous system ic 1 +c 2 e i h P = 0, Thus, we derive the Bohr-Sommerfeld quantization condition for real energy eigen-levels (bound states) inside the cut-off energy strip for 0 < E < |p y |. Numerical results Based upon the analytical descriptions in the preceding sections for the smooth barrier, we present the results for the energy eigenvalues and eigenfunctions. These are shown in Fig's. 4-6 and compare favorably with those obtained through finite difference methods, as detailed in [71]. The energy dispersion curves, E n (p y ), are shown for the complex resonant and real bound states for h = 0.1 and potentials of different widths. In Fig. 4(a), the energy levels are illustrated for the potential, U = 1/coshx, with n = 0, 1, ...., 15. For complex resonant states the real parts are shown. It must be emphasized that in zone "3", which is restricted by E < U 0 − p y and E = p y with p y > 0, the complex quasibound states reside. The bound states are located in zone "2", which lies between E = ±p y and E = U 0 − p y . In zone "3" there are nine complex resonances. In Fig. 4(b), the results for a narrower potential of U = 1/cosh2x can be seen (all other parameters being the same as in Fig. 4 (a)). In this case, there are four complex resonances in zone "3" and n = 0, 1, ...., 9. The lifetimes of the local resonances, Γ n , are shown in Fig. 5 (a) and (b) for the same two potentials as described in Fig. 4. The complex quasi-bound states that are witnessed in zone "3" in Fig. 4 are shown in Fig. 5 for Γ n . The thinner potential allows less complex bound states. The bound states have infinite lifetimes. Both types of states are confined within the barrier in the x-direction, while the motion in the y-direction is controlled by the dispersion relations. In Fig. 6 we present the transmission probabilities |t| 2 for the two potentials. There are nine tunneling resonances, i.e. complex quasi-bound states within potential barrier defined as U = 1/coshx that correlate with those shown in Fig. 4 in zone "3". Likewise, for the thinner barrier there are four complex quasi-bound states. These resonance states are a clear indication of the Fabry-Perot multiple interference effects inside the barrier. 
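A minimal numerical sketch of these quantization results is given below for the barrier U(x) = 1/cosh x. It locates the turning points, evaluates the tunnelling integral Q over a classically disallowed strip together with the damping factor exp(−2Q/h), and estimates the real bound states in the cut-off strip 0 < E < p y from the leading-order condition P(E) = πh(n + 1/2). The values of p y , h and the zone-3 energy are illustrative assumptions, and the quantization condition is used without the additional boundary-layer phase of the full theory, so the levels are only rough approximations.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

# Sketch for the smooth barrier U(x) = 1/cosh(x) in dimensionless units (U0 = 1).
# p_y, h and the zone-3 energy E3 below are assumed, illustrative values.
U = lambda x: 1.0 / np.cosh(x)
py, h = 0.4, 0.1

def turning_points(E):
    # outer points x1, x4 solve U = E - p_y (present only in zone 3);
    # inner points x2, x3 solve U = E + p_y
    pts = []
    for level in (E - py, E + py):
        if 0.0 < level < 1.0:
            pts += [brentq(lambda x: U(x) - level, -20, 0),
                    brentq(lambda x: U(x) - level, 0, 20)]
    return sorted(pts)

def action_between(E, a, b):
    # |p_x| integrated between two points (oscillatory or tunnelling action)
    f = lambda x: np.sqrt(abs((E - U(x))**2 - py**2))
    return quad(f, a, b)[0]

# tunnelling strip for an energy in zone 3 (p_y < E < 1 - p_y)
E3 = 0.55
x1, x2, x3, x4 = turning_points(E3)
Q = action_between(E3, x1, x2)            # strip integral Q1 (= Q2 for this symmetric barrier)
print(f"turning points: {x1:.3f} {x2:.3f} {x3:.3f} {x4:.3f}")
print(f"Q = {Q:.4f},  damping exp(-2Q/h) = {np.exp(-2*Q/h):.3e}")

# leading-order Bohr-Sommerfeld estimate of the real bound states (0 < E < p_y)
def P(E):
    x2b, x3b = turning_points(E)          # only the inner pair of turning points exists here
    return action_between(E, x2b, x3b)

for n in range(6):
    target = np.pi * h * (n + 0.5)
    g = lambda E: P(E) - target
    if g(1e-4) * g(py - 1e-4) < 0:        # a root exists inside the cut-off strip
        En = brentq(g, 1e-4, py - 1e-4)
        print(f"n = {n}:  E_n ~ {En:.4f}")
```

The same action integral P(E), continued into the zone-3 energy range and combined with the damping factors above, is what controls the positions and lifetimes of the complex quasibound resonances discussed in the text.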
Graphene resonator in a magnetic field We consider a spectral problem for the Dirac operator describing the electron-hole quantum dynamics in a graphene monolayer without a gap, in the presence of a homogeneous magnetic field A and arbitrary scalar potential U(x) (see [31]) where x = (x 1 , x 2 ), and u, v are the components of the spinor wave function that describes electron localization on the sites of sublattice A or B of a honeycomb graphene structure. Here, e is the electron charge, c is the speed of light, A is magnetic potential in axial A = B/2(−x 2 , x 1 , 0) or Landau gauge A = B(−x 2 , 0, 0) (magnetic field is directed along the x 3 axis), and v F is the Fermi velocity. The symbol <, > means a scalar product, andh is the Planck constant (which is a small parameter (h → 0) in semiclassical analysis). The vector σ = (σ 1 , σ 2 ) with Pauli matrices corresponds to the K Dirac point of the first Brillouin zone (see [31]). The case of the second K ′ Dirac point withσ * = (σ 1 , −σ 2 ) is treated similarly and is not considered here. Figure 7. A periodic orbit inside the graphene nanoribbon resonator with magnetic field and electrostatic potential (electronic trajectory). Magnetic field is directed along the x 3 axis, the electrostatic field is piece-wise linear U(x 2 ) = β|x 2 |. We study the high energy spectral problem, using the semiclassical approximation, for a vertical graphene nanoribbon confined between two flat reflecting interfaces L 1,2 (see Fig.7). It is assumed that the spinor wave function satisfies zigzag boundary conditions on the interfaces L 1,2 : u| L 1 = 0, v| L 2 = 0. It will be discussed later that the electrostatic field U(x 2 ) = β|x 2 | makes the orbit shown in Fig. 7 periodic and stable. In the gener al case, as it was noted earlier for the Schrödinger operator (see [55]), if high-energy localised eigenstates are sought, which decay exponentially away from the resonator axis AB, the separation of variables will not help construct an exact solution due to the difficulty of satisfying the boundary conditions. Ray asymptotic solution The WKB ray asymptotic solution to the Dirac equation is sought through consideration of the eigenvalue problem associated with Hφ = Eφ. The magnetic vector potential A = B/2(−x 2 , x 1 , 0) is given in terms of the axial gauge for a magnetic field. The Hamiltonian of the Dirac system (see equations (2) and (10)) takes the form for monolayer graphene, In contrast, for bilayer graphene the Hamiltonian takes the form, where g ≈ 0.4eV/υ F is the interlayer coupling. We consider the case when bilayer graphene has Bernal stacking as shown in Fig. 6. Bernal-stacked bilayer graphene occurs with half of the carbon atoms in the second layer sitting on top of the empty centers of hexagons in the first layer. An external electric field can tune its bandgap by up to 250meV [32]. This form of structure of bilayer graphene can be experimentally created using chemical vapor deposition [32]. Considering the low energy states of electrons, we can reduce the 4 × 4 matrix describing the bilayer graphene to a form similar to that of monolayer graphene [56]. The only difference from the monolayer form is the squaring of the off-diagonal entries and the inclusion of a band mass for bilayer electrons. It is now convenient to introduce some dimensionless variables. The coordinate system x ⇒ xD, where D is a characteristic scale associated with a change in the potential (and correspondingly U(x) ⇒ U(xD)). 
Then, we write U = U(xD)/E 0 ; where we define the characteristic energy scale as E ⇒ E 0 E. For single layers of graphene the small parameter, h << 1, is h = υ Fh /U 0 D. In double layer graphene it is slightly different; h = αD/ √ 2mU 0 , with the magnetic length, as a function of the applied magnetic field, given to be α = αD/ √ 2mE 0 with α = eB/c. We now write the dimensionless forms of the one and two layer graphene systems as (with the tildes omitted for brevity), and The solution for monolayer graphene is sought in the same form as equation (11). Substituting this series into the Dirac system, and equating to zero the corresponding coefficients of successive degrees of the small parameter h, we obtain a recurrent system of equations which determines the unknown S(x) and ψ j (x). The Hamiltonian H has two eigenvalues. In the domain Ω e = {x : E > U(x)}, the Hamiltonian eigenvalue h 1 = U(x) + p on the level set h 1 = E describes the dynamics of electrons. The corresponding classical trajectories can be obtained from the Hamiltonian systemẋ = H e p ,ṗ = −H e x , x = (x 1 , x 2 ), p = (p 1 , p 2 ), with an equivalent Hamiltonian (see [48]) on the level set H h = 0. The Hamiltonian dynamics with h 1,2 or with H e,h are equivalent (see [48]).Classical action S(x) satisfies the Hamilton-Jacobi equation in the monolayer case to be Likewise, in the case of bilayers, and The Hamiliton-Jacobi equation is satisfied in the two-layers of graphene case by, The requires that the orthogonality condition with complex conjugation < e 1 , R(σ (0) (x)e 1 ) >= 0 must hold, where where, Using the basic elements of the techniques described in [48], from the orthogonality condition one obtains the transport equation for σ (0) (x). The geometrical spreading for an electron or hole with respect to the Hamiltonian system with h 1,2 = U ± v F p has a solution Monolayer : Bilayer : where where we have introduced θ, which is the angle made by the velocity of the particle trajectory with the x 1 axis: Here −θ/2 is the adiabatic phase for monolayer graphene, as introduced by Berry [64]. Chirality results in a Berry phase of θ in bilayer graphene and the confinement of electronic states. Conservation of chirality in monolayer graphene means that the particles cannot backscatter and this leads to normal incidence transmission equal to unity. This is not the case in bilayer graphene and backscattering can occur. Construction of eigenfunctions, periodic orbit stability analysis and quantization conditions Let x 0 = (x 1 (s), x 2 (s)) be a particle (electron or hole) classical trajectory, where s is the arc length measured along the trajectory. Consider the neighborhood of the trajectory in terms of local coordinates s, n, where n is the distance along the vector, normal to the trajectory, such that where e n (s) is the unit vector normal to the trajectory. Introducing ν = n/ √ h = O(1), we seek an asymptotic solution to the Dirac system related to (2) where S 0 (s) and S 1 (s) are chosen similar to [55], [66] as they give a linear approximation for solution to the Hamilton-Jacobi equation (55) (see [55], [66])). The parameter for monolayers a(s) = E − U 0 (s) and for Bernal bilayers, In the following γ i (s), i = 1, 2 are the Cartesian components of e n (s). Following [54], [55], [65], and [66], we apply the asymptotic boundary-layer method to the Dirac system (2). We allow that the width of the boundary layer is determined by |n| = O( √ h) ash → 0. 
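Before quantizing, it can help to see the underlying classical dynamics. The sketch below integrates the classical Hamiltonian system for the electron Hamiltonian h 1 = U(x 2 ) + |p − A| in the Landau gauge with U(x 2 ) = β|x 2 |. Here α = 1 matches the later numerical examples, while β = 0.2, the initial point and the momenta are assumed values chosen so the sketch trajectory stays well inside the classically allowed region; the specular reflections at the nanoribbon boundaries that close the orbit are not included, so this only illustrates the drift motion along the ribbon that makes the lens-shaped periodic orbits possible.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classical electron dynamics for the dimensionless Hamiltonian
#   h1 = beta*|x2| + |p - A|,   Landau gauge A = alpha*(-x2, 0).
# The text states that for alpha > beta the orbit drifts along the x1 axis.
# alpha matches the text's examples; beta and the initial data are assumptions.
alpha, beta = 1.0, 0.2

def rhs(t, y):
    x1, x2, p1, p2 = y
    rho = np.hypot(p1 + alpha*x2, p2)            # |p - A|, equal to E - U along the orbit
    return [(p1 + alpha*x2) / rho,               # dx1/dt = dH/dp1
            p2 / rho,                            # dx2/dt = dH/dp2
            0.0,                                 # dp1/dt = -dH/dx1 = 0
            -beta*np.sign(x2) - alpha*(p1 + alpha*x2)/rho]   # dp2/dt = -dH/dx2

y0  = [0.0, 0.0, 0.3, 0.4]                       # assumed initial point and momenta
sol = solve_ivp(rhs, (0.0, 20.0), y0, max_step=0.01)

energy = beta*np.abs(sol.y[1]) + np.hypot(sol.y[2] + alpha*sol.y[1], sol.y[3])
print("net displacement along x1:", sol.y[0, -1] - sol.y[0, 0])
print("energy conservation, max deviation:", np.ptp(energy))   # should be tiny
```

The bounded oscillation in x 2 combined with the steady displacement along x 1 is the drift motion from A to B referred to below in the construction of the periodic orbits.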
We assume that we deal with a continuous family of POs symmetric with respect to both axes (see Fig. 4). Thus, the trajectory of the PO consists of two symmetric parts between two reflection points A and B. We seek the asymptotic solution of the eigenfunction for electrons and holes localized in the neighborhood of a PO as a combination of two Gaussian beams where we have defined the Berry phase to be, e iθ = γ 2 − iγ 1 , and e 1,2 = (1, e iθ )/ √ 2. Here, each beam is related to the corresponding part of the periodic orbit. Namely, ψ 1 is determined by z = z 1 (s), p = p 1 (s) for 0 < s < s 0 , describing the electrons propagation along the lower part of the orbit from A to B, whereas ψ 2 is determined by z = z 2 (s), p = p 2 (s) for s 0 < s < 2s 0 , for the electrons propagation along the upper part of the orbit from B to A. The complete derivation of the electronic Gaussian beams for monolayer graphene can be found in the work of Zalipaev [66]. Following the methodology developed in the previous work [66], we state the problem in terms of the function z(s) and write the Hamiltonian in its terms, with the Hamiltonian, The above are the same for both mono and bilayer graphene, but with different d(s) (and a(s), as mentioned above), where ρ(s) is the radius of curvature of a trajectory. Thus, (z 1 (s), p 1 (s)) and (z 2 (s), p 2 (s)) define (z(s), p(s)) for 0 < s < 2s 0 . The asymptotic localized solution of Gaussian beam ψ(s, n) is constructed in an asymptotically small neighbourhood of the PO. This solution is to be periodic with respect to s ∈ R with the period 2s 0 , and satisfies the zigzag boundary conditions. The reflection coefficient R is derived in the short-wave approximation, and given by where γ is the angle of incidence, and δ 1 = θ(s 0 + 0) − θ(s 0 − 0). In the bilayer graphene system the reflection coefficient is, The localized solution can be constructed if z(s), p(s) is a complex (in the complex phase space C 2 z,p ) quasi-periodic Floquet solution of Hamiltonian system (65) with periodic coefficients (see [50], [54]). Namely, for the monodromy 2 × 2 matrix M, describing the mapping for the period 2s 0 , a Floquet solution for arbitrary s is defined as The structure of the monodromy matrix M is given by the following product where M 1 = M 1 (s 0 ) and M 2 = M 2 (2s 0 ) are fundamental matrices of the system (65) describing the evolution (z(s), p(s)) for 0 < s < s 0 and s 0 < s < 2s 0 , correspondingly. The reflection matrices at points A and B (see Fig. 6), R A and R B are given by , where γ is the angle of incidence of the trajectory at the points A, B. To attain R A and R B , the classical action S of the phase function at the reflecting boundary requires continuity to be set between the incident and the reflected beams (see [50], [54], [51]). In a general case, the entries of M 1,2 are to be determined numerically as the Hamiltonian system has variable coefficients. It is worth to remark that all the multipliers in (71) are symplectic matrices. Thus, the monodromy matrix M is symplectic. The classical theory of linear Hamiltonian systems with periodic coefficients states that, if |TrM| < 2, we have a stable PO (elliptic fixed point, for example, see [27]), and ||M n || < const for arbitrary n ∈ N. Then, there exist two bounded complex Floquet's solutions for −∞ < s < +∞, namely, (z(s), p(s)) and (z(s),p(s)) with Floquet's multipliers λ 1,2 = e ±iϕ (0 < ϕ < π), which are solutions of These solutions (z(s), p(s)) and (z(s),p(s)) may be obtained as follows. 
If |TrM| < 2, the monodromy matrix has complex eigenvectors w = (w z , w p ) and w̄ = (w̄ z , w̄ p ), and the first Floquet solution is determined by them. Satisfaction of the zigzag boundary condition at the interface L 1 (u| L 1 = 0), together with the requirement that the solution be periodic with respect to s ∈ R with period 2s 0 , leads to a generalized Bohr-Sommerfeld quantization condition determining the semiclassical asymptotics of the high-energy spectrum. Namely, after integration around the closed loop of the PO, the total variation of the classical action S and of the phase of the amplitude of ψ 0 must be equal to 2πm 1 . Thus, we obtain the quantization condition (72) for electrons and holes; for the monolayer the phase offset is ∆ = π − γ 2 , where m 1,2 ∈ N are the longitudinal and transversal quantization indices, and the + sign corresponds to electrons, the − sign to holes. The index m 2 and the factor ∆ appear due to the variation of the phase of σ m 2 (s, ν) (see the formulas in [66]). Numerical results In this section we concentrate on the example of monolayer graphene with the piecewise linear potential U(x 2 ) = β|x 2 |. The numerical techniques used in this section are described in [55]. We deal with the Dirac system (2) by incorporating the Landau gauge A = B(−x 2 , 0, 0). Thus, using dimensionless U, E, α and dimensionless coordinates, the Dirac system is written in dimensionless form. The energy in physical units is given by U 0 E, where U 0 = v F ħ/(hD) = 6.59 meV/h is the characteristic scale of the potential U. Here we assume that D = 10 −7 m. The small dimensionless parameter h (0 < h << 1) is assumed to be prescribed. The magnetic induction amplitude is given by B = αcU 0 /(v F eD) = (α/h) × 6.59 × 10 −2 T. Consider, as an example, a family of continuous POs which are symmetric with respect to both axes, with two reflection points A, B (see Fig. 1). The formulas describing the electronic POs as solutions of the corresponding integrable system, with the Hamiltonian in the Landau gauge on the level set H = 0, are easily obtained, with separate parametrizations for the lower part (0 < t < t 0 ) and for the upper part (0 < t < t 0 ) of the orbit. Here π 1 and π 2 are the initial values of the components of momentum p 1 and p 2 at the point A, and Ω = (α 2 − β 2 ) 1/2 . It is important that α > β. In this case a drift motion of electrons and holes takes place in the positive direction of the x 1 axis, from the point A to the point B (see Fig. 6). This fact helps to construct POs. We assume that the inequality E > U(x 2 ) holds everywhere in the domain in which we construct asymptotic solutions for the electronic eigenfunctions. In equations (75)-(76), π 1 is a fixed parameter, whereas π 2 and t 0 , as functions of π 1 , are determined uniquely by the equations f 1 (t 0 , π 1 , π 2 , β) = D and f 2 (t 0 , π 1 , π 2 , β) = 0. The formulas describing the hole POs, with the Hamiltonian on the level set H = 0, are analogous, with separate parametrizations for the upper part (0 < t < t 0 ) and the lower part (0 < t < t 0 ). It is worth remarking that holes move clockwise around the PO whereas electrons run counter-clockwise around the PO contour. Thus, for electrons and holes we have a continuous family of POs, parametrized by π 1 , which look like lens-shaped contours. 
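The conversion back to laboratory units quoted above (U 0 = v F ħ/(hD) = 6.59 meV/h and B = (α/h) × 6.59 × 10 −2 T for D = 10 −7 m) can be checked directly. The snippet below uses the SI form of the expressions (the text quotes the Gaussian-unit form containing c) and the values h = 0.025, α = 1 of the numerical examples.

```python
# Conversion of the dimensionless parameters (h, alpha, E) of the nanoribbon
# resonator back to laboratory units, following the relations quoted in the
# text: U0 = hbar*v_F/(h*D) = 6.59 meV/h and B = alpha/h * 6.59e-2 T for D = 1e-7 m.
hbar = 1.054571817e-34
vF   = 1.0e6
e    = 1.602176634e-19
D    = 1.0e-7                      # characteristic length, 100 nm

def U0_meV(h):
    return hbar * vF / (h * D) / e * 1e3

def B_tesla(alpha, h):
    return alpha * (hbar * vF / (h * D)) / (vF * e * D)

h, alpha = 0.025, 1.0              # values used in the numerical examples of the text
print(f"U0 = {U0_meV(h):.1f} meV")        # ~264 meV
print(f"B  = {B_tesla(alpha, h):.2f} T")  # ~2.6 T
# a dimensionless level E = 2.2538 (the m1 = 27, m2 = 0 state discussed below)
print(f"E  = {2.2538 * U0_meV(h):.0f} meV")
```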
As soon as the parameter π 1 has been determined from the generalized Bohr-Sommerfeld quantization condition (72), the semiclassical energy levels for electrons and holes are computed from the corresponding Hamiltonian. For the lens-shaped class of POs the high-energy semiclassical localized eigenstates were tested successfully against the energy eigenvalues and the eigenfunctions computed by the finite element method using COMSOL (see [70]). The zigzag boundary conditions were used in the following numerical experiments. In Figs. 9 and 10, the electronic eigenfunction density component |u| 2 is shown, computed semiclassically for the states m 1 = 27, m 2 = 0, 1, 2 with E = 2.2538, 2.2812, 2.3078 for h = 0.025, α = 1.0, β = 0.5. It is worth remarking that in this case the eigenfunction density components are localized in a close neighborhood of the PO. In all figures computed by semiclassical analysis one can easily see the white contour of the PO. Fig. 9 also shows the dependence of TrM/2 on π 1 (panel a) and of Im(Γ) on s (panel b) for the state m 1 = 27, m 2 = 0 with E = 2.2538, α = 1, β = 0.5, h = 0.025. Conclusion In this review we have outlined our work on the semiclassical analysis of graphene structures and introduced some new results for monolayer and bilayer graphene. We have outlined a range of new asymptotic methods and a semiclassical analysis of Dirac electron-hole tunneling through a Gaussian-shaped barrier that represents an electrostatic potential. We started by analyzing the rectangular barrier and have found some important differences between it and the smooth barrier. Namely, the smooth barrier exhibits a quasi-discrete spectrum and complex quasibound states that do not exist for the rectangular barrier (in zone "3" in Fig. 3). In both types of barrier Klein tunneling occurs. The WKB approximation involves matched asymptotics and boundary-layer analysis at the turning points in the barrier. The main results of this work are explicit WKB formulas for the entries of the smooth-barrier transfer matrix. This matrix explains the mechanism of total transmission through the barrier for certain resonance values of the energy of skew-incident electrons or holes. Moreover, it has been shown that there exist modes localized within the barrier and exponentially decaying away from it, forming two discrete sets of energy eigenlevels, complex and real, determined by the Bohr-Sommerfeld quantization condition. It was shown that total transmission through the barrier takes place when the energy of an incident electron or hole coincides with the real part of the complex energy eigenlevel of one of the localized modes. These facts were confirmed by numerical simulations using finite element methods and are also in excellent agreement with results obtained using finite difference methods as in [71]. We have also applied Gaussian beam methods, originated by Popov [51] and extended to quantum problems by Zalipaev [66], to describe monolayer and bilayer graphene. We have constructed eigenfunctions and defined the stable periodic orbit conditions and the quantization conditions. The reflection and transmission coefficients of monolayer and bilayer graphene have been derived in full within the context of semiclassical physics. It is clear that these methods can offer good insights into the behavior of the graphene Fabry-Perot resonator. 
Such systems will find applications in plasmonics, and nanoribbon heterostructures made from graphene are promising to emerge. The kind of bilayer structure analyzed here can be created by chemical vapor deposition [32] and this opens up the road to a flurry of geometrically optimized graphene resonator systems, whether acting in isolation or as part of a composite, or array. Appendix A. Transfer and scattering matrix properties for a smooth step Let us formulate this scattering problem in terms of transfer matrix T for the left slope of the entire barrier (see [67]) and Taking into account the conservation of the x-component of the probability density current (see equation (8) in [17] or (18) in [18]) we obtain that Thus, for the slope transfer matrix T it holds that or As a result we have |T 11 | = |T 22 |, |T 12 | = |T 21 |, |det(T)| = 1. For the scattering matrix we have and From (90) we obtain that thus, the scattering matrix is unitary. If the entries of S are known, then, T = −S 11 /S 12 1/S 12 S 21 − S 11 S 22 /S 12 S 22 /S 12 , Time-reversal symmetry in scattering through the graphene barrier would mean that are both asymptotic solutions to the Dirac system, and In what follows that Thus, S 12 = S 21 , and If a 1 = 1, a 2 = r 1 , d 1 = 0, d 2 = t 1 , then If a 1 = 0, a 2 = t 2 , d 1 = 1, d 2 = r 2 , then Appendix B. Transfer and scattering matrix properties for a smooth barrier Let us formulate this scattering problem in terms of transfer matrix T for the entire barrier. The definition of T is given by (81),(82), and looks the same Ta = d. However, for the barrier we have and For the scattering matrix S we have and From (105) we obtain that If the entries of S are known, then, Taking into account the time-reversal symmetry in scattering through the graphene barrier, we obtain S = S T , and 16. Appendix C. WKB asymptotic solution for tunneling through a smooth step. Left slope tunneling Let us assume that E > E c , where E c = |p y | is the cut-off energy. In the case |E| < E c there is no wave transmission through the barrier. On the other side, we assume that E < U 0 − δE, and δE is chosen such as to avoid coalescence of all four turning points. Consider a scattering problem through a smooth step that is the left slope of the barrier. Assume that the right slope in Fig. 1 does not exist, that is U(x) = U 0 if x > x max . In this case we have three domains Ω i , i = 1, 2, 3 with the only difference for Ω 3 = {x : x 2 < x < +∞}. Thus, to the leading order, in the domain Ω 1 we have a superposition of waves traveling to the left and to the right. In the domain Ω 2 we have exponentially decaying and growing contributions. In the domain Ω 3 we have where d = T L a. It is worth remarking that the electron state in x < a transfers into a hole state for x > a. To determine the unknown entries of the transfer matrix T L (see Appendix A), we have to match the principal terms of all asymptotic expansions by gluing them through asymptotically small boundary layers at x = x 1 and x = x 2 . To perform matching asymptotics techniques in this case we introduce a new variable U(x) − E = ξ and derive an effective Schrödinger equation. Then, we have where α = α(ξ) = dξ/dx > 0. Changing u, v as follows we obtain the system of −ihαW ξ + ip y V + ξW = 0, and ihαV ξ − ip y W + ξV = 0. 
Next, differentiating the first equation with respect to ξ and eliminating V, we obtain a second-order ODE for W; once W has been found, V follows from the first equation. Both boundary layers at the two turning points ξ = -|p_y| and ξ = |p_y| are governed by the scaling well known in WKB asymptotics for turning points of the 1D Schrödinger equation as h → 0 (see for example [16]). On the other hand, this scattering problem for equation (112), written as an effective Schrödinger equation, may be represented to leading order by WKB waves of the form w ~ a_1 (ξ² - p_y²)^{-1/4} e^{(i/h)Φ_-(ξ)} (up to a normalization prefactor) for ξ < -|p_y|, with an analogous representation for ξ > |p_y|. According to the method of comparison equations described in [61] and [62], we seek asymptotic solutions, uniform with respect to |p_y|, in which the function z(ξ) is determined by z'^2 (a² - z²/4) = q(ξ). The asymptotics involve the parabolic cylinder function D_ν(z), which is a solution of h² y_{zz} + (h(ν + 1/2) - z²/4) y = 0. Right slope tunneling Now let us formulate the scattering problem with the transfer matrix T_R for the right slope. Taking into account that α = |dξ/dx|, the problem for the transfer matrix, written in terms of solutions of the effective Schrödinger equation, may be represented analogously for ξ > |p_y| and for ξ < -|p_y|. If w is a solution of (138), then w* is a solution of (114); thus the coefficients in (139), (140) are connected through this conjugation. Since d = T_R a, we obtain the entries of T_R in terms of the quantities Q_1 and Q_2. It is worth remarking that Q_1 and Q_2 differ because the function α(ξ) behaves differently over the same segment ξ ∈ (-|p_y|, |p_y|) for the left and right slopes of a non-symmetric barrier.
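As a small numerical illustration of the transfer-matrix/scattering-matrix relations summarized in Appendices A and B (this sketch is an addition, not part of the original derivation), the code below constructs a generic unitary, symmetric 2x2 scattering matrix from placeholder reflection and transmission amplitudes and checks the stated properties |T_{11}| = |T_{22}|, |T_{12}| = |T_{21}| and |det T| = 1 of the corresponding transfer matrix.

```python
import numpy as np

# Placeholder reflection/transmission amplitudes chosen so that S is unitary:
# |r|^2 + |t|^2 = 1, with the second column fixed by orthonormality.
r = 0.6 * np.exp(1j * 0.3)
t = 0.8 * np.exp(1j * 1.1)
S = np.array([[r, t],
              [t, -np.conj(r) * t / np.conj(t)]])

assert np.allclose(S.conj().T @ S, np.eye(2))  # unitarity of S

# Transfer matrix expressed through the entries of S, as in Appendix A.
T = np.array([[-S[0, 0] / S[0, 1],                     1.0 / S[0, 1]],
              [S[1, 0] - S[0, 0] * S[1, 1] / S[0, 1],  S[1, 1] / S[0, 1]]])

print(abs(T[0, 0]) - abs(T[1, 1]))   # ~0, i.e. |T11| = |T22|
print(abs(T[0, 1]) - abs(T[1, 0]))   # ~0, i.e. |T12| = |T21|
print(abs(np.linalg.det(T)))         # ~1, i.e. |det T| = 1
```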
16,658.6
2013-01-01T00:00:00.000
[ "Materials Science", "Physics" ]
Causal hierarchy in modified gravity We investigate the causal hierarchy in various modified theories of gravity. In general relativity the standard causal hierarchy, (key elements of which are chronology, causality, strong causality, stable causality, and global hyperbolicity), is well-established. In modified theories of gravity there is typically considerable extra structure, (such as: multiple metrics, aether fields, modified dispersion relations, Horava-like gravity, parabolic propagation, etcetera), requiring a reassessment and rephrasing of the usual causal hierarchy. We shall show that in this extended framework suitable causal hierarchies can indeed be established, and discuss the implications for the interplay between"superluminal"propagation and causality. The key distinguishing feature is whether the signal velocity is finite or infinite. Preserving even minimal notions of causality in the presence of infinite signal velocity requires the aether field to be both unique and hypersurface orthogonal, leading us to introduce the notion of global parabolicity. Herein we shall seek to generalize the causal hierarchy beyond standard general relativity, to various modified theories of gravity, including multi-metric models, Einsteinaether models, Hořava-like models, modified dispersion relations, etcetera. In doing so we shall partially revise and in some cases extend the work of reference [21], and shall furthermore develop a general notion of curved-spacetime parabolic PDEs, and do so in a manner that still maintains desirable causality properties. Because we wish to keep reasonably close to standard general relativity, we shall focus on ideas that can be closely related to Lorentzian metrics, and shall avoid more general pseudo-Finsler constructions [22][23][24][25][26][27][28]. Finally, we should point out that modifying the causal hierarchy has a potentially serious impact on at least some classes of "black hole mimickers" [29][30][31][32][33][34][35]. The standard general relativity causal hierarchy The standard general relativistic causal hierarchy is pedagogically outlined in many places. For instance, good sources are Hawking-Ellis pages 189-206 [1], Sachs-Wu pages 257-259 [2], [3], and the discussions in references [4,5]. At a more technical level, see Penrose pages 11-38 [6] and reference [7] and the online discussion in reference [8]. See also the recent refinements in references [9,10]. While overall there is good agreement on final results, sometimes definitions are cryptomorphic -that is, sometimes definitions and theorems are interchanged. There is universal agreement on two of the more basic levels of the causal hierarchy. • Chronology condition: There are no closed timelike curves. • Causality condition: There are no closed non-spacelike curves. The absence of closed timelike curves forbids time travel, but already for the closed non-spacelike curves one should sub-divide the discussion into two cases: (i) closed non-spacelike curves with a timelike segment; (ii) closed null curves. -2 -For closed non-spacelike curves with a timelike segment any observer on the timelike segment would be able to receive a message before it was sent; this is clearly undesirable. Furthermore, any closed non-spacelike curve with a timelike segment can be deformed into a timelike curve. So closed non-spacelike curves with a timelike segment lead to violation of the chronology condition. In counterpoint, closed null curves are less directly problematic. 
(Receiving a signal at exactly the same time that it is sent is certainly odd, but not in and of itself logically problematic.) On the other hand, infinitesimal perturbations of closed null curves typically lead to closed non-spacelike curves with a timelike segment (see, e.g., propositions 6.4.4 and 6.4.5 of reference [1]), which is logically undesirable, and so indirectly problematic. For this reason it is generally considered desirable, even at the purely kinematic level, to forbid closed null curves as well. Strong causality is the next step in the usual hierarchy. Perhaps the best way of characterizing strong causality is in terms of the Alexandrov topology based on the chronological diamonds I(x, y). Assuming that the spacetime is time-orientable, so that one has a meaningful notion of "future-pointing", define I+(x), the chronological future of x, as the set of all points that can be reached from x by a future directed timelike curve. Similarly, define I−(x), the chronological past of x, as the set of all points from which one can reach x by following a future directed timelike curve. Then I(x, y) = I−(x) ∩ I+(y) is the chronological diamond based on {x, y}. As depicted in figure 1, the intersection of two chronological diamonds is itself a chronological diamond. (See, e.g., Penrose [6] page 33.) Thus the set of chronological diamonds can be used as a basis for a topology. That is, the collection of arbitrary unions of chronological diamonds defines a topology of open sets on spacetime in which a set is open, by definition, if and only if it is the union of an arbitrary number of chronological diamonds; this is the Alexandrov topology (the causal topology). • Strong causality condition: The Alexandrov topology is Hausdorff. In fact, the strong causality condition is entirely equivalent to the statement that the Alexandrov topology reproduces the usual manifold topology [6]. The strong causality condition can also be rephrased as follows: Surrounding any point x there is an open set U (in the manifold topology) such that any timelike curve that starts at x and then leaves U cannot ever re-enter U. (That is: non-spacelike curves cannot return too topologically "close" to where they start from.) Overall, the strong causality condition is equivalent to the statement that the light cones allow one to reconstruct the usual spacetime manifold topology. Causal diamonds J(x, y), as opposed to chronological diamonds I(x, y), work with non-spacelike curves instead of timelike curves, and lead to closed sets instead of open sets. Define J+(x), the causal future of x, as the set of all points that can be reached from x by a future directed non-spacelike curve. Similarly, define J−(x), the causal past of x, as the set of all points from which one can reach x by following a future directed non-spacelike curve. Then J(x, y) = J−(x) ∩ J+(y) is the causal diamond based on the two timelike separated points {x, y}. So far we have defined a set of causality conditions which ensure the absence of pathologies in the spacetime we are considering. Given that we are interested in theories where the spacetime is a dynamical object, we can always perform a small perturbation of our spacetime. Therefore we may also want to rule out those spacetimes that are arbitrarily close to violating the hierarchy of causality conditions above.
To this end, we construct a partial ordering on the space of Lorentzian metrics L(M ) by saying that one metric [ĝ] ab is "wider" than another second metric [g] ab , denoted [ĝ] ab > [g] ab , if all non-spacelike vectors in the second metric are strictly timelike in the first metric. This partial ordering can be used (in a completely standard fashion) to define open intervals in the set of all Lorentzian metrics, and these open intervals can be used to -4 -define a sub-basis for a topology on the set of Lorentzian metrics one can define on the spacetime manifold -this topology is typically called the C 0 open topology. For the discussion in the next section it will be important to keep in mind that, being a partial order, not every pair of metrics need to be comparable. With this in mind, the next step in the hierarchy, the stable causality condition, can be defined in at least 3 equivalent ways: • There exists a global time function τ (x) whose gradient is everywhere timelike. (So, adopting − + ++ signature, the vector −g ab ∇ b τ is future-pointing timelike. This definition of stable causality implies that for any future-pointing timelike vector V a one has V a ∇ a τ > 0.) • There is a metric wider than the physical metric such that the wider metric satisfies the causality condition. The second and the third statements are clearly equivalent as they are merely different ways of express the same physical concept. The proof that the first statement is equivalent to the other two is not trivial, but can be found in standard textbooks, see e.g. [3] for the equivalence between the first and second statement, or [1] for the equivalence between the first and third statement. (Note that some authors use the strong causality condition instead of the chronology/causality condition in the last definition.) The last step in the standard causal hierarchy, global hyperbolicity, can also be defined in at least 3 equivalent ways: • (Causality condition) + (causal diamonds are compact). • Wave equations with suitable initial data have unique solutions. • The spacetime is foliated by spacelike Cauchy hypersurfaces. For a technical discussion see references [8][9][10]. We shall soon see that the last of these conditions, the existence of a foliation by suitably redefined and suitably modified Cauchy hypersurfaces, is one of the more straightforward causality conditions to work with once one steps far beyond standard general relativity. Overall the message is this: Precise technical details may differ between various sources, but the basic physics the same. The standard general relativistic causal hierarchy is set up so as to successively exclude various phenomena that for one reason or another might be considered "unphysical". We shall now seek to extend this framework beyond standard general relativity. Multi-metric frameworks Perhaps the most straightforward extensions of the usual causal hierarchy occur in multi-metric frameworks. Multi-metric extensions of general relativity have a long and quite complicated history -over the last decade one of the key examples of this type of model has been the dRGT "massive gravity" models [36][37][38][39][40][41], though earlier attempts go back several decades [42]. Central to all of these models are multiple Lorentzian metrics {[g i ] ab } N i=1 (some dynamical, some possibly non-dynamical background metrics) interacting in various ways. 
2 To extend the usual causal hierarchy to such a multi-metric framework at minimum one would want to apply the usual causal hierarchy to each effective metric separately, and to demand that you cannot violate causality by switching metrics [g i ] ab part way through whatever physical process you are interested in. Perhaps the simplest way to formulate this is to redefine the notion of chronological/causal curves as follows: (i) A piecewise chronological curve is one that can be split into connected segments each one of which is timelike with respect to at least one of the metrics [g i ] ab . (ii) A piecewise causal curve is one that can be split into connected segments each one of which is non-spacelike with respect to at least one of the metrics [g i ] ab . With these definitions in place we can immediately generalize the chronology and causality conditions in multi-metric framework as follows: • Piecewise chronology condition: There are no closed piecewise chronological curves. • Piecewise causality condition: There are no closed piecewise causal curves. We can similarly modify the definitions of chronological/causal past/future, and the definitions of chronological/causal diamonds, so as to define strong causality in terms of piecewise timelike curves. It is useful to break down the piecewise chronology/causality conditions above into equivalent conditions that are often simpler to work with: • The piecewise chronology condition being satisfied by is equivalent to each of the individual metrics [g i ] ab independently satisfying the chronology condition, plus the compatibility condition between all the pairs of metrics that • The piecewise causality condition being satisfied by {[g i ] ab } N i=1 is equivalent to each of the individual metrics [g i ] ab independently satisfying the causality condition, plus the compatibility condition between all the pairs of metrics that Thence a piecewise chronological curve, having a tangent vector V a that is timelike with respect to at least one of the individual metrics [g i ] ab , must satisfy V a ∇ a τ > 0, and so cannot be closed. Thence, this definition implies that all the future-pointing propagation cones must lie on the same side of the constant-τ hypersurfaces. Furthermore, the two definitions that rely primarily on the existence of the partial order > also generalize naturally, as one just needs to replace L(M ) by L(M ) N , the latter equipped with the product order: One can then show (see Appendix A) that these definitions are equivalent. There is an alternative way of proceeding in case the set (that is, the propagation cone of [g wide ] ab contains the union of all the propagation cones of the individual [g i ] ab ), then we can apply the usual definitions of general relativity the metric [g wide ] ab : B 1 : There exists a global time function whose gradient is everywhere timelike with respect to [g wide ] ab . B 2 : There is a metric wider than [g wide ] ab such that the wider metric satisfies the causality condition. -7 - providing slightly different characterizations of stable causality, which are not equivalent in general. It is straightforward to realize that the equivalence between these triplets requires (at a minimum) the existence of [g wide ] ab . However, there is a situation which, besides from making these two characterizations comparable from the perspective of logical implication, is interesting from a physical standpoint, namely when is a totally ordered set with respect to >. 
Under this assumption, the two triplets are indeed equivalent, as shown in Appendix A. Intuitively, one can see that the necessary condition mentioned above for these being equivalent is satisfied, as When it comes to global hyperbolicity all of the 3 standard versions of this notion can easily be adapted to the multi-metric framework, although again there are two possible versions which are not fully equivalent in general: These are the minimum requirements for reformulating the causal hierarchy in a multimetric framework. It is heartening to see that at least in these multi-metric situations the standard notions of chronology/causality can be successfully adapted without too much violence. -8 -For current purposes we will take "modified dispersion relations" to mean the following: • For each propagating mode you are interested in, pick a preferred rest frame [43]. (This does not [for current purposes] necessarily have to be the same preferred frame for each propagating mode. More on this point later.) • In that preferred rest frame, go to the eikonal approximation to justify writing a dispersion relation ω = f (k), which will not be Lorentz invariant since then it would be an "ordinary" dispersion relation [43,44]. With dispersion relation in hand, the phase and group velocities are defined by A "signal" is conventionally defined as an abrupt change in the propagating mode [45], mathematically modelled by a Heaviside step-function, which contains arbitrarily high wavenumbers in Fourier space. In view of this the "signal velocity" can usefully be defined as the infinite wavenumber limit of the phase velocity [45]; see also [24]. There are two quite distinct cases: (i) the signal velocity is finite but not the same for all propagating modes; (ii) the signal velocity is infinite for at least one of the propagating modes. These two cases lead to rather different causal structure, and significantly different causal hierarchies, as will be discussed in Sections 8.1 and 8.2 below. Before we move on let us consider several different physical models/frameworks within which the above situations are realized. Einstein-aether frameworks We shall first consider the gravity-aether sector, and then the matter-aether sector. Gravity-aether sector At its most basic, Einstein-aether theories contain both a metric g ab and a normalized aether field u a interacting via the most general Lagrangian leading to 2nd-order field equations for both g ab and u a [46]: Since Eötvös type experiments very strongly constrain the universality of free-fall (and so the [weak] equivalence principle) in the most elementary of the Einstein-aether frameworks one further assumes that the aether field does not directly couple to the matter sector. In view of the existence of both metric g ab and aether field u a , one can always define a wider metric [ The wider metric [g wider ] ab is often referred to as a disformal transformation of the base metric g ab . It is also sometimes referred to as a speed-c metric, where c = √ E 2 + 1.) One is then temped to immediately impose the standard general relativistic causal hierarchy on [g wider ] ab , but there is a technical point to investigate first. How quickly do changes in the aether field propagate through the spacetime? In reference [47], see also [48], the linearized propagation modes of the combined metricaether system have been investigated. 
Decomposing into spin-2, spin-1, and spin-0 modes, the relevant wave equations are all second order, and therefore for all modes one has v group = v phase = v signal . Specifically, (see reference [48]), we have: Spin-1: Spin-0: Here as usual c 123 = c 1 + c 2 + c 3 , and similarly for c 13 and c 14 . The key points are that, barring accidental degeneracies, these signal velocities will be finite, and that these signal velocities are all defined with respect to the same aether field/preferred frame. This makes it straightforward to identify this situation as a particular case of the multimetric framework discussed in Section 3. We can then define, in addition to the base metric g ab , three new metrics [ } is totally ordered (which follows from the total order of R with its standard ordering). Thus, we are precisely in the situation in which the two possible definitions of stable causality and global hyperbolicity in the multi-metric framework are equivalent. This observation justifies the use of the standard general relativity causal hierarchy, but now applied to the metric in the metric-aether sector of the Einstein-aether framework. Matter sector Suppose now that in some extended Einstein-aether framework one introduces direct couplings between the matter sector and the aether field. In view of observational Eötvös type constraints on violations of the equivalence principle, these direct couplings will certainly be small, but they could in principle be there. What would this do? There is the quite likely possibility that the aether-matter couplings might generically, (either at tree level or due to higher-order quantum loops), explicitly break Lorentz invariance in the matter sector -either due to different propagation speeds for different particle species with standard dispersion relation or due to the introduction of higherspatial-derivative terms, leading to modified dispersion relations of the type discussed above in section 4. One would then get back the dichotomy between finite signal speed and infinite signal speed. Which of these two options applies depends on specific details of the model, and cannot be decided without a case by case investigation. The finiteness versus infinitness of the signal speed leads to significantly different causal structures, see Sections 8.1 and 8.2 below. Hořava-like frameworks The key characteristics of Hořava-like frameworks are the assumed existence of a chronon field (a global time function, often called a khronon field) defining a preferred foliation with respect to which all propagating modes exhibit infinite signal speed. Foliation and signal speed Hořava gravity was developed as an attempt at ameliorating the renormalizability problems of quantum gravity by violating Lorentz invariance at intermediate stages of the calculation [49][50][51][52][53][54][55][56]. Effectively one is using Lorentz symmetry breaking as a quantum field theory regulator [57,58]. One keeps 2 time derivatives (to avoid the Ostragowsky instability) but implements at least 6 space derivatives to guarantee power-counting renormalizability. The splitting of spacetime into space+time is taken to be the same for all of the propagating modes, so one is explicitly choosing a preferred foliation of spacetime by spatial 3-surfaces just to set up the formalism. The preferred global time coordinate is typically called the chronon field. 
Thus the key feature of Hořava-like frameworks (in the flat spacetime limit) is the explicit violation of Lorentz invariance leading to PDEs of the form While for the group velocity As long as one is dealing with finite polynomials in the wavenumber, the signal velocity, (the infinite-wavenumber limit of the phase velocity), is now infinite, as is the infinitewavenumber limit of the group velocity. So the causal structure is certainly not of the usual Lorentzian type. Note that finite signal speed is typically associated with the Einstein-aether framework rather than the Hořava framework, especially if one insists on 2nd order field equations both in both the gravity and matter sectors. However, one can also have finite signal speeds with higher-order non-polynomial field equations, for example, when the dispersion relations interpolate between two limit speeds. Notably, an example of this behaviour can be found in analogue spacetimes based on relativistic BECs [59]. Moving away from the flat spacetime limit, each of spatial hypersurfaces in the preferred foliation is a 3-manifold (often called a leaf of the foliation) on which you can construct some Euclidean signature Laplacian ∆ 3 = g ac P ab ∇ b P cd ∇ d , where we have defined the projector P ab = g ab −V a V b . You can then bootstrap the propagation equations to curved spacetime -something similar to Going to the eikonal limit the resulting dispersion relation is polynomial in wavenumber leading to an infinite signal velocity [60]. The causal structure is then certainly not of the usual Lorentzian type, and the generalized causal hierarchy will be discussed in Section 8.2. Relation between Einstein-aether and Hořava frameworks The major difference between Einstein-aether and Hořava frameworks is that Einsteinaether theories are based on a preferred threading by integral curves of the aether field u a whereas Hořava models are based on a preferred foliation by constant chronon hypersurfaces τ (x). Also, the Einstein-aether framework explicitly asks for secondorder equations of motion. Nevertheless there are scenarios where the two formalisms overlap -one simply needs to drive the vorticity of the aether to zero and to restrict attention to low wavenumbers. See particularly reference [61]. How do we generalize this to curved spacetime? Assuming the existence of a preferred foliation, with leaves labelled by a chronon field similar in spirit to that of Hořava-like frameworks, then each of the spatial hypersurfaces is a 3-manifold on which you can construct a Euclidean signature Laplacian ∆ 3 . • By choosing a 4-vector V a transverse and future pointing with respect to the spatial slices (that is, V a ∇ a τ > 0 you can then bootstrap the parabolic PDEs to curved spacetime: Going to the eikonal approximation, the dispersion relation is again of the form ω ∝ k 2 , again leading to an infinite signal velocity. Note that the existence of an infinite signal velocity is intimately related to the preferred foliation. • If in contrast one chooses V a ∇ a τ = 0 then the PDE reduces to a collection of uncoupled elliptic PDEs, one on each preferred time slice, (one on each leaf of the foliation). • Finally choosing V a ∇ a τ < 0 results in an anti-diffusion equation (a backwardsin-time diffusion equation). Though the motivation (and many specific details) are now different, the existence of a preferred foliation and infinite signal speed is shared with the Hořava-like models. 
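To make the notion of an infinite signal velocity concrete, here is a small illustrative computation (an addition, not taken from the references) for a toy Hořava-type dispersion relation ω² = c²k² + k⁴/k_*², where c and k_* are arbitrary placeholder constants. Both the phase and group velocities grow without bound at large wavenumber, so the signal velocity, defined as the infinite-wavenumber limit of the phase velocity, is infinite.

```python
import numpy as np

c, k_star = 1.0, 10.0          # placeholder low-k speed and Lorentz-violation scale

def omega(k):
    # Toy Horava-type dispersion relation: omega^2 = c^2 k^2 + k^4 / k_star^2
    return np.sqrt(c**2 * k**2 + k**4 / k_star**2)

k = np.logspace(0, 4, 5)                                  # wavenumbers from 1 to 10^4
v_phase = omega(k) / k                                    # phase velocity omega/k
v_group = (c**2 * k + 2 * k**3 / k_star**2) / omega(k)    # analytic d(omega)/dk

for ki, vp, vg in zip(k, v_phase, v_group):
    print(f"k = {ki:10.1f}   v_phase = {vp:10.2f}   v_group = {vg:10.2f}")
# v_phase ~ k/k_star for k >> k_star, so the k -> infinity limit (the signal
# velocity) diverges; contrast this with an ordinary dispersion relation
# omega = c k, for which v_phase = v_group = c at all wavenumbers.
```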
The causal structure is certainly not of the usual Lorentzian type, and the generalized causal hierarchy will be discussed in Section 8.2. Now that we have reviewed the relevant frameworks let us then look to consider the different scenarios they can lead to for what regards the propagation of signals. Signal velocities In the previous sections we have analyzed three possible frameworks, those arising from Einstein-aether theories, Hořava-like systems, and parabolic equations. The main difference among these frameworks is the speed of propagation of signals being either finite or infinite. An example of the first possibility is Einstein-aether theory. If the aether-matter couplings are such as to still lead to second-order wave equations, then one would still have a set of nested signal cones satisfying v group = v phase = v signal (as above for the metric-aether modes). One would then (as above) define [g wide ] ab based on the fastest of the signal velocities, and again apply a suitably modified version of the standard general relativity causal hierarchy. The situation in Hořava-like and parabolic frameworks where one or more of the signal velocities is infinite is qualitatively different and considerably trickier. (The "signal metric" introduced above becomes degenerate, and the "signal cones" widen out as far as possible to become "signal planes"). Let us now study the two cases of finite and infinite propagation speed in detail. Finite signal velocities On general grounds, if all the signal velocities are finite, then the causal hierarchy is extremely similar to that for the multi-metric framework. As per the discussion for modified dispersion relations, pick any propagating mode, and go to the appropriate rest frame. Then we can (pointwise) define the "signal metric" with the associated "signal cones" ||d x|| = v signal |dt|. Each propagating mode can be associated with a Lorentzian signal metric of this form, so from a chronology/causality -14 -point of view any situation with finite signal velocities can be interpreted as an example of the multi-metric framework. The minor modifications we previously discussed for extending the causal hierarchy to the multi-metric framework will also apply in situations where all the signal velocities are finite. In general situations one can use the definitions of stable causality and global hyperbolicity, respectively; if [g wide ] ab exists, one can use instead the definitions (again, these different prescriptions will be equivalent if the set of metrics is totally ordered). Infinite signal velocities Suppose we have an aether covector field u a (not at this stage necessarily hypersurface orthogonal) with respect to which some excitation has infinite signal speed. Then: • Define the analogue of chronological curves in terms of the tangent being futurepointing with respect to the aether, t a u a < 0. (No metric is required to establish this.) • Define the analogue of causal curves in terms of the tangent being non-pastpointing with respect to the aether, t a u a ≤ 0. (No metric is required to establish this.) We shall first show that for infinite signal speeds sensible causal behaviour implies that the aether field has to be hypersurface orthogonal. We shall also show that for infinite signal speeds sensible causal behaviour implies that the aether field is unique. 
Hypersurface orthogonality of the aether To demonstrate the need for hypersurface orthogonality of the aether (in the presence of infinite signal velocity) suppose the contrary, that the aether is not hypersurface orthogonal, so that its vorticity is nonzero: ω^a = ε^{abcd} u_b u_{[c,d]} ≠ 0. Pick any point p in the manifold and go to Riemann normal coordinates, pointing u_a in the t direction, (1, 0, 0, 0), and ω^a in the z direction, (0, 0, 0, 1). Here we note |u|² = −1 + O([δx]²) and ω^a = (0, 0, 0, ω) + O([δx]²). Now in these coordinates consider the closed, topologically circular curve C : x^a(λ) = (0, r_* cos θ(λ), r_* sin θ(λ), 0). As long as ω ≠ 0 we can choose r_* small enough to safely ignore the O(r_*³) term. Then choose the sign of dθ/dλ to be the same as the sign of ω and one has a closed chronological curve. Thus if ω ≠ 0 in the presence of infinite signal velocities the causal hierarchy fails at the very first step. That is: to preserve the chronology condition in the presence of infinite signal speed, the relevant aether field with respect to which infinite signal velocity is defined must have zero vorticity and so be hypersurface orthogonal. So we can write the aether as the normalized gradient of a scalar, equation (8.7). Uniqueness of the aether field Suppose now that we have two propagating modes, both with infinite signal speeds, but defined with respect to two distinct hypersurface orthogonal aether fields u_1^a and u_2^a. We want to show that u_1^a = u_2^a if any sensible notion of causality is to survive. For the current argument it is good enough to work in flat Minkowski space. Pick any point p in spacetime and go to coordinates where (u_1^a + u_2^a)/2 is pointing in the t direction, (1, 0, 0, 0), and the spatial 3-vectors (u_1)^i and (u_2)^i are pointing in the ±x directions, (±1, 0, 0). Then for some −1 < v < 1 we have g_ab = η_ab; (u_{1,2})_a = γ(−1; ±v, 0, 0), (8.8) where γ = (1 − v²)^{−1/2} so that |u_{1,2}|² = −1. Signals that are instantaneous with respect to one aether field, followed by signals instantaneous with respect to the other, can then be combined into closed chronological curves unless v = 0, that is, unless the two aether fields coincide. Causal structure Hence in the presence of infinite signal velocities the preservation of even the most basic notions of causality implies that there is a global time function τ(x) whose normalized gradient (8.7) is unique and causally well-behaved in the sense described below: • Define the analogue of chronological curves in terms of the tangent being future-pointing with respect to the global time function, t^a ∇_a τ > 0. (No metric is required to do this.) • Define the analogue of causal curves in terms of the tangent being non-past-pointing with respect to the global time function, t^a ∇_a τ ≥ 0. (No metric is required to do this.) With these definitions there automatically are no closed chronological curves; thence a variant of the chronology condition is built into this formalism. There are closed causal curves (infinite speed communication) but this is not a problem since by assumption there is a unique global time function to keep things under control. Using the modified notions of chronology and causality defined above we can still define the notions of chronological past I−(x) and chronological future I+(x), but these are no longer cone-like; they are instead half-spaces. One can similarly define "chronological intervals" I(x, y) = I−(x) ∩ I+(y), but these are no longer diamonds; they are instead slabs. Indeed, in terms of the preferred global time function one has I(x, y) = τ^{−1}((τ(y), τ(x))). There is still an induced Alexandrov topology, but it is now not Hausdorff.
(Any 2 distinct points x and y on the same global time slice, τ (x) = τ (y), can never be separated by disjoint open sets in this Alexandrov topology since I ± (x) = I ± (y).) • For infinite signal velocities there is no analogue of strong causality. • For infinite signal velocities stable causality needs significant revision. (While by assumption we have a global time function τ (x) in the most general setting there is not necessarily a Lorentzian metric available to define timelike, and there is certainly by construction no "wider" metric to deal with.) • For infinite signal velocities the concept of global hyperbolicity needs significant revision. (The causality condition is definitely violated and the causal diamonds are no longer diamonds they are slabs; so they are certainly not bounded and certainly not compact.) What is instead do-able for infinite signal velocities is to introduce the new concept of global parabolicity, which is now defined as demanding a foliation by (parabolic) Cauchy hypersurfaces (crossed by chronological curves once and once only). This implies in particular that for any point x we demand that is the entire spacetime, so that diffusion equations with suitable initial data (on any fixed but arbitrary slice of the preferred foliation) have unique solutions. Universal horizons Working in Hořava-like models (the key ingredients being the existence of a preferred foliation and infinite signal velocity) leads to a new concept -that of a universal horizon [62][63][64]; see also references [21,60]. The most basic form of universal horizon arises in static and spherically symmetric situations iff the gradient of the chronon is anti-parallel to the gradient of the radial coordinate, ∇ a τ = −χ 2 ∇ a r. They also appear in slowly rotating black hole solutions in Hořava gravity [65]. If this happens, then for any chronological curve the tangent vector, by definition satisfying t a ∇ a τ > 0, also satisfies t a ∇ a r < 0. Thus all chronological curves are trapped and forced to move "inwards". Thence the universal horizon is a constant chronon hypersurface, (a leaf of the preferred foliation), that is simultaneously a constant r hypersurface. See figure 2. To properly define "static" one also needs to have a metric g ab available, in order to map the Killing vector into a covector, K a = g ab K b , to which one can apply the hypersurface orthogonality constraint. One also needs the metric to define the notion of the Killing vector being timelike (sufficiently far away from the black hole region). Under these conditions the static Killing vector is 4-orthogonal to ∇r, and so the universal horizon can equally well be defined by K a ∇ a τ = 0, see [21]. Moreover since the static Killing vector is hypersurface orthogonal it induces a natural "Killing time" coordinate, t, in terms of which we have K a ∝ ∇ a t. So at the universal horizon one has g ab ∇ a τ ∇ b t = 0; that is, gradients of the "chronon time" and "Killing time" are 4-orthogonal (note however that ∇ a t is spacelike inside the Killing horizon). Because one has a Lorentzian metric g ab available, one can introduce the usual notions of i − , i 0 , and i + , and also J ± . Asymptotically, as one moves "outwards" on the constant chronon hypersurface corresponding to the universal horizon one must approach i + . (This is not what one would naively expect in "normal" situations, an asymptotic approach to i 0 . See figure 2. This observation can be modified to develop a general definition of universal horizon.) 
A generic condition for defining a universal horizon is that it is a constant-chronon leaf of the preferred spacetime foliation that contains i + . So a universal horizon would not be a Cauchy hypersurface, since causal curves that intersect J + would not necessarily intersect the universal horizon. In contrast, under "normal" conditions constant-chronon leaves of the preferred spacetime foliation asymptote to i 0 ; so they would be Cauchy hypersurfaces. This compatible with the constructions developed in [21]. Now is the universal horizon a Cauchy horizon? This depends, very delicately, on precise technical definitions. The key thing about Cauchy horizons is that something odd is happening to the "initial data" needed to define time-evolution into the future. Is there something odd with universal horizons? One issue is this: Since i + lies on the universal horizon, then anything outside the universal horizon can influence physics at i + . But then given the assumed infinite signal speeds used to define the universal horizon, anything anywhere in the domain of outer communication can influence physics anywhere on the universal horizon. This is certainly odd behaviour. More precisely -the leaves of the preferred foliation before formation of the universal horizon all asymptote to i 0 , and so to get a well-defined Cauchy problem at worst one needs to impose some regularity condition at spacelike infinity i 0 . In contrast, after the universal horizon forms one needs new extra "initial data" (corresponding to some regularity condition at future timelike infinity i + ) in order to set up a well-defined Cauchy problem. It is in this precise technical sense that the universal horizon can be considered to be a Cauchy horizon. There are various ways of rephrasing this in a more formal manner. For instance: If x lies on a universal horizon then J − (x) = J − (i + ). Furthermore, if the event x precedes formation of the universal horizon, then x ∈ I − (i + ) and J + ⊂ I + (x). The overall message is clear: Preferred foliations combined with infinite signal speeds lead to unusual but internally consistent notions of causal hierarchy. The tachyonic anti-telephone Let us now connect the discussion herein back to the famous Benford-Book-Newcomb article on the "tachyonic anti-telephone" [11]. What Benford-Book-Newcomb did was to explicitly show that (superluminal communication) + (relativity principle) =⇒ (causality violation) (10.1) Specifically they constructed a closed loop, built out of a combination of timelike segments and superluminal signals, such that the reply arrived before the query was sent. Now turn this logic around: That is, if on the one hand you want some form of superluminal communication, and on the other hand you want some sensible notion of causality, then you must have some sort of extra structure that goes beyond standard special relativity or general relativity. -20 -If (in a "fundamental" theory) that extra structure is non-dynamical, then it goes against the grain of Einstein's requirement for "no prior geometry". 3 However in the presence of external constraints (eg, the Casimir vacuum between parallel plates) the external constraints provide a natural class of preferred frames. 
(This is why, for instance, the Scharnhorst effect [12,13], superluminal photon propagation in the Casimir vacuum, is not a problem in terms of causality -the presence of the parallel plates breaks the 3+1 Lorentz symmetry down to a 2+1 Lorentz symmetry by picking out a preferred direction, the spacelike normal to the plates.) If that extra structure is dynamical, (be it multi-metric, Einstein-aether, Hořava, or something else), then there are significant observational constraints that should be taken into account. In short -the situation is not a free-for-all -there are tolerably good proposals compatible with good causal structure and effective superluminal signalling, but any such model needs careful phenomenological analysis. On a more positive note, the presence of such extra structure, (specifically and quite explicitly in the case of imposing a preferred foliation), automatically enforces Hawking's chronology protection conjecture thereby keeping the universe safe for historians [66][67][68][69][70]. Discussion and Conclusions The framework we have developed above allows one to mathematically extend the usual Lorentzian causal hierarchy to multi-metric spacetimes, Einstein-aether models, Hořava-like spacetimes, modified dispersion relations, and parabolic PDEs. The key dividing point in the analysis is whether the signal velocity is finite or infinite. When the signal speed is finite, a variant of the usual general relativistic causal hierarchy can be formulated. When the signal speed is infinite, a significantly modified causal hierarchy must be formulated in terms of a global time function (chronon). Preserving even minimal notions of causality in the presence of infinite signal velocity requires the aether field to be both unique and hypersurface orthogonal, leading us to introduce the notion of global parabolicity. Either case provides a logically coherent framework for dealing with "superluminal" signalling while still maintaining a consistent approach to causality. 3 There is a messy terminological issue here: When Einstein was developing general relativity in the 1910s he was using the (with hindsight) unfortunate phrase "general covariance". In modern terminology one distinguishes two separate concepts "coordinate invariance" and "no prior geometry". With enough work, it is now realized that anything can be made "coordinate invariant"; one just needs enough non-dynamical background structure. The physics of what Einstein was trying to get at with his phrase "general covariance" was actually what we would now call "no prior structure" -everything (apart from signature, topology, and a few coupling constants) should be dynamical. A Proofs of technical propositions in multi-metric frameworks Let us first show that the three definitions of stable causality in each of the triplets are just the usual definitions for stable causality valid in general relativity but applied to the metric [g wide ] ab ; the equivalence of the three definitions of stable causality within the framework of general relativity is discussed for instance in [9,10]. These metrics can more formally be written as For any vector u a these metrics will satisfy N ]. Geometrically the conditionĴ + i (p) ∩Ĵ − j (p) = ∅ means that none of the past propagation cones of any of the [g i ] ab can intersect any of the future propagation cones of any of the [g i ] ab . This implies that all of the future propagation cones must lie on the same side of some hypersurface with normal n a ∝ ∇ a τ . 
(We do not need to explicitly find this hyperplane, we just need to know that it exists.) Then for all vectors [V i ] a that are future-pointing timelike with respect to the respective metric [g i ] ab we have [V i ] a n a > 0 and so [V i ] a ∇ a τ > 0. • A 2 ⇐⇒ A 3 : This implication and its converse are as straightforward as they are in standard general relativity [9,10]. Let us now show that the two triplets is a totally ordered set. More specifically, we can show that: • Finally we note that in general relativity, stable chronology (stability of the chronology condition under C 0 perturbations) and stable causality are often used interchangeably due to their equivalence [9,10]. For completeness, we can show that a similar statement holds for their piecewise generalizations in a multi-metric framework: • Equivalence of piecewise stable chronology and piecewise stable causality: It is clear that the latter implies the former.
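As a toy numerical check of the "wider than" relation > used throughout this appendix (an illustration under assumed diagonal metrics, not part of the original proofs), the sketch below samples vectors that are non-spacelike with respect to a base Minkowski metric and verifies that each is strictly timelike with respect to a wider "speed-c" metric with c = 2; the value of c and the sampling scheme are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

g     = np.diag([-1.0, 1.0, 1.0, 1.0])                          # base metric, signature (-+++)
c     = 2.0
g_hat = np.diag([-1.0, 1.0 / c**2, 1.0 / c**2, 1.0 / c**2])     # wider "speed-c" metric

# Sample vectors v = (1, s*n) with |n| = 1 and 0 <= s <= 1, so g(v, v) = -1 + s^2 <= 0,
# i.e. every sample is non-spacelike (causal) with respect to the base metric g.
n = rng.normal(size=(10000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
s = rng.uniform(0.0, 1.0, size=(10000, 1))
v = np.hstack([np.ones((10000, 1)), s * n])

norms_g     = np.einsum('ij,jk,ik->i', v, g, v)
norms_g_hat = np.einsum('ij,jk,ik->i', v, g_hat, v)

assert np.all(norms_g <= 1e-12)       # causal with respect to g
assert np.all(norms_g_hat < 0.0)      # strictly timelike with respect to g_hat
print("every g-causal sample vector is g_hat-timelike, so g_hat is wider than g on this sample")
```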
9,711.8
2020-05-18T00:00:00.000
[ "Philosophy" ]
Lipid Extraction from Spirulina sp. and Schizochytrium sp. Using Supercritical CO2 with Methanol Microalgae are one of the most promising feedstocks for biodiesel production due to their high lipid content and easy farming. However, the extraction of lipids from microalgae is energy intensive and costly and involves the use of toxic organic solvents. Compared with organic solvent extraction, supercritical CO2 (SCCO2) has demonstrated advantages through lower toxicity and no solvent-liquid separation. Due to the nonpolar nature of SCCO2, polar organic solvents such as methanol may need to be added as a modifier in order to increase the extraction ability of SCCO2. In this paper, pilot scale lipid extraction using SCCO2 was studied on two microalgae species: Spirulina sp. and Schizochytrium sp. For each species, SCCO2 extraction was conducted on 200 g of biomass for 6 h. Methanol was added as a cosolvent in the extraction process based on a volume ratio of 4%. The results showed that adding methanol in SCCO2 increased the lipid extraction yield significantly for both species. Under an operating pressure of 4000 psi, the lipid extraction yields for Spirulina sp. and Schizochytrium sp. were increased by 80% and 72%, respectively. It was also found that a stepwise addition of methanol was more effective than a one-time addition. In comparison with Soxhlet extraction using methylene chloride/methanol (2:1, v/v), the methanol-SCCO2 extraction demonstrated its high effectiveness for lipid extraction. In addition, the methanol-SCCO2 system showed a high lipid extraction yield after increasing biomass loading fivefold, indicating good potential for scaling up this method. Finally, a kinetic study of the SCCO2 extraction process was conducted, and the results showed that methanol concentration in SCCO2 has the strongest influence on the lipid extraction yield. Introduction Fossil fuels, which include coal, petroleum, and natural gas, comprise 81% of the total energy consumed by mankind in 2014 [1]. Compared to other energy types, developing and producing fossil fuels is the most economical way of energy production. Also, the energy density of fossil fuels is above other alternative energy sources, which means fossil fuels are the most efficient energy sources. However, the use and consumption of fossil fuels has caused many problems. Because fossil fuels are considered to be a nonrenewable and nonsustainable form of energy, their production will decline and eventually be exhausted. Since the combustion of fossil fuels emits greenhouse gases (GHGs), the use of fossil fuels is a major contributor to global warming [2]. The extraction of fossil fuels also causes other forms of environmental degradation [3]. Because of these disadvantages, developing sustainable and cleaner energy forms is a global necessity and has become an urgent pursuit in scientific research. Biodiesel is a renewable energy form that can be used as a substitute for fossil fuels. It is considered to be one of the best candidates to replace fossil diesel, because it can be directly used in any compression ignition engine without major modifications [4]. Biodiesel can be used in its pure form, but due to its high production cost and poor cold flow properties, it is often used as a diesel additive. Commercially used biodiesel is primarily composed of fatty acid methyl esters (FAMEs) that are produced via transesterification of triglycerides obtained from plants, animal fats, and microalgae [5]. 
Among all of the biodiesel production feedstocks, microalgae are considered to be the most promising feedstock due to their high rate and efficiency of photosynthesis. Microalgae can accumulate high lipid content in their biomass (up to 77% of total biomass), depending on the type of microalgae [6]. Compared to other available feedstocks for biodiesel production, microalgae have higher oil productivity and require less land area for cultivation. Most importantly, commercial-scale microalgae farms would not be in competition with food production [5]. Nevertheless, the main drawback of microalgae as a feedstock for biodiesel production is the energy intensive and costly production process, due mainly to the harvesting and the lipid extraction processes [7]. It is therefore necessary to develop effective methods to reduce the cost of these processes. For lipid extraction, two methods are often used: organic solvent extraction and supercritical CO2 (SCCO2) extraction. Organic solvent extraction requires a large volume of expensive and toxic organic solvents. It also requires an energy intensive lipid-solvent separation process after extraction, and not all of the solvent can be recycled. In addition, the lipid extraction rate is slow [8]. Compared to organic solvent extraction, SCCO2 extraction has several advantages. SCCO2 extraction is faster and has a higher selectivity toward biodiesel-desirable lipid fractions, and the solvent-liquid separation process is not required, since SCCO2 becomes a gas after the pressure is reduced [8]. The main drawback of SCCO2 extraction is its high equipment installation cost and its high energy requirement [9]. Previous research has demonstrated effective results for SCCO2 lipid extraction from microalgae. Halim et al. [10] studied the effect of SCCO2 on extracting lipid from the marine microalga Chlorococcum sp. They found that 80 min of SCCO2 extraction at 4400 psi and 30°C resulted in a maximum lipid yield of 7.1 wt% of dry biomass, which was equivalent to 5.5 h of hexane extraction using a Soxhlet apparatus. Mendes et al. [11] studied the SCCO2 lipid extraction of Arthrospira maxima with the addition of ethanol. They found that the addition of a polar modifier such as ethanol could greatly improve the results of SCCO2 extraction. Under the optimal experimental conditions of 60°C and 5000 psi, the addition of ethanol could increase the γ-linolenic acid (GLA) extraction yield from 0.05 wt% of dry biomass to 0.44 wt%. Other researchers have also reported high efficiency supercritical CO2 extraction [12][13][14]. However, many of these experiments were conducted at the bench scale, and few of them used microalgae masses beyond 200 g. For a larger amount of microalgae sample, the effectiveness of SCCO2 extraction is not well investigated. Moreover, most of the SCCO2 extraction experiments were conducted at high pressures (at or above 5000 psi). Operating at such high pressures at commercial scale is more costly and could be potentially dangerous. Further, adding a polar modifier to the SCCO2 extraction process has been demonstrated to improve extraction efficiency [11]. Lorenzen et al. [15] conducted an industrial scale lipid extraction on the microalga Scenedesmus sp. using SCCO2 at low pressure (1740 psi) without any modifier or cosolvent in the extraction process, resulting in a long extraction time requirement (9 h) for a high lipid extraction yield.
The goal of this study is to investigate the scaling up potential of SCCO 2 extraction for microalgae. Our study focused on SCCO 2 extraction at a larger scale (from 200 g to 1 kg) at moderate pressures (below 5000 psi) with a modifier (methanol) added. A comparison between SCCO 2 extraction and Soxhlet extraction was also conducted. The fatty acid profile in the extracted lipids was analyzed in order to predict the quality of biodiesel produced. Finally, a kinetic model was established to analyze the effects of pressure, methanol concentration, and CO 2 flow rate on the lipid extraction yield. Optimizing the Operating Pressure of SCCO 2 Extraction. The SCCO 2 extraction unit used in this research is shown in Figure 1(a). Figure 1(b) is a schematic representation of the supercritical extraction process. 200 g of dry microalgae biomass is first placed in an extractor. An air-driven gas booster was used to pump CO 2 from a cylinder to the extractor and increased the pressure in the extractor. The operating pressure has a significant impact on the SCCO 2 extraction process, because the solubility of SCCO 2 is a function of the pressure. A pressure regulator was installed at the exit of the extractor in order to reduce the pressure in the separators, where the pressure was kept at 350 psi. The CO 2 became a supercritical fluid and was able to extract lipids from the microalgae in the extractor, returned to a gas phase in the separators, and then was pumped back to the extractor by the air-driven gas booster. The extracted lipids were automatically collected in the separators. A valve was installed at the bottom of each separator to collect lipids after the operation. After the extraction process, the CO 2 in the system was pumped back to the CO 2 cylinder for recycling. Although two separators were used in the process, most of extracted lipids were collected in the first separator. The purpose of the second separator was to provide extra retention time to ensure pure CO 2 was pumped back to the CO 2 cylinder. The entire system was maintained at 50 ∘ C. In order to investigate the effect of pressures, three different operating pressures were tested during the SCCO 2 extraction process: 2000 psi, 3000 psi, and 4000 psi. Since the CO 2 flow in the system was controlled by the air-driven gas booster, the pressure in the extractor also affected the CO 2 flow rate in the system. The corresponding CO 2 flow rates for the gas booster operated under 2000 psi, 3000 psi, and 4000 psi were 7.5, 7.1, and 6.8 standard cubic feet per minute (scfm), respectively, based on the performance curves of the gas booster. The residence time of CO 2 in the extractor can be roughly estimated using the ratio of CO 2 mass in the 5 L extractor to the CO 2 mass flow rate. Under 2000 psi, 3000 psi, and 4000 psi of pressure, the residence times of CO 2 were calculated as 0.13, 0.18, and 0.20 h, respectively. Two different species of microalgae were tested in the SCCO 2 extraction. Spirulina sp., a blue green alga commonly used as a food supplement or animal nutrition [16] with a lipid content of less than 33.3% dry weight of biomass (as reported by the producer), was purchased from Ojio5 (India). The other species was Schizochytrium sp., which is a marine microalga characterized by its high content of polyunsaturated fatty acids [14], had a lipid content of 25% dry weight of biomass (as reported by the producer), and was purchased from Xiamen Huison Biotech Co. (China). 
The lipid content of this microalga was determined after docosahexaenoic acid (DHA) extraction conducted by the company. Both microalgae were obtained as dry powders and were used directly in the extraction without pretreatment. CHNS analysis and TGA analysis were conducted for both species to determine the elemental composition of the microalgae biomass. For Spirulina sp., the composition was 45.83 wt% C, 6.52 wt% H, 10.73 wt% N, 0.66 wt% S, 20.76 wt% O, 6.46 wt% moisture, and 9.04 wt% ash. For Schizochytrium sp., the composition was 62.57 wt% C, 9.55 wt% H, 0.82 wt% N, 0 wt% S, 22.89 wt% O, 0.48 wt% moisture, and 3.69 wt% ash. These results were generally consistent with published data [17,18]. Lower nitrogen and sulfur contents in biomass are preferred, since the biodiesel produced from that biomass will have lower exhaust emissions (such as NOx and SO2) when used in a diesel engine [17]. A lower oxygen percentage in biomass is also desirable, since oxygen can affect the stability of the produced biodiesel [17]. The SCCO2 extraction experiments were operated for 2, 4, and 6 h, and the extraction yield was measured after each time period during operation. The lipid extraction yield was calculated from the mass loss of the dried biomass before and after extraction, using the following equation:
Lipid extraction yield (%) = (W_i − W_f) / W_i × 100, (1)
where W_i is the initial weight of the dried biomass (g) and W_f is the final weight of the dried biomass (g). The equation could overestimate the lipid extraction yield, since nonlipid components such as chlorophyll could also be extracted; however, the equation was considered appropriate for comparing the effectiveness of the extraction method under the tested conditions. Due to the physical characteristics of the extraction system, extracted lipids remaining in the extractor could not be easily measured; therefore the total mass of lipids was not used in the evaluation. For comparison, lipid extraction using the organic solvent method was also investigated. A Soxhlet apparatus was charged with 25 g dry biomass, and 150 mL of methylene chloride/methanol (2:1, v/v) was refluxed for 6 h [19]. The extraction yield was calculated according to (1). An independent samples t-test was conducted to determine whether the difference was significant for the two extraction methods. Investigating the Addition of Methanol and Its Effect on SCCO2 Extraction. Because SCCO2 can be treated as a nonpolar solvent, it will be difficult for SCCO2 to extract lipid complexes that are linked to the proteins on the microalgae cell membrane. Therefore, mixing SCCO2 with a polar solvent such as methanol might be a good way to increase the efficiency of SCCO2 extraction [20]. Methanol was chosen instead of ethanol due to its relative inexpensiveness, lower boiling point, and good solvation potential for polar components. To investigate the effect of adding methanol to the SCCO2 extraction, 200 mL of methanol was added to the 5 L extractor before the extraction, so that the volume ratio of methanol/SCCO2 was 1:25 v/v. Experiments were conducted on both species mentioned previously. A stepwise addition of methanol was also tested, where 200 mL of methanol was added at 0, 2, and 4 h, for a total addition of 600 mL. The operating pressures used were similar to those in the previous experiments. The lipid extraction yield was measured and compared with SCCO2 extraction without the addition of methanol. Investigating the Biomass Loading and Its Effect on SCCO2 Extraction.
As stated previously, many experiments in the literature have focused on small quantities of microalgae biomass. To assess the potential of scaling up the extraction process, the mass of microalgae in the extractor was increased from 200 g to 1 kg while maintaining the porosity and surface area. SCCO2 extraction was evaluated to determine whether similar lipid extraction yields could be achieved at larger biomass loadings. The operating pressures and the amount of methanol added to the extractor were similar to those in the previous experiments, and the residence time of CO2 in the extractor was kept constant as the biomass increased (see Section 2.1). An independent-samples t-test was conducted to determine whether the difference between biomass loadings was significant.

Characterization of the Extracted Lipids. The fatty acid profile of the lipids in microalgae can affect the quality of biodiesel produced from them. Normally, cis-unsaturated fatty acids are favored over saturated fatty acids because the FAMEs derived from cis-unsaturated fatty acids often have advantageous cold-flow properties [21]. The fatty acid profiles of the lipids from the two species have been reported previously [22,23]; therefore, comparing the fatty acid profile of the extracted lipids with the reported data is a good indicator of the lipid extraction effectiveness of SCCO2. The extracted lipids from the two species were characterized using gas chromatography (GC). Before characterization, a derivatization step was conducted to convert the compounds in the lipids to FAMEs: 1 mL of extracted lipid sample was collected in a test tube, and 1 mL of toluene and 2 mL of 1% sulfuric acid in methanol were added. The sealed test tube was stored in a 50 °C oven for 8 h. After cooling, 5 mL of 5% sodium chloride in water was added to the test tube to encourage phase separation, after which 1 mL of the top organic phase was collected in a GC vial for analysis. An Agilent 7890B GC fitted with a DB-23 column (60 m length, 0.15 μm film thickness) and a flame ionization detector was used, with helium as the carrier gas. A FAME mixture (C8–C24; Sigma-Aldrich) was used as the calibration standard, with methyl myristate as the internal standard. The operating conditions of the GC system were selected according to the literature [24].

Kinetic Study of the SCCO2 Extraction. Lipid extraction from microalgae using SCCO2 can be described by first-order kinetics [10]:

$$M_e = M_i \left(1 - e^{-kt}\right) \tag{2}$$

where $M_e$ is the amount of extracted lipids at time $t$ (g/g biomass), $M_i$ is the total lipid content of the microalgae cells (g/g biomass), $t$ is the extraction time (h), and $k$ (h⁻¹) is a time constant correlated with the lipid mass transfer from the microalgae cells to the SCCO2. For a typical mass-transfer problem, the time constant $k$ is a function of the Reynolds number (Re) and the Schmidt number (Sc):

$$k = f(\mathrm{Re}, \mathrm{Sc}) \tag{3}$$

In SCCO2 extraction, the values of the dimensionless numbers Re and Sc depend on pressure, temperature, methanol concentration, and CO2 flow rate. Therefore, the time constant $k$ can be expressed as

$$k = f(P, T, [\mathrm{ME}], Q) \tag{4}$$

where $P$ is the pressure (psi), $T$ is the temperature (°C), $[\mathrm{ME}]$ is the concentration of methanol in the SCCO2 (v/v), and $Q$ is the CO2 flow rate (scfm). To simplify the model, the effect of temperature was eliminated from Eq. (4) because a constant temperature was used in this research.
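To make Eq. (2) concrete, the sketch below evaluates the first-order extraction curve at the sampling times used in the study; the time constants are illustrative values of our choosing, not the fitted ones.

```python
import numpy as np

def extraction_curve(t_h, M_i, k):
    """Eq. (2): extracted lipids (g/g biomass) after t_h hours."""
    return M_i * (1.0 - np.exp(-k * np.asarray(t_h, dtype=float)))

t = np.array([2.0, 4.0, 6.0])   # sampling times used in the study (h)
M_i = 0.25                      # assumed total lipid content (g/g biomass)
for k in (0.2, 0.4):            # illustrative time constants (1/h)
    print(f"k = {k}: yields = {extraction_curve(t, M_i, k).round(3)}")
```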
The extracted lipids were treated as a single substance rather than as a mixture of compounds. It was assumed that all of the extractable lipids could eventually be extracted; therefore, the curves under different operating conditions would all share the same asymptote. $M_i$ for the two microalgae species was assumed to be 0.25 g/g biomass. In addition, the methanol concentration in the SCCO2 was assumed to remain constant (4%) during the extraction process, including during the stepwise addition. To study the effects of $P$, $[\mathrm{ME}]$, and $Q$ on the time constant $k$ and the lipid extraction yield, a first-order (linear) relation was assumed:

$$k = aP + b[\mathrm{ME}] + cQ + d \tag{5}$$

where $a$, $b$, $c$, and $d$ are unknown constants. These four constants were estimated by fitting the kinetic model to the experimental data obtained in the previous sections. The MATLAB programming platform was used to perform the model fitting, and the constants were determined by minimizing the sum of squared errors between the experimental and calculated lipid extraction yields. The minimization was performed using the MATLAB function fmincon. Since the lipid mass transfer coefficients may differ between microalgae species, the constants were estimated separately for the two species.

To validate the kinetic model, SCCO2 extraction data from a previous study by Nobre et al. [25] were also modeled. The authors investigated the lipid extraction yield from the microalga Nannochloropsis sp. under different pressures, flow rates, and ethanol concentrations in a laboratory-scale SCCO2 extraction system (5 cm³ extraction vessel). To perform the model fitting, the ethanol concentration was used in place of [ME] (methanol) in Eq. (5). The data used for parameter fitting included the lipid extraction yields at 40 °C, pressures of 1800, 2900, or 4350 psi, CO2 flow rates of 6.64 × 10⁻³ or 1.18 × 10⁻² scfm, and ethanol concentrations of 5, 10, or 20 wt%. The $M_i$ used for Nannochloropsis sp. was 0.45 g/g biomass, based on the results reported by the authors, and the biomass loading of Nannochloropsis sp. was 1.25 g.

After the constants were obtained, a sensitivity analysis was conducted to analyze the influence of $P$, $[\mathrm{ME}]$, and $Q$ on the simulated results. The sensitivity analysis was carried out by calculating the lipid mass transfer coefficient ($k$) and the lipid extraction yield at 4 h while changing the value of one parameter by 50% and holding the other parameters constant. The conditions used in the sensitivity analysis were 4000 psi of pressure, 200 mL of methanol with stepwise addition, and a CO2 flow rate of 6.8 scfm. For the microalga Nannochloropsis sp., the conditions used were 4350 psi of pressure, a CO2 flow rate of 1.18 × 10⁻² scfm, and an ethanol concentration of 5 wt% [25].

Lipid Extraction Yields of SCCO2 under Different Operating Pressures. Different operating pressures of SCCO2 resulted in a significant difference in lipid extraction effectiveness from Spirulina sp. (Figure 2(a)). As the operating pressure increased, the extraction yield increased, and 4000 psi gave the highest lipid extraction yield. A similar trend was observed in SCCO2 extraction from Schizochytrium sp. (Figure 2(b)). These results suggest that higher operating pressures should be selected when conducting SCCO2 extraction; therefore, 4000 psi was chosen as the operating pressure for subsequent experiments.
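The fitting step described above can be sketched in a few lines. The snippet below is a minimal analog of the authors' MATLAB/fmincon procedure, using SciPy's minimize instead; the observations are placeholder values (not the study's data), and Eq. (5) is taken in the linear form given above.

```python
import numpy as np
from scipy.optimize import minimize

M_I = 0.25  # assumed total lipid content (g/g biomass)

# Columns: P (psi), [ME] (v/v), Q (scfm), t (h), measured yield (g/g).
# These observations are placeholders, not the study's data.
data = np.array([
    (2000, 0.00, 7.5, 6.0, 0.055),
    (3000, 0.00, 7.1, 6.0, 0.075),
    (4000, 0.00, 6.8, 6.0, 0.095),
    (4000, 0.04, 6.8, 6.0, 0.164),
])

def sse(params):
    """Sum of squared errors between model (Eqs. 2 and 5) and observations."""
    a, b, c, d = params
    P, ME, Q, t, y_obs = data.T
    k = a * P + b * ME + c * Q + d           # Eq. (5), linear form
    y_pred = M_I * (1.0 - np.exp(-k * t))    # Eq. (2)
    return float(np.sum((y_pred - y_obs) ** 2))

result = minimize(sse, x0=np.zeros(4), method="Nelder-Mead")
print("fitted a, b, c, d:", result.x)
```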
Lipid Extraction Yields of SCCO2 with Addition of Methanol. As shown in Figure 3, adding 200 mL of methanol as a cosolvent increased the lipid extraction yield of SCCO2 for both species during the first 2 h. After the initial 2 h, however, the benefit of the added methanol diminished and became insignificant after 4 h of extraction, so that no increase in lipid extraction yield was observed after 6 h. These results indicated that the methanol added to the SCCO2 extractor might be washed out with the SCCO2 flow during the extraction process, and they suggested that a single addition of methanol is not an effective strategy. To examine this possibility, a stepwise addition of methanol was applied in subsequent experiments.

SCCO2 lipid extraction with stepwise methanol addition was tested by adding 200 mL of methanol to the extractor every 2 h to keep the methanol concentration in the SCCO2 at 4% (v/v), as shown in Figure 4. The stepwise addition of methanol significantly increased the lipid extraction yield for both species compared with pure SCCO2 extraction: an 80% increase was found for Spirulina sp. and a 72% increase for Schizochytrium sp. These results suggest that stepwise addition compensated for the declining methanol concentration and was more effective than a single addition. Average yields of 0.164 ± 0.006 and 0.188 ± 0.003 g/g biomass were obtained after 6 h of SCCO2 extraction with stepwise methanol addition for Spirulina sp. and Schizochytrium sp., respectively.

Soxhlet Lipid Extraction Using Organic Solvents. Soxhlet extraction of lipids was conducted for both microorganisms and compared with the highest yield obtained from SCCO2 extraction, which was achieved at 4000 psi with stepwise methanol addition (Figure 5). The Soxhlet extraction gave a higher lipid extraction yield for both species. For Schizochytrium sp., the methanol-SCCO2 extraction produced 91.4% of the lipid extraction yield obtained from Soxhlet extraction after 6 h, a difference that was not statistically significant. For Spirulina sp., the methanol-SCCO2 extraction produced 79.6% of the Soxhlet yield, a difference that was statistically significant. While these results indicate that methanol-SCCO2 extraction is not superior to Soxhlet extraction, the methanol-SCCO2 method achieved a comparable extraction yield while avoiding the use of a toxic solvent such as methylene chloride.

Investigating the Scale-Up Potential. To investigate the scale-up potential of methanol-SCCO2 extraction, experiments were conducted with the biomass loading increased to 1 kg while maintaining the other operating conditions (4000 psi, 200 mL of methanol added every 2 h). As shown in Figure 6, increasing the biomass loading from 200 g to 1 kg resulted in a slight decrease in lipid extraction yield for both species: 7.4% for Schizochytrium sp. and 7.8% for Spirulina sp., although neither difference was statistically significant. These results demonstrate that the microalgae biomass loading in SCCO2 extraction could be increased without a significant loss of extraction yield.

Characterizing the Fatty Acid Profile in Extracted Lipids.
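Since several of the comparisons above rest on independent-samples t-tests, a minimal sketch of that test follows; the replicate yields are invented numbers used only to illustrate the call.

```python
from scipy.stats import ttest_ind

# Hypothetical replicate yields (g/g biomass) for two extraction methods.
soxhlet_yields = [0.205, 0.212, 0.210]
scco2_yields = [0.160, 0.168, 0.164]

t_stat, p_value = ttest_ind(soxhlet_yields, scco2_yields)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant difference
```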
For Spirulina sp., the fatty acids extracted by the Soxhlet method differed significantly from those extracted by SCCO2, particularly in the composition of decanoic acid (C10), palmitic acid (C16), and total unsaturated fatty acids (Table 1). The total unsaturated fatty acids were the sum of myristoleic acid (C14:1), palmitoleic acid (C16:1), oleic acid (cis C18:1), linoleic acid (cis C18:2), and linolenic acid (C18:3). The Soxhlet method was presumed to be the most representative of complete lipid extraction; the fatty acid compounds were mostly C16 and C18, and a total of 47.25% unsaturated fatty acids was found in the extracted lipids. The SCCO2 method extracted significantly lower amounts of unsaturated fatty acids than the Soxhlet method, while greatly increasing the content of C10 compounds. However, when SCCO2 was combined with methanol, the fatty acid profile of the extracted lipids was similar to that obtained with the Soxhlet method, with C16 and C18 as the main compounds and a total unsaturated fatty acid content of 45.11%. These results indicate that the content of unsaturated fatty acids in the extracted lipids was strongly affected by the polarity of the extraction fluid: increasing the polarity of the extraction fluid increased the total unsaturated fatty acids extracted. For Schizochytrium sp., the three extraction methods used in this research produced similar fatty acid compositions with no significant differences (Table 2). Unlike Spirulina sp., no unsaturated fatty acids were found in Schizochytrium sp.; the difference induced by the polarity of the extraction fluid was therefore insignificant, which resulted in similar fatty acid profiles.

Kinetic Study of the SCCO2 Extraction. The values of a, b, c, and d estimated by model fitting with MATLAB are shown in Table 3. The parity plots of calculated versus experimental lipid extraction yields for all experiments in this research are presented in Figure 7. For Schizochytrium sp., an average error of 12.7% was found between the experimental data and the calculated values, while for Spirulina sp. the average error was 16.6%. Figure 8 compares the simulated and experimental lipid extraction yields at 2000 and 4000 psi, as well as at 4000 psi with the addition of methanol. Generally good agreement was observed for both species. For Spirulina sp., however, the simulated results were more accurate at high operating pressures; therefore, 4000 psi was chosen as the operating pressure for the sensitivity analysis. Applying the kinetic model to the results of Nobre et al. [25], the estimated values of a, b, c, and d are shown in Table 3. The parity plot of calculated versus experimental lipid extraction yields (Figure 9) showed an average error of 35.6%, indicating that the model was less accurate for the SCCO2 extraction of Nannochloropsis sp. However, the agreement between simulated values and experimental data improved when the extraction was conducted at high operating pressures (Figure 10). The results of the sensitivity analysis suggest that the influences of pressure, methanol concentration, and CO2 flow rate on the lipid extraction yield were species-dependent (Table 4).
However, for all species the methanol (or ethanol) concentration in the SCCO2 had the strongest influence on the lipid extraction yield, followed by the operating pressure. While the CO2 flow rate showed some influence on the lipid extraction yield for Schizochytrium sp., it was insignificant for Spirulina sp. and Nannochloropsis sp.

Discussion

In our study, the effectiveness of SCCO2 lipid extraction increased with increasing pressure for both species. This is an expected result and matches findings published by other researchers [26]. It is generally believed that, at a constant temperature, increasing the pressure increases the density of SCCO2, which increases the solubility of lipids [8]. A higher operating pressure therefore gives SCCO2 greater lipid extraction power. Accordingly, the highest operating pressure that the equipment could sustain, 4000 psi, was chosen as the optimal operating pressure for subsequent experiments.

Adding methanol as a cosolvent in SCCO2 extraction was shown to increase the lipid extraction yield, at least for the first 2 h. The improved extraction yield can be explained by the polarities of CO2 and methanol. The CO2 molecule is nonpolar and can attach to nonpolar acylglycerols in the microalgae cytoplasm through van der Waals forces during the extraction process [27]. However, some of the acylglycerols in microalgae are associated with proteins on the cell membrane via hydrogen bonds, and van der Waals forces are not strong enough to disrupt these bonds. Methanol molecules, in contrast, are polar and can readily form hydrogen bonds with the proteins on the cell membrane, displacing the lipids and leaving them free to be extracted by the SCCO2 [27], thus increasing the lipid extraction yield when methanol was added. Nevertheless, the effectiveness of the methanol addition decreased over time. It was assumed that the methanol added to the extractor was washed away with the SCCO2 flow during the extraction process, since methanol dissolves readily in SCCO2. Because the SCCO2 returns to the gas phase in the separator, methanol would accumulate there, reducing the methanol concentration in the extractor. To overcome this problem, a stepwise addition of methanol was applied, and the results indicated that adding methanol sequentially to the extractor gave a higher extraction efficiency than adding it all at once.

Soxhlet extraction using methylene chloride/methanol (2:1 v/v) has been reported to be a highly effective lipid extraction method and a good substitute for the Bligh and Dyer method owing to its lower toxicity. The lipid extraction yields after 6 h of Soxhlet extraction reached 0.21 and 0.22 g/g biomass for Spirulina sp. and Schizochytrium sp., respectively. These results were consistent with the lipid contents reported on the product labels (below 33 wt% for Spirulina sp. and 25 wt% for Schizochytrium sp.). The methanol-SCCO2 extraction method was not significantly different from, or only slightly less effective than, the Soxhlet extraction. Furthermore, the methanol-SCCO2 extraction excelled in its low toxicity and low organic-solvent requirement, since only 600 mL of methanol was used for a 5 L extraction chamber that can be loaded with 1 kg of biomass (0.6 mL/g).
By comparison, the Soxhlet extraction required 100 mL of methylene chloride and 50 mL of methanol for every 25 g of biomass (6 mL/g). The use of methylene chloride, which is considered carcinogenic, was also avoided. Overall, the results indicated that the methanol-SCCO2 extraction method can produce a high lipid extraction yield and might be a good replacement for organic-solvent extraction. Further improvements could also be considered to increase the efficiency of the methanol-SCCO2 method, such as optimizing the amount of methanol added at each stage of the stepwise addition or optimizing the operating temperature of the extraction system. Increasing the pressure could also be an option, although it may increase capital and operating expenses.

The effect of biomass loading was also investigated, and a statistically insignificant decrease in lipid extraction yield was found when the biomass loading was increased from 200 g to 1 kg. The decrease in yield may be explained by the fact that increasing the biomass loading increases the packing density of the microalgae. This could hinder the overall extraction, as the lipids may re-adsorb onto the microalgae surface or the packing may cause nonhomogeneous extraction through fluid channeling effects [28]. However, the decrease in yield was less than 8% for both species, implying that scale-up can be successful and that the methanol-SCCO2 extraction method is feasible for 1 kg biomass loadings with our apparatus.

[Displaced figure caption (Figure 10): Simulated (lines) and experimental (symbols) lipid extraction yields, in g/g biomass versus time (h), from the microalga Nannochloropsis sp. [25] under different conditions. Red square: 2900 psi, 6.64 × 10⁻³ scfm CO2 flow rate, r² = 0.960; blue circle: 4350 psi, 6.64 × 10⁻³ scfm CO2 flow rate, r² = 0.931; green triangle: 4350 psi, 1.18 × 10⁻² scfm CO2 flow rate, 5 wt% ethanol, r² = 0.802; black inverted triangle: 4350 psi, 1.18 × 10⁻² scfm CO2 flow rate, 20 wt% ethanol, r² = 0.982.]

For Spirulina sp., the fatty acid compositions obtained by the Soxhlet and methanol-SCCO2 methods supported published data [22,29], while the SCCO2 extraction without methanol showed a significant decrease in the extraction of palmitic acid and total unsaturated fatty acids. These results indicate that SCCO2 alone was insufficient to obtain the ideal fatty acid composition from Spirulina sp., because of its ineffectiveness in targeting the lipids associated with cell membrane proteins. Adding methanol to the SCCO2 increased its extracting power, allowing all types of fatty acids in the cells to be extracted. For Schizochytrium sp., the three extraction methods did not show significant differences, and the results supported published data except for the composition of DHA [23,30]. Since the microalgae used in these experiments were obtained after a DHA extraction process, the absence of DHA was expected. In general, methanol-SCCO2 extraction recovered lipids with a fatty acid composition similar to that obtained by Soxhlet extraction, demonstrating the effectiveness of this method.

In the kinetic study of the SCCO2 extraction, the estimated values of a, b, c, and d varied greatly as a function of species and operating parameters.
However, the sensitivity analysis of the kinetic modeling showed similar results for all species, indicating that both the operating pressure and the concentration of methanol (or ethanol) in the SCCO2 greatly affected the lipid extraction yield. Therefore, to increase the lipid extraction yield, adding a polar modifier (such as methanol) to the SCCO2 process might be preferable to increasing the pressure, since higher pressure would also increase the operating cost. On the other hand, increasing the CO2 flow rate is not recommended, since it had little or no influence on the lipid extraction yield or the mass transfer coefficient. Similar results were found by Nobre et al. [25], who noted that the CO2 flow rate had no influence on the lipid extraction yield.

Conclusions

SCCO2 combined with methanol demonstrated high efficiency in lipid extraction from microalgae, as well as high potential for process scale-up. The high lipid extraction yield and the fatty acid composition of the extracted lipids indicated that the methanol-SCCO2 extraction method can serve as a good substitute for the Soxhlet extraction method while significantly reducing the requirement for organic solvents. At the same time, the avoidance of toxic organic solvents such as methylene chloride showed that methanol-SCCO2 extraction is safer than organic-solvent extraction, suggesting that it might be a preferable method for commercial-scale extraction. It should be noted that dry biomass was used in this research. Since the drying process is typically energy-intensive, it would be preferable to use wet biomass in the extraction process, and future research on the extraction of wet biomass is needed. In addition, further studies on optimizing the rate of methanol addition during the extraction process, as well as on temperature effects, should be conducted to optimize both the extraction efficiency and the economic feasibility of supercritical CO2 lipid extraction from microalgae.

Data Availability

The figures and tables used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Cell activation-based screening of natively paired human T cell receptor repertoires

Adoptive immune therapies based on the transfer of antigen-specific T cells have been used successfully to treat various cancers and viral infections, but improved techniques are needed to identify optimally protective human T cell receptors (TCRs). Here we present a high-throughput approach to the identification of natively paired human TCRα and TCRβ (TCRα:β) genes encoding heterodimeric TCRs that recognize specific peptide antigens bound to major histocompatibility complex molecules (pMHCs). We first captured and cloned TCRα:β genes from individual cells, ensuring fidelity using a suppression PCR. We then screened TCRα:β libraries expressed in an immortalized cell line using peptide-pulsed antigen-presenting cells and sequenced activated clones to identify the cognate TCRs. Our results validated an experimental pipeline that allows large-scale repertoire datasets to be annotated with functional specificity information, facilitating the discovery of therapeutically relevant TCRs.

Low-throughput approaches, including single-cell sorting and limiting dilution [24-28], and even random α:β pairing combined with functional screening of small T cell populations [29], have been used effectively to discover clinically relevant TCRs. Affinity maturation has also been used to generate therapeutic TCRs, albeit with an attendant risk of serious off-target reactivity against autologous pMHCs [30-32]. Another study reported similar single-cell cloning and coculture techniques to interrogate large-scale libraries comprising natively paired TCRs, but in this case, engagement of soluble pMHCs was used prior to coculture with donor-matched antigen-presenting cells (APCs) [21]. Given the important differences between pMHC recognition and T cell activation [33], as well as the potential difficulties associated with generating recombinant pMHCs for every peptide antigen, a more direct screening method using cell-based coculture could be preferable for mapping functional features of the human TCR repertoire. For this purpose, we developed a high-throughput platform incorporating single-cell TCR sequencing and functional screening directly against cell-expressed antigens, using infectious mononucleosis (IM) as a disease model [34-36]. Our workflow comprised native TCR gene cloning into lentiviral display vectors, followed by activation-based screening of the expression libraries in SKW3 cells [37,38] and the subsequent identification of antigen-specific TCRs via high-throughput sequencing (HTS). The ability to screen thymically selected repertoires comprehensively and repeatedly in this manner should expedite a path to immune discovery and personalized medicine.

Results

High-throughput gene capture and screening of native TCRα:β pairs. In a previous study, we used an emulsion-based sequencing platform to capture natively paired TCRα:β cDNA amplicons, which were subsequently cloned into expression vectors, displayed in Jurkat cell libraries, and screened using multimeric pMHCs in conjunction with FACS and HTS [34]. This methodology was founded on our established techniques for sequencing, cloning, and screening B cell receptors (BCRs) [39-43]. In the present study, we developed a modified protocol to display natively paired TCRα:β libraries in SKW3 cells, enabling activation-based screening and the characterization of functionally optimal TCRs.
The natively paired TCRα:β amplicons interrogated here were obtained previously from two individuals with IM (Fig. 1A) [34]. Briefly, in the earlier study, we used a flow-focusing device to encapsulate single T cells inside emulsion droplets containing poly-dT-coated magnetic beads, which captured polyadenylated mRNAs (Fig. 1B) [34,40,41,43-45]. Magnetic beads with colocalized TCRα and TCRβ mRNAs were then recovered and used as templates in an overlap extension RT-PCR that physically linked the single-cell-derived TCRα and TCRβ genes (Fig. 1B) [34,46]. In the current study, the resulting TCRα:β libraries were PCR-amplified to add restriction enzyme sites for cloning into a lentiviral expression vector and subsequently transduced into SKW3 cells (Fig. 1C,D) for activation-based screening via FACS and HTS, linking functionality with the gene sequences of individual TCRs (Fig. 1E,F).

Evaluation of restriction enzyme cloning sites for TCRα:β genes. We tested and validated a set of silent and non-silent mutations that introduced restriction enzyme sites into the TCRα and TCRβ variable leader and constant regions (Fig. 2, Supplementary Tables 1 and 2). The effects of these mutations were evaluated in a mammalian lentiviral display system (Fig. 3A,B). Expression constructs encoding a single TCR (TM9) specific for the human leukocyte antigen (HLA)-B*07:02-restricted HIV-1 Nef epitope RPQVPLRPM (RM9) were packaged into lentiviral particles and transduced into J.RT3-T3-5/CD8+ (J.RT3/CD8) cells, which were subsequently purified on the basis of mCherry expression via FACS. A monoclonal anti-human TCR antibody and fluorescently labeled tetrameric complexes of RM9/HLA-B*07:02 were used to quantify expression of the TM9 TCR. Four optimal mutations were selected for inclusion in the final expression construct to enable direct and efficient library-scale cloning of HTS-captured TCRs (Fig. 3 and Supplementary Fig. 1).

Generation of TCRα:β libraries expressed in SKW3 cells. TCRα:β amplicons incorporating the optimal mutations were cloned into the lentiviral pLVX-EF1α-IRES-mCherry vector as reported previously [34]. Briefly, we performed an overlap extension RT-PCR incorporating multiplex primers to amplify and fuse single-cell-derived TCRα and TCRβ chains via a linker sequence [34,46]. We also used a suppression PCR to prevent the random association of unfused TCRα and TCRβ amplicons during the bulk semi-nested PCR (Fig. 4A) [10]. As expected, agarose gel electrophoresis revealed dominant unlinked TCRα and TCRβ chain bands after the overlap extension RT-PCR (Fig. 4B), which required specific amplification of the subdominant linked TCRα:β amplicons in the mixture. In contrast to our previous fully nested PCR strategies for amplifying linked heavy and light chains from expressed BCRs [34,39,41-44], our TCR cloning strategy was based on unidirectional expression, introducing the potential for unlinked TCRα and TCRβ chains to associate randomly during the bulk semi-nested PCR. To mitigate this issue, we tested a suppression PCR [10]. This approach uses blocking oligonucleotides encoding nonsense sequences at the 5' ends to prevent amplified single TCRα and TCRβ cDNAs from associating via overlap extension, eliminating sequence homology without affecting natively paired TCRα:β amplicons previously linked in the single-bead emulsion (Fig. 4A-C). We validated this strategy using a series of control PCRs.
Similar amounts of linked TCRα:β material were detected using either unlinked TCRα and TCRβ amplicons (Fig. 4C, PCR #3) or fully linked TCRα:β genes (Fig. 4C, PCR #4) as templates in standard semi-nested PCRs. In the presence of blocking primers, however, the overlap extension RT-PCR product was amplified successfully (Fig. 4C, PCR #1), whereas unfused amplicons no longer yielded a linked TCRα:β product (Fig. 4C, PCR #2). We also compared the prevalence of EBV-specific TCRs from our two donors after cloning the corresponding HTS-based libraries into J.RT3/CD8 cells using either the standard semi-nested PCR or the suppression PCR [34]. In line with our expectations based on the elimination of non-natively paired TCRs, the frequencies of J.RT3/CD8 cells that bound the cognate pMHC tetramer were an order of magnitude higher after cloning with the suppression PCR (Supplementary Fig. 2).

Activation-based screening of TCRα:β expression libraries. TCRα:β amplicon libraries were cloned into the lentiviral vector expression system with all the elements required for full-length TCRα:β expression on the cell surface [34], including an internal ribosomal entry site (IRES) and an mCherry marker to detect successful transduction via FACS. After cloning the amplified TCRα:β genes into the lentiviral vector backbone using AgeI and BstBI, the linker region was swapped using MluI and SpeI to incorporate a linear DNA construct containing the remaining portions of the TCRβ constant region, a ribosomal skip teschovirus-derived sequence (P2A), and a modified TRAV8-2 leader sequence incorporating an MluI site (Fig. 2, Supplementary Fig. 4) for TCRα expression [34,47]. We generated at least 10⁶ transformants in each cloning step to maintain library diversity (Supplementary Figs. 5 and 6). The full-length TCRα:β expression plasmids were then packaged into lentiviral particles and transduced into SKW3 cells for functional evaluation, illustrated here with reference to Donor 1. TCRα:β-SKW3 libraries expressing mCherry were expanded after purification via FACS. In contrast to primary T cells, these immortalized libraries can be screened over multiple rounds of panning against APCs. T2 cells transduced to express HLA-B*08:01 (T2-B8) were pulsed with the EBV BZLF1 peptide RAKFKQLL (RAK) at a concentration of 1 mM and cocultured with mCherry+ TCRα:β-SKW3 cells for 24 h. Antigen-specific TCRα:β-SKW3 cells were detected via upregulation of the activation marker CD69 (Fig. 5). Coculture parameters were optimized using the EBV EBNA3A-specific LC13 TCR, which recognizes the FLRGRAYGL (FLR) epitope restricted by HLA-B*08:01 (Fig. 5A) [48]. RAK-specific TCRα:β-SKW3 cells were enriched >10-fold compared with the presort libraries after a single round of purification via FACS (Fig. 5B). In addition, there was minimal background activation and minimal reactivity against FLR-pulsed T2-B8 cells, despite initial coculture in the presence of donor-mismatched MHCs (Fig. 5B).

Rapid bioinformatic detection of antigen-specific TCRs. HTS-based library analyses were used to identify RAK-specific TCRs. Data were processed as reported previously [34]. Briefly, raw FASTQ files were quality-filtered and annotated using MiXCR [49]. Out-of-frame V(D)J reads were excluded from the dataset, and productive in-frame reads were paired by Illumina ID. Reads were compiled using CDR3 and VJ gene identity. CDR3β sequences were clustered to 96% nucleotide identity after excluding singleton reads to minimize errors introduced via HTS and/or PCR.
CDR3β amino acid sequences were used to track TCR clones [42]. The frequency of each individual clone in each sorted sample was calculated to evaluate the functional performance of each library. We also calculated the enrichment ratio for each CDR3β. Two known antigen-specific TCR clones identified by sequence analysis were enriched >10-fold compared with the parental mCherry+ TCRα:β-SKW3 library and comprised the bulk of the response against cell-displayed RAK/HLA-B*08:01 (Supplementary Table 4).

Discussion

T cell immunotherapy holds great promise for the treatment of various cancers and infectious diseases, but comprehensive molecular platforms are required to characterize the antigen specificity, functionality, and translational potential of individual TCRs. In this study, we developed and validated a high-fidelity cloning strategy enabling activation-based screening of natively paired TCRα:β gene libraries against peptide-pulsed APCs. These efforts allowed us to link the efficacy of signal transduction in response to cognate antigen encounter with sequence information across the somatically rearranged genome via an integrated experimental and bioinformatic pipeline, facilitating the discovery of naturally selected and optimally potent TCRs.

Our screening technology was adapted from previous work that enabled the physical identification of antigen-specific TCRα:β clones based on the engagement of multimeric pMHCs [34]. We report here the design and validation of mutations enabling the introduction of restriction enzyme sites for high-throughput cloning of TCRα:β genes into mammalian display vectors, together with the elimination of randomly associated TCRα and TCRβ genes via a suppression PCR. The latter greatly enriched our libraries for natively paired TCRs [10]. Although other methods can be used to enhance the fraction of paired TCRα:β genes [39-43], the incorporation of a suppression PCR into our workflow proved to be compatible with the use of standard in-line cassettes for gene expression, harmonizing with established methods for single-cell isolation [15,29,50-53] and microfluidic encapsulation [16,54]. Moreover, our libraries were genetically diverse, each containing an average of 17,241 unique clones after stringent quality filtering and clustering of homologous TCRs.

Importantly, our approach enabled the facile cloning of physically linked TCRα:β genes into various display systems, including Jurkat cells, which can be screened using multimeric pMHCs [34], and SKW3 cells, which can be screened using the activation-based method reported here. Such immortalized, renewable TCRα:β library screening techniques will be essential for the analysis of peptide specificity, MHC restriction, and the biophysical properties of the corresponding TCRs, which ultimately govern T cell behavior in vivo. However, we found it difficult to barcode cells for the inclusion of singleton TCRα:β genes, which compromised our ability to calculate the efficiency of recovery via HTS. We are currently planning to resolve these issues by incorporating single-cell barcoding techniques and to extend the scope of our work by recovering information on gene transcription and protein expression in association with individual TCRs. Importantly, our SKW3 cell expression system allowed us to screen TCRα:β gene libraries functionally, measuring responsiveness via the upregulation of CD69. No such activation was observed using Jurkat cell lines in our earlier study [34].
This advance is critically important for immune discovery, because the ability to deliver an activation signal is not equivalent across all antigen-specific TCRs [33]. It can also be difficult in some cases to produce large quantities of soluble pMHCs [21,29,55]. A key feature of our approach was the standardized assessment of functionality using an immortalized cell line. Library screening on this basis can eliminate potential bias arising from the heterogeneity of primary T cells, although it should be noted that other effector readouts may also afford high levels of sensitivity [56,57]. In addition, our method could be adapted for use with cancerous or infected cells rather than peptide-pulsed APCs. Accordingly, it should be feasible to screen for reactivity against target cells presenting biologically relevant densities of disease-associated pMHCs, thereby enhancing the discovery of protective and translationally efficacious TCRs.

There were some limitations to our approach. In particular, the gene capture and sequencing process revealed that repertoire diversity was reduced to around tens of thousands of TCRα:β clonal clusters per library after excluding singletons (which are more error-prone than TCRα:β clones observed more than once). The degree of loss from the native repertoire was difficult to quantify, because our native immune libraries contained an undetermined number of TCRα:β clones, and our conservative bioinformatic filtering excluded singletons that could represent bona fide TCR clones. The natural occurrence of the restriction enzyme sites used in the cloning process also likely resulted in the destruction of a small fraction of TCRs (estimated, based on prevalence, at <1%). It is further notable that we transduced our TCRα:β libraries into SKW3 cells using viral particles at a very low exposure frequency, estimated in the region of 1-5%. This approach was designed to ensure that each cell integrated only one TCR. Our future efforts will focus on the use of CRISPR-targeted TCRα:β engineering to improve efficacy by integrating TCR genes at defined sites, an approach recently demonstrated to be highly effective in similar activation-based screening assays for the analysis of libraries displaying synthetically generated TCRs [58].

In summary, we have developed a high-throughput platform for the identification of functionally responsive antigen-specific TCRs. Our workflow builds substantially on previous reports that linked somatically rearranged gene sequences with antigen specificity via library screening against soluble pMHCs [34]. In particular, we anticipate that molecular-scale functional screening will accelerate bench-to-bedside immune discovery, facilitating the clinical delivery of personalized therapies for various diseases via the rapid isolation of naturally selected and highly potent antigen-specific TCRs.

Methods

Introduction of restriction enzyme cloning sites. A monoclonal TCR (TM9) specific for the HLA-B*07:02-restricted HIV-1 Nef epitope RM9 [59] was expressed in the lentiviral vector pLVX-EF1α-IRES-mCherry (Takara Bio, Mountain View, CA) to evaluate the functional performance of the restriction enzyme cloning site mutations (Supplementary Tables 1 and 2). Leader sequences were modified for TRAV and TRBV. Restriction enzyme sites were introduced as detailed in Supplementary Tables 1 and 2.
Expression of the TM9 TCR was quantified via flow cytometry using anti-human TCRα/β-Alexa Fluor 488 (clone IP26; BioLegend, San Diego, CA) and fluorescently labeled tetrameric complexes of RM9/HLA-B*07:02.

Human samples and cell culture. Donor 1 presented with high fever, fatigue, body aches, and headache, with a maximum illness severity of 3, as described previously [34,35]. Donor 2 presented with fever, tender cervical lymph nodes, sore throat, and fatigue. These donors were enrolled in a prospective study of primary EBV infection at the University of Minnesota (IRB 0608M90593) [35]. Venous blood samples were processed via density gradient centrifugation over ACCUSPIN System-Histopaque-1077 (Sigma-Aldrich, St. Louis, MO) to collect PBMCs, which were subsequently cryopreserved at 1 × 10⁷ cells/mL in heat-inactivated fetal bovine serum (Thermo Fisher Scientific, Waltham, MA) containing 10% dimethyl sulfoxide (Sigma-Aldrich, St. Louis, MO). PBMCs were thawed and density-adjusted to 0.5 × 10⁶ cells/mL.

Generation of natively paired TCRα:β expression libraries. TCRα:β cDNA libraries generated in a previous study [34] were amplified and modified to incorporate restriction enzyme sites using a two-step, semi-nested PCR. An initial semi-nested suppression PCR incorporating blocking oligonucleotides complementary to the 3' ends of the unfused TCRα and TCRβ products was performed using a HotStart GoTaq Polymerase System (Promega, Madison, WI). A second semi-nested PCR was then performed using a KAPA HiFi HotStart PCR Kit (Roche, Basel, Switzerland). PCR products were resolved by electrophoresis on a 1.5% SYBR Safe agarose gel (Thermo Fisher Scientific, Waltham, MA) and recovered by gel purification.

PCR products were cloned into a modified version of the commercially available pLVX-EF1α-IRES-mCherry vector (Takara Bio, Mountain View, CA). TCR amplicons and the expression vector were digested with BstBI and AgeI, and the digestion products were gel-purified and ligated using T4 DNA Ligase (New England Biolabs, Ipswich, MA). Ligation products were purified using a DNA Clean & Concentrator Kit (Zymo Research, Irvine, CA) and transformed via electroporation into competent MegaX DH10B T1 Electrocomp Cells (Thermo Fisher Scientific, Waltham, MA). Plasmids were purified using a ZymoPURE II Plasmid Maxiprep Kit (Zymo Research, Irvine, CA). The rest of the expression cassette was then introduced as an insert between the variable regions of the TCRβ and TCRα genes using SpeI and MluI (New England Biolabs, Ipswich, MA). The insert contained the remaining portion of the TCRβ constant region, a P2A translation skip motif, and a modified version of the TCRα leader peptide sequence containing an MluI site (Fig. 2) to enable full expression of the corresponding heterodimeric TCRs [34].

The diluted transfection reagent was then added to the master mix and incubated for 15-20 min at room temperature. The resultant mixture was added dropwise to the flask containing 293FT cells and incubated for 3 days at 37 °C. Supernatants containing lentivirus were centrifuged at 800 × g for 10 min and added at a 3:1 ratio to Lenti-X Concentrator (Takara Bio, Mountain View, CA) in 50 mL polypropylene centrifuge tubes (Thermo Fisher Scientific, Waltham, MA). Lentivirus-concentrator mixtures were incubated overnight at 4 °C and then centrifuged at 1,500 × g for 45 min at 4 °C. The pellets were resuspended in 1 mL of RPMI 1640 medium and stored in two aliquots at −80 °C.
For each library transduction, 3 × 10⁶ unmodified SKW3 cells (DSMZ, Braunschweig, Germany) were seeded in 3 mL of RPMI 1640 medium in a single well of a 6-well tissue culture plate (Thermo Fisher Scientific, Waltham, MA), and 400 μL of rapidly thawed lentivirus stock was added in the presence of polybrene at a final concentration of 4 μg/mL (Sigma-Aldrich, St. Louis, MO). Cells were incubated overnight at 37 °C with 5% CO2. After transduction, cells were centrifuged at 500 × g for 5 min, resuspended in 10 mL of prewarmed RPMI 1640 medium, transferred to T25 culture flasks, and incubated for 3 days at 37 °C with 5% CO2. Cells were then washed twice with PBS and sorted for internal mCherry expression via FACS.

PCR-based recovery of TCRβ genes. CD69+ TCRα:β-SKW3 cells were sorted via FACS. For molecular analysis of each library, mRNA was extracted from 2 × 10⁶ purified TCRα:β-SKW3 cells using a Direct-zol RNA Kit (Zymo Research, Irvine, CA). TCRβ VDJ regions were amplified using a set of primers targeting the modified TRBV15-1 leader region and TRBC. RT-PCR was performed using SuperScript III Reverse Transcriptase and Platinum Taq (Thermo Fisher Scientific, Waltham, MA). A second primer-extension PCR added adaptors carrying a unique molecular identifier for each sample. cDNA amplicons were run on 1.5% agarose gels, and bands at ~450 bp were purified using a Gel DNA Recovery and DNA Clean & Concentrator Kit (Zymo Research, Irvine, CA). Libraries were sequenced using a 2 × 300 MiSeq System (Illumina, San Diego, CA).

Bioinformatic analysis. Our bioinformatic pipeline was designed to ensure high-quality data by reducing sequence errors introduced via HTS and/or PCR [34,39-44]. First, raw sequences were quality-filtered using Fastx-toolkit/0.0.14 (http://hannonlab.cshl.edu/fastx_toolkit/) to retain only reads with a Phred quality score of at least 20 (i.e., 99% base call accuracy) across at least 50% of each read. Next, V, D, and J gene annotations were performed using MiXCR/v2.1.12 [49]. Out-of-frame V(D)J combinations were excluded from the dataset, and productive in-frame junction sequences were paired by Illumina read ID. Reads were then compiled based on exact-match CDR3 nucleotide sequences and V(D)J gene identity. CDR3β sequences were clustered to 96% nucleotide identity (ignoring terminal gaps) using USEARCH/v5.2.32 [60], and only clusters with ≥2 TCRα:β nucleotide reads were included in the final dataset. Last, full-length TCRα:β sequences were recreated by stitching the CDR3α or CDR3β sequences together with the respective TRAV or TRBV genes, which had been mapped using the International ImMunoGeneTics (IMGT) Information System database (library imgt.202141-1) [61].
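To illustrate the clone-frequency and enrichment-ratio arithmetic described in the Results, a minimal sketch follows. The CDR3β strings and read counts are invented, and the code is our illustration, not the authors' pipeline.

```python
from collections import Counter

# Invented read counts per CDR3β clone in the presort and CD69-sorted samples.
presort = Counter({"CASSLGQAYEQYF": 20, "CASSPGTGELFF": 300, "CASSQDRGYTF": 180})
cd69_sorted = Counter({"CASSLGQAYEQYF": 900, "CASSPGTGELFF": 150, "CASSQDRGYTF": 40})

def frequencies(counts):
    """Per-clone read frequency within one sample."""
    total = sum(counts.values())
    return {cdr3: n / total for cdr3, n in counts.items()}

f_pre, f_post = frequencies(presort), frequencies(cd69_sorted)
for cdr3 in f_pre:
    ratio = f_post.get(cdr3, 0.0) / f_pre[cdr3]   # enrichment ratio
    flag = "  <-- enriched >10-fold" if ratio > 10 else ""
    print(f"{cdr3}: {ratio:.1f}x{flag}")
```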
Monolithic Porous Organic Polymer-Photocatalyst Composites for Applications in Catalysis

This Review provides a perspective on porous organic polymer-photocatalyst composites obtained by coupling semiconductors with hydrophilic or hydrophobic polymers that do not modify the properties of the embedded photocatalysts but can influence the efficiency of the overall catalytic process. Particular attention is given to polymer composites in the form of monolithic hydrogels, sponges, or aerogels obtained by dissolving the polymer in a solvent containing the dispersed photocatalyst, inducing gelation or solidification of the solution, and subsequently removing the solvent by a drying process. The photocatalytic applications discussed here cover H2 evolution from water splitting, CO2 reduction, and organic synthesis. Indeed, the main aim of this Review is to outline an alternative perspective to the highly studied environmental photocatalytic applications, highlighting the photoactive properties of these composites arising from the incorporation of semiconductors into the 3D porous structure of organic polymers. Finally, the challenges and potential advances associated with the use of porous organic polymer-photocatalyst composites in future scientific research are outlined.

Introduction

Nowadays, the continuous consumption of fossil fuels has caused, and is still causing, a sharp increase in environmental pollution and energy depletion. [1] Heterogeneous photocatalysis can be seen as one of the most promising strategies for addressing these challenges because, through a photocatalytic process, solar energy is converted into chemical energy. In this way, it is possible to activate redox reactions under mild conditions using suitable illumination sources, such as visible light or sunlight. [2] Over the years, photocatalysis has made significant progress and has emerged as a promising technology for generating chemical fuels, such as H2 from the water splitting reaction and formic acid, formaldehyde, and methanol from CO2 photoreduction, using photons as the only energy input. [3] Moreover, in the context of industrial processes, the synthesis of many organic compounds generally requires high temperatures and high operating pressures. Consequently, the development of photocatalytic synthetic reactions that use suitable light sources to drive chemical processes at significantly lower temperatures is useful for meeting the needs of "green chemistry". [4] Indeed, photocatalysis offers access to reaction pathways and intermediates that are not accessible under thermal conditions. [5] Moreover, it is possible to carry out the "one-pot" synthesis of organic compounds, minimizing the number of operating units (in terms of reactors and equipment for purification of the reaction product) necessary to obtain the desired product. [4a,6,7]

It is widely reported that nanometer-sized powder photocatalysts guarantee high photocatalytic activity due to their high surface areas. [1] However, it should be emphasized that nanoparticulate powders not only have a strong tendency to agglomerate (especially in the case of photocatalytic reactions performed in a liquid medium) but also require complex operations to recover the photocatalyst for subsequent reuse. [8] To eliminate these disadvantages, it is desirable to immobilize the semiconductors on suitable monolithic supports that combine both macro- and nanoscale features in order to ensure high photocatalytic efficiency. [1]
In particular, this goal could be achieved by entrapping the photocatalysts in a porous macroscopic support. Generally speaking, natural porous structures (such as the sponges of aquatic animals of the phylum Porifera [9]) could serve as inspiration for the design of novel porous materials, and, over the years, new preparation methods for hypercrosslinked porous polymers (HCPs) [10] and porous organic polymers (POPs) have been developed. [11] Among them, POPs have recently been the object of scientific research because these materials can combine the typical properties of porous structures with those of polymers. [9] Indeed, the main advantages of POPs with respect to non-porous materials are their large specific surface area, well-defined porous structure, and the availability of a wide variety of synthetic methods for their preparation. [9] For these reasons, different POPs have been prepared and tested in several applications, including CO2 adsorption, [12] energy conversion and storage, [13] catalysis, [14] and drug delivery. [15]

Incorporating a photocatalyst into a polymer host matrix is a very common practice. The resulting hybrid materials are generally defined as a class of polymer-based composites, and many reviews dedicated to this material class have recently appeared in the literature. [16] Catalytically active composites based on POPs are among the most studied of this class. [11,17] Within the large class of photocatalytically active polymer-based composite materials, the polymeric matrix can be either "active", if the polymer modifies the photocatalyst properties, or "passive", if the matrix only immobilizes the photocatalyst. Examples of active matrices are conductive polymers, which act as photoelectron transfer pathways and prolong charge separation in the photocatalyst. [16d] Furthermore, polymeric matrices hinder the aggregation of catalyst particles, [16d,19] confer resistance in aggressive environments, and can act as selective filters towards the species on which the catalyst acts (e.g., polar (non-polar) matrices allow the passage of only polar (non-polar) species). [20]

The polymer matrices of catalytically active composites are commonly porous materials with a high surface area. In fact, a catalyst homogeneously dispersed on the accessible surfaces of a highly porous matrix has a high active surface per unit volume of the composite. Therefore, porous polymer-based composites with embedded photocatalysts can exhibit reaction efficiencies similar to, or better than, those of photocatalytic powders suspended in the reaction medium. [20b,d,21] The 3D architecture of the polymeric matrix can be obtained in many different ways, both during the polymerization process that generates the matrix itself (typical bond-forming processes are, for example, radical, anionic, cationic, and condensation reactions) [16f,17d,22] and by suitably treating a commercial or custom-designed polymer. [16f,20e] Recently, 3D printing processes have also been employed to obtain porous polymeric matrices. [23] When the formation of the 3D polymer matrix takes place through a bond-forming process, it is necessary to include the catalyst in the reaction environment in which the polymer synthesis takes place. This condition limits the types of catalysts that can be used for composite preparation.
Even in the case of 3D printing, incorporating the catalyst during the construction of the polymeric framework is complex. Conversely, the formation of the porous polymer framework through gelation or solidification of polymer solutions, followed by solvent removal, simplifies the formation of both the porous matrix and the polymer-catalyst composite, since the catalyst can be incorporated into the polymer solution before gelation/solidification. [16f] The crucial step in the preparation of the 3D porous structure is removing the solvent under conditions that preserve the porosity and integrity of the material. Although it is an expensive process, solvent removal with supercritical solvents is undoubtedly the best method for obtaining well-defined structures. [24,16f] Several semi-crystalline polymers are capable of forming physical gels and can be converted into porous structures by supercritical drying or freeze-drying. [1,26] The polymers most studied in the literature that give rise to physical gels and aerogels are syndiotactic polystyrene (sPS) and polyimide (PI). [26]

If the polymer used as the photocatalyst host matrix is hydrophilic, the resulting gel is a hydrogel, and the 3D porous structure includes water in the interstitial spaces between the chains. Conversely, if the polymer is hydrophobic, the polymer gel includes an organic solvent and, after solvent removal, the polymer chains are simply separated by air over the entire 3D volume. In this case, we speak of aerogels (if the pore sizes range from a few nm to a few tens of nm) or sponges (if the pore sizes are between 50 and 500 μm). [27] Therefore, the main advantage of hydrophilic polymer/photocatalyst composites is that they can be used in photocatalytic reactions involving water, electrolytes, and polar organic reactants. Conversely, hydrophobic polymer/photocatalyst composites are useful for photocatalytic reactions involving non-polar reactants.

When a commercial or custom-designed polymer is used as the host matrix of a photocatalytic composite, catalyst inclusion is also often achieved by simply immersing the porous polymer framework in a solvent in which the photocatalyst is dispersed, followed by drying at a temperature that depends on the structural and thermal stability of the polymeric support. [28,17d]

In this review, we provide a perspective on porous organic polymer-photocatalyst composites obtained by coupling semiconductors with hydrophilic or hydrophobic polymers that do not modify the photocatalyst properties but can influence the catalytic process. Particular attention is given to polymer composites in the form of monolithic hydrogels, sponges, or aerogels obtained by dissolving the polymer in a solvent containing the dispersed photocatalyst, inducing gelation or solidification of the solution, and subsequently removing the solvent by a drying process. The applications discussed here cover photocatalytic H2 evolution, CO2 reduction, and organic synthesis. The main aim is to outline an alternative perspective to the highly studied environmental photocatalytic applications, highlighting the photoactive properties of monolithic polymer composites arising from the incorporation of semiconductors into the 3D porous structure of organic polymers.
It is important to underline that graphitic carbon nitrides, metal-organic frameworks (MOFs) and covalent organic frameworks (COFs) can be considered nanoscale porous polymers, but they have not been described in a monolithic form, so these materials are beyond the scope of the current review.

Hydrophilic Polymer-Photocatalyst Composites

Hydrophilic polymer/photocatalyst composites are based on the use of hydrogels as supports for photocatalysts. As mentioned in the introductory section, one speaks of hydrogels when the 3D porous structure of the polymer is sustained by the water that occupies the interstitial spaces between the polymer chains. In this section, we focus on hydrogel-photocatalyst composites for application in the field of catalysis. A hydrogel (a gel made of hydrophilic chains that form a 3D cross-linked polymeric network) has a 3D macroporous structure that can swell quickly and retain large volumes of water in its swollen state. [30] The water content of a hydrogel is influenced by several factors, such as its chemical composition and the density of cross-linkers. Since the polymeric chains are hydrophilic, functional soft materials can be realized. [31] The classification of hydrogels depends on the polymers involved, the method of crosslinking and their ionic charge. In general, hydrogels can be prepared by polymer crosslinking, which can be physical, chemical, or both. The crosslinking process can be carried out in different ways, such as simple mixing, solution casting, bulk polymerization, free radical polymerization, polymerization under UV or gamma irradiation, and the interpenetrating network formation method. Hydrogels are also classified according to ionic charge into cationic, anionic and neutral hydrogels. The overall charge of the hydrogel depends on the charge of the polymer used during the preparation step. [32] Hydrogels can be considered ideal hosts for embedding photoactive materials to realize novel composites. Indeed, owing to their hydrophilic nature, hydrogel-based photoactive composites may show interesting photoactivities in aqueous media. The hydrogel matrix, which acts as a host for the catalytic species, can be used to control the catalytic processes. In particular, the photocatalytic process can be made more selective and more efficient if the permeability of the hydrogel is able to differentiate between the molecules involved in the chemical reaction and can be modulated by external stimuli. Therefore, the strategy of coupling photocatalysts with hydrogels can contribute to more efficient and environmentally friendly catalytic processes. Figure 1 shows the number of publications for the keywords "hydrogel and photocatalysis"; the results indicate an increasing research interest worldwide in this topic.

Preparation of hydrogel-photocatalyst composites

To confer photocatalytic properties on hydrogels, photocatalytic materials are introduced into the 3D cross-linked polymeric networks. These networks provide a porous structure that limits catalyst leakage into the reaction medium (air or water) and facilitates the loading of a large amount of catalyst particles, while the nanometer-sized photocatalyst provides the active sites on which the catalytic reactions develop. Different methods, summarized in Scheme 1, have been proposed in the literature for the preparation of hydrogel-photocatalyst composites. These methods can be divided into the following three categories.
a) The embedding of photocatalytic particles in hydrogel networks is realized by combining hydrogel precursors with a colloidal suspension of photocatalyst nanoparticles before hydrogel formation. For instance, Mansurov et al. used N,N-dimethylacrylamide (MDAA) and acrylamide (AA) as precursors for the fabrication of the hydrogel and TiO2 Degussa/Evonik as the photoactive phase. The hydrogels were synthesized at room temperature by free radical polymerization of AA with MDAA as the cross-linking agent, and then different amounts of TiO2 colloidal suspension were added to the system. [31] Another example of TiO2 particles embedded into a hydrogel was reported by Katzenberg et al. In this case, the hydrogel was formed by copolymerization of hydroxyethyl methacrylate (HEMA) and acrylic acid (AA) with ethylene glycol dimethacrylate (EGDMA) as the crosslinking agent; HEMA, AA and EGDMA were mixed in water with an aqueous dispersion of anatase TiO2 nanoparticles. [33] In the work by Kazem et al., a polyacrylamide (PAAm) hydrogel was synthesized under sunlight with commercial TiO2 particles as the initiator, acrylamide as the monomer and N,N'-methylenebisacrylamide as the crosslinking agent. [34]

b) Photocatalysts are synthesized in situ in hydrogel networks by combining the hydrogel with a photocatalyst precursor solution; the photocatalysts are then obtained by oxidation, reduction or sulfuration of the precursor. In this context, Yang et al. prepared cadmium sulfide/polyacrylamide hydrogels (CdS/PAM). The monomer acrylamide (AM) and the crosslinker N,N'-methylenebisacrylamide (BIS) were dispersed in aqueous medium at ambient temperature and stirred until complete dissolution. The initiators ammonium persulfate (APS) and sodium bisulfite (SBS) were then added, and the final solution was transferred to a plastic dish where polymerization yielded the PAM hydrogel. To obtain the CdS/PAM composite, the hydrogel was immersed in a solution containing cadmium chloride to allow interaction between the amino groups on the polymer chains and Cd²⁺, and was subsequently treated with a sodium sulfide solution for several hours. During this step the colorless PAM hydrogel gradually turned yellow, indicating that CdS nanoparticles had been successfully synthesized in the hydrogel matrix. [35] Following almost the same process, Li et al. synthesized a cationic hydrogel encapsulating CdS particles. [36]

c) Self-assembled hydrogels are prepared by chemical reduction of conductive materials, such as graphene oxide (GO), carbon nanotubes, polypyrrole and poly(3,4-ethylenedioxythiophene) polystyrene sulfonate, followed by H-bond and π-π self-assembly. Chen et al. used this method to prepare a polyaniline/TiO2-graphene hydrogel (GH) by chemical reduction of graphene oxide (rGO) followed by H-bond and π-π self-assembly. The rGO and PANI act as transmitters for the photogenerated e⁻ and h⁺, enhancing the photocatalytic performance. [37]

Photocatalytic hydrogen evolution

One of the grand challenges of the 21st century is the transition toward more sustainable energy systems. Hydrogen (H2) represents an ideal energy storage medium owing to its high energy density (142 MJ/kg) and its carbon-free combustion. Hydrogen is also a major reactant in important industrial reactions, such as carbon dioxide hydrogenation to methanol [38] or ammonia production (Haber-Bosch process).
[39] However, the extensive use of fossil fuels in industrial hydrogen production plants contributes to the rapid depletion of non-renewable resources and to various environmental problems, such as global warming and the greenhouse effect. [40] For these reasons, in the last decade attempts have been made to develop alternative and effective processes for producing energy from renewable sources such as solar energy. In this context, heterogeneous photocatalysis can be seen as a green route to hydrogen production via the water splitting reaction. Up to now, different photocatalysts, such as sulfide-, oxide- and oxynitride-based materials, have been proposed for hydrogen production via water splitting under visible and solar irradiation. [41] However, the direct solar-to-hydrogen energy conversion efficiency of photocatalytic water splitting systems is still very low, [42] since the photogenerated holes easily recombine with electrons, limiting the reaction efficiency. [43,43a] An ionic hydrogel containing CdS nanoparticles embedded in the 3D hydrogel structure (CdS/HGelPDAM2) was prepared and tested by Li et al. for photocatalytic hydrogen production in the presence of triethanolamine or Na2S–Na2SO3 as sacrificial agents. [36] The experiments revealed an excellent performance of the CdS/HGelPDAM2 composite, with a very high hydrogen production rate (~10 mmol h⁻¹ g⁻¹), and showed that the best system was CdS in the cationic hydrogel (CdS/HGelPDAM2). This excellent photocatalytic performance stems from the high swelling capacity of the hydrogel structure, which allows the sacrificial agent solution to move freely inside the hydrogel. In this way, the molecules of the sacrificial agent can effectively capture the photogenerated holes, inhibiting electron-hole recombination and improving the activity and stability of the CdS/HGelPDAM2 photocatalyst. [36] A hydrogel-based photocatalyst composed of ZnO/ZnS nanoparticles embedded into a polyvinyl alcohol (PVA) polymeric matrix was prepared by Poliukhova et al. [46] The observed H2 production rate under UV light was 18.8 μmol h⁻¹ from a 0.1 M Na2S and 0.1 M Na2SO3 aqueous solution. It was hypothesized that the PVA hydrogel is an excellent support for photocatalytic H2 production owing to its high transparency to light, the 3D porous structure generated by the interconnection of the polymer chains, and its hydrophilic nature, which provides good water transport inside the hydrogel. For these reasons, PVA-based composite hydrogels have shown good performance in photocatalytic H2 generation together with good recyclability. [46,30b] Sai et al. synthesized functional hybrid polyelectrolyte hydrogels containing self-assembling chromophore amphiphiles, [47] reporting a hydrogen production rate of 107.4 mol (mol catalyst)⁻¹ h⁻¹ (sacrificial agent: ascorbic acid). [47]

Photocatalytic CO2 reduction

The photocatalytic conversion of CO2 into renewable fuels under solar radiation is now considered an ideal strategy to reduce the concentration of carbon dioxide in the atmosphere and tackle the urgent problem of global warming. Photoreduction of CO2 is a complex reaction, which can yield numerous products, such as CO, HCOOH, HCHO, CH3OH, CH4 and other hydrocarbons.
[48] An example of a hybrid photocatalyst hydrogel composed of carbon dot (CD)-decorated BiVO4 (BVO) and a reduced graphene hydrogel (rGH) (CD-decorated BVO/rGH) was prepared by Ma et al. [49] The CO2 photoreduction performance observed in the presence of CD-decorated BVO/rGH is reported in Figure 2. The yields of CH4 and CO from CO2 photoreduction were 32.2 μmol g⁻¹ and 92.3 μmol g⁻¹ after 180 min of run time, respectively (Figure 2a), and both yields remained almost unchanged after six reuse cycles (Figure 2b). It was argued that the photoexcited electrons are transferred from the conduction band of BVO to the surfaces of the CDs and the rGH, where they reduce CO2 to CO and CH4. [49] Meanwhile, the photoinduced holes in the valence band of BVO are quenched by triethanolamine, used as a sacrificial electron donor. [50] Despite the interesting results reported in the literature, research papers dealing with the use of organic hydrogel-photocatalyst composites in CO2 photoreduction are still limited, mainly because of the low solubility of CO2 in water, which limits its conversion under light irradiation. Hence, current research on CO2 conversion relies mostly on aerogels, which are obtained by freeze-drying or supercritical CO2 drying of hydrogels. [49]

Organic synthesis

An example of a hydrogel-photocatalyst composite for organic synthesis was reported by Ma et al., who proposed a freezing-thawing strategy for the preparation of an alkaline CuO-chitosan hybrid hydrogel (CuO@CS–H) for the production of lactic acid under visible light at different operating conditions (i.e., KOH concentration, catalyst dosage, reaction temperature and reaction time) (Figure 3). [51] The authors demonstrated that the synergism between CuO and the alkaline chitosan hydrogel promoted a highly efficient photocatalytic reforming of xylose to lactic acid, with a maximum lactic acid yield of about 82 %. [51] A possible reaction pathway for the synthesis of lactic acid via photocatalytic reforming of xylose by CuO@CS–H was also proposed [51] (Scheme 2): xylose is first isomerized to xylulose, xylulose is then converted to glyceraldehyde by a retro-aldol reaction, and lactic acid is finally obtained from glyceraldehyde. [51] Cui et al. prepared hydrogel-photocatalyst composites by freeze-induced encapsulation of ultrathin carbon nitride nanosheets in three-dimensionally ordered porous matrices of polyvinyl alcohol (FG-s-PVA@CNNS3) (Scheme 3). [52] The presence of the polyvinyl alcohol hydrogel did not affect the visible light absorption of the carbon nitride nanosheets and enhanced the rate of photoinduced charge separation/transfer. FG-s-PVA@CNNS3 showed excellent performance in the selective oxidation of glucose to lactic acid, with a yield of about 93 %, and proved to have good stability and reusability over ten reuse cycles. Interestingly, FG-s-PVA@CNNS3 also showed very good performance in the photocatalytic oxidation of fructose, mannose, rhamnose, arabinose and xylose, producing lactic acid in high yields (89.1 % from fructose, 88.6 % from mannose, 56.0 % from rhamnose, 71.1 % from arabinose and 70.9 % from xylose, respectively). [52]

Hydrophobic Polymer-Photocatalyst Composites

When hydrophobic polymers are used as supports for photocatalysts, one speaks of aerogels or sponges, in which the polymer chains are separated by air throughout the three-dimensional volume they occupy.
The size of the pores distinguishes an aerogel from a sponge: sponges are characterized by pores between 50 and 500 μm in size, or even larger, whereas aerogels are characterized by pores ranging from a few nm to a few tens of nm. Furthermore, aerogels are ultra-light and highly porous (the porosity can reach 99 %) compared to sponges. In this section, we focus on both sponge-photocatalyst and aerogel-photocatalyst composites for photocatalytic reactions.

Sponge-photocatalyst composites

Solid sponges are materials characterized by large specific surface areas and high porosity. [53] In the field of heterogeneous photocatalysis, polydimethylsiloxane (PDMS) sponges are the most suitable supports for different types of semiconductors, since PDMS sponges have high transparency to the light wavelengths commonly used in photocatalysis. [54,54b]

Preparation of PDMS sponge-photocatalyst composites

The methods used to prepare PDMS sponges have been illustrated and discussed extensively in the scientific literature. [9] Since the purpose of this review is the analysis of photocatalytic systems obtained by dispersing photoactive phases within porous polymeric materials, only the synthesis methods useful for this purpose are briefly discussed.

a) PDMS sponge/photocatalyst composites can be obtained by simply adding a certain amount of photocatalyst powder to liquid PDMS. The final composite is obtained by a direct curing process, although intermediate steps and additional processing, such as a separate heat treatment, may be necessary. Adding the photocatalyst to the liquid PDMS system before solidification allows easy control of the final shape of the sponge composite. [56] As an alternative to the use of an already synthesized powder photocatalyst, the semiconductor particles can be prepared from liquid precursor salts dissolved in the liquid PDMS prepolymer [56c] by the sol-gel method, which is widely employed in the preparation of photocatalysts. [57] This methodology generates a composite characterized by an optimal dispersion of the photocatalyst particles within the PDMS structure, and several examples are available in the scientific literature. [58]

b) An interesting alternative for the preparation of porous PDMS sponges is based on the use of a powder photocatalyst previously mixed with a template, which is then immersed in a liquid solution containing the PDMS prepolymer. [54b] In detail, the photocatalyst is mixed with template particles (in most cases sugar is used). The mixture is kneaded in a mold whose dimensions and shape can be varied according to the final geometry of the sponge to be realized. The block thus obtained is then dried and immersed in a solution containing the PDMS prepolymer and a curing agent, so that the PDMS fills the spaces within the template by capillary forces. This stage is usually carried out in a vacuum chamber to remove the solvents used in the preparation of the solutions. The PDMS-filled block is then cured at 80 °C for the time necessary to achieve polymerization. Subsequently, the sugar/PDMS template is immersed in water to dissolve the sugar and finally dried to obtain the sponge-photocatalyst composite.
The final material consists of photocatalyst particles well embedded within the PDMS matrix. This method was used for the preparation of a PDMS sponge containing ZnO particles dispersed in the polymer framework [59] (Scheme 4).

c) A PDMS/photocatalyst composite can also be obtained by dispersing the catalyst in an already solidified PDMS sponge. [54b] In this case, the photocatalyst particles are embedded in the PDMS sponge by impregnating it with a suspension of the photocatalyst in a suitable low-boiling solvent (such as ethanol). After the impregnation step, the solvent is allowed to evaporate at room temperature. In the resulting spongy material, the photocatalyst particles are deposited on the inner surface of the PDMS pores. The PDMS sponge can thus provide a high surface area for the photocatalyst, minimizing the aggregation phenomena that usually occur when the photocatalyst is deposited on the external surface of non-porous solid substrates (such as glass). Porous PDMS sponge/TiO2 composites prepared with this method are reported in the literature. [60]

d) In addition to semiconductor particles, specific organic dyes can be immobilized in PDMS-based sponges to give them photocatalytic properties. [19a,61] Indeed, the most common materials used in photocatalyzed reactions have typically been molecular systems based on transition metal complexes or organic dyes. [62] Some examples of organic dyes employed as photocatalysts are eosin Y (EY) and rose bengal (RB), [63] while the most extensively studied organometallic complexes are iridium- or ruthenium-based polypyridyl complexes. [64] These organic dyes and organometallic complexes are homogeneous photocatalysts capable of absorbing visible light: under visible light, an electron is promoted from the highest occupied molecular orbital (HOMO) to the lowest unoccupied molecular orbital (LUMO), leading to the formation of a positive hole in the HOMO and an excited electron in the LUMO. [62] From an economic point of view, the use of organic dyes (i.e., EY and RB) is preferable to organometallic complexes. Organic dyes have very high photocatalytic performance and can homogeneously catalyze a wide range of chemical reactions, including water splitting, CO2 reduction, C–C coupling reactions, oxidative coupling of amines, heterocycle formation and enantioselective alpha-alkylation. [62] However, homogeneous catalysis suffers from several disadvantages. Since the photocatalyst is dissolved in the reaction medium, complicated separation steps are needed to recover the photocatalyst from the reaction products and recycle it in the photoreactor. Furthermore, the possible presence of the photocatalyst in the final product is often unacceptable because of its toxicity. Additionally, many dyes used as photocatalysts have poor solubility in a wide range of solvents, limiting their use. [62] A possible solution to these drawbacks is to incorporate the organic dyes into macroscopic structures such as PDMS-based sponges.
To immobilize organic dyes in a PDMS sponge, pre-treatment steps are often necessary, and they vary according to the type of dye to be used. For instance, in the case of RB (Scheme 5), the PDMS sponge is subjected to an air plasma treatment to induce the formation of hydroxyl groups on the sponge surface, followed by silanization of the hydroxyl groups with vinyltrimethoxysilane. [61b] RB is then incorporated into the modified PDMS sponge by ion exchange. [61b]

Applications of PDMS based sponges in photocatalytic reactions

In this section, the available scientific papers dealing with the use of PDMS-based sponges as supports for photocatalysts are briefly examined. It must be underlined that, in the case of inorganic semiconductor particles (e.g., ZnO and TiO2) embedded into PDMS sponges, all the reported applications concern the photocatalytic removal of pollutants (mainly the degradation of organic dyes), and these composites have therefore been studied so far only for environmental applications. For instance, Zhu et al. reported that the photocatalytic activity of a PDMS/TiO2 composite in the degradation of methyl orange was significantly higher than that of powdered TiO2. [65] The authors attributed the high removal capacity of PDMS/TiO2 to the high sorption capacity of the PDMS matrix towards methyl orange molecules (and, more generally, towards pollutants highly soluble in water). In this way, the pollutants sorbed by the PDMS come into intimate contact with the surface of the photocatalyst particles incorporated in the PDMS framework, increasing the photocatalytic reaction rate. Hickman et al. demonstrated that a PDMS/TiO2 composite sponge was able to remove the toxic dye Rhodamine B from an aqueous solution under sunlight, highlighting once again that the removal process arises from synergistic effects between dye adsorption in the sponge matrix and photocatalytic degradation by the TiO2 particles dispersed in the PDMS sponge. [60] The same authors also showed that the overall removal efficiency obtained with the composite was comparable to that obtained by dispersing in solution a quantity of TiO2 equivalent to that embedded in the PDMS sponge; however, the use of the PDMS/TiO2 sponge avoids the post-treatment steps needed to separate TiO2 particles from the treated solution. Lee et al. [56a] added gold nanoparticles to a porous PDMS sponge containing TiO2 particles, obtaining a significantly higher degradation efficiency for the Rhodamine B dye (both under UV and visible light) than with the PDMS sponge containing only TiO2 particles; the recyclability of the PDMS-based sponge over multiple cycles was also demonstrated. [56b] Furthermore, it was recently reported that a PDMS/ZnO sponge-like photocatalyst showed significantly better photodegradation of methylene blue dye under illumination (UV and UV-Vis) than the ZnO-free PDMS sponge. [59] Based on the above-mentioned works, photocatalytic sponges obtained by incorporating semiconductor particles into the PDMS structure can be considered efficient, highly stable and easily recoverable photocatalytic systems for the decontamination of wastewater containing organic dyes. However, no scientific papers are found in the literature in which PDMS sponges functionalized with inorganic semiconductor particles are used for photo-assisted organic synthesis reactions.
Photocatalytic synthesis of organic compounds under visible light

PDMS sponges functionalized with organic dyes have been employed for the visible-light-driven synthesis of organic compounds. [19a,61] The cross-dehydrogenative coupling (CDC) reaction of C–H bonds in the α-position to nitrogen atoms allows the selective formation of C–C and C–X bonds under oxidative conditions. [66] PDMS-RB sponges have been tested in this type of reaction, [61b] and the highest photocatalytic efficiency (yield = 97 %) was achieved with ethanol as the solvent. [61b] It was argued that the RB molecules are excited under visible light irradiation. [61b] Li et al. also formulated a modified PDMS-RB sponge: a plasma-treated, hydroxyl-rich PDMS sponge was modified with 3-aminopropyltrimethoxysilane, the resulting amino groups were coupled with Fmoc–Glu(OtBu)–OH to give a Glu-functionalized PDMS sponge, and RB was finally incorporated by ion exchange. [67] The modified PDMS-RB sponge was effective in CDC reactions of tertiary amines with substituted ketones (Scheme 8), but using water as the solvent. [67] These papers confirm the efficiency of PDMS-RB sponges as visible-light-active photocatalytic materials for the synthesis of organic compounds, especially through CDC reactions. It is important to underline that papers dealing with PDMS sponges functionalized with the EY dye for the photo-assisted synthesis of organic compounds are still lacking.

Applications of non-PDMS based sponges in CO2 photoreduction

Although PDMS-based sponges are the most widely used supports for photocatalysts, for the reasons specified above, a melamine-based sponge (MS) has recently been proposed. [68,4a,69] Carbonaceous materials (such as graphene and g-C3N4) can be easily dispersed in such sponges; indeed, MS/graphene composites have been successfully tested in oil-water separation processes. [70] In the case of heterogeneous photocatalysis applied to processes not aimed at pollutant removal, Yang et al. formulated a monolithic melamine sponge/g-C3N4 composite (MS/g-C3N4) for CO2 photoreduction. [68] The composite was prepared by an ultrasonic coating method: the MS sponge was immersed in a g-C3N4 suspension for 30 min under sonication, the excess solution was squeezed out, and the sample was finally treated at −70 °C for 48 h to obtain the g-C3N4/MS composite with g-C3N4 particles uniformly dispersed within the skeleton of the melamine sponge. The CO2 photoreduction test was performed by injecting a fixed volume of gaseous CO2 into the photoreactor containing the photocatalyst in distilled water. The main reaction products detected were CO and CH4, together with traces of H2 (7.48 μmol g⁻¹ h⁻¹ of CO, 3.93 μmol g⁻¹ h⁻¹ of CH4 and 0.26 μmol g⁻¹ h⁻¹ of H2). Notably, the g-C3N4/MS composite showed higher photocatalytic activity than powdered g-C3N4, probably because of the high CO2 adsorption capacity of melamine. [71] In this way, the MS sponge can concentrate CO2 in its framework, allowing a large number of reactant molecules to come into contact with the photocatalyst surface and consequently enhancing the photocatalytic performance.

Aerogel-photocatalyst composites

In general, aerogels are porous materials prepared by sol-gel based methods that possess peculiar properties, such as high porosity, extremely low density, high specific surface area and very low thermal conductivity.
[72] The first aerogel formulations were proposed by Kistler, [73] who first used supercritical drying as a method for obtaining highly porous structures. Subsequently, thanks to advances in precursor chemistry and drying methods, different types of aerogels with organic, inorganic or even hybrid polymeric networks were prepared. [74,72a,75] Fields of application of composite aerogels include environmental protection, organic synthesis, [76] biochemical synthesis and biosensors. [77,72a] Indeed, photoactive phases can be embedded in the aerogel framework, providing efficient materials for the photocatalytic oxidation of pollutants. However, to obtain high pollutant degradation efficiencies on aerogel networks, an optimal affinity between the chemical-physical structure of the aerogel and the chemical nature of the contaminant to be removed is a key aspect to consider: the photocatalytic efficiency increases when the aerogel ensures a high absorption efficiency of the pollutants. [16f,20d]

Preparation of aerogel-photocatalyst composites

Inorganic (mainly metal oxide and chalcogenide based) aerogel-photocatalyst composites have been studied extensively; most of them were prepared using molecule-based sol-gel, assembly and template methods. [1] Carbon-based aerogels have also been investigated as supports for photoactive phases. For example, graphene-based aerogel (GA) structures as supports for photocatalytic particles have been the most investigated, owing to their high specific surface area, excellent electron mobility and flexible mechanical properties. [78] Most GA/photocatalyst composites are prepared by hydrothermal methods [1] (such as hydrothermal self-assembly [79] and in situ hydrothermal growth [80]). However, it is worth pointing out that all of the above preparation routes for inorganic or carbon-based aerogel composites involve rather complex, multi-step synthesis procedures that are very energy intensive and difficult to scale up. [1] As an alternative, owing to their low cost and facile synthesis strategies, organic polymer aerogels (OPA) have recently been considered for obtaining effective photocatalytic composites. [1] For this reason, the methods most commonly used to prepare OPA/photocatalyst composites are briefly described here.

a) Thanks to the presence of suitable surface functional groups, some organic polymers (such as polyvinylpyrrolidone and polyvinyl alcohol) are water soluble. An aqueous suspension containing the photocatalyst particles and the dissolved polymer can therefore be easily prepared and directly transformed into aerogel-photocatalyst composites through a simple freeze-drying step, without going through the intermediate formation of a gel. Using this method, for example, polyvinylpyrrolidone/MoS2 and polyvinyl alcohol/C3N4 composite aerogels were prepared. [81,24a] However, the freeze-drying step requires temperatures below 0 °C (typically down to −80 °C), so refrigeration cycles with costly equipment are necessary. Freeze-drying is also time-consuming, since very long treatment times (up to 48 h in some cases) are required to obtain the 3D aerogel porous network.
b) Recently, in the field of heterogeneous photocatalysis, aerogels based on thermoplastic hydrophobic polymers with a nanoporous or ultramicroporous crystalline phase, such as syndiotactic polystyrene (sPS), [82] have aroused particular interest as an alternative to water-soluble polymers. sPS aerogels are easily prepared from a gel obtained by dissolving the polymer in an organic solvent (typically chloroform, benzene or toluene [26]) and cooling the solution to room temperature. [16f] During the solvent removal process, the collapse of the highly porous network can thus be avoided, allowing monolithic aerogels to be obtained. The preparation procedure for aerogel-photocatalyst composites is the same as that used for pure sPS aerogels, with one additional precaution: the photocatalyst particles are added to the solution containing the dissolved polymer, and the resulting suspension is kept under sonication to minimize aggregation of the photocatalyst particles during gel formation at room temperature. [20c-f,83]

Photocatalytic hydrogen evolution

Photocatalytic hydrogen production has mainly been studied using inorganic or graphene-based aerogels, [1,84] and research papers dealing with photocatalytic composites based on organic polymer aerogels for this reaction are still scarce. Recently, Zhao et al. prepared a polyimide (PI)/Ag composite aerogel by an ethanol supercritical drying technique. [85] The PI/Ag aerogel showed a photocatalytic hydrogen production rate of about 166 μmol h⁻¹ g⁻¹ from an aqueous methanol solution under visible light. The authors also showed that the H2 production rate observed in the presence of the PI/Ag aerogel was much higher than that achieved with the bare PI aerogel, owing to the lower charge-carrier recombination rate, enhanced visible light absorption and large specific surface area of the PI/Ag composite. [85] However, it must be considered that Ag was introduced into the aerogel structure during the polymerization step that generates the polyimide, rather than starting from a previously synthesized polyimide. Schreck et al. prepared Pd-TiO2 based monolithic polymer aerogels starting from 3D-printed polymeric scaffolds manufactured from a commercially available acrylate-based resin. [86] In detail, colloidal nanoparticle dispersions of TiO2 and Pd were inserted into quartz tubes containing the 3D-printed polymer scaffolds. Gels were then obtained by heat treatment at 60 °C for 30 minutes in an ethanol-saturated atmosphere, followed by cooling to room temperature through a series of successive steps. As a final step, the gels were supercritically dried in CO2 to generate the final aerogel composites, which were tested for H2 production from a methanol/water vapor mixture in a gas-flow reactor under UV light irradiation. [86] The authors observed that the H2 evolution rate can be increased up to 1200 μmol h⁻¹ g⁻¹ by changing the geometry of the composite aerogel, and underlined that the use of 3D-printed polymeric scaffolds is a suitable way to manipulate monolithic aerogel-photocatalyst composites without altering their composition and morphology.
[86]

Photocatalytic synthesis of organic compounds

Despite the existence of some review articles on the use of porous organic polymers as catalysts for light-driven organic transformations, [87] only a few reports have appeared on organic synthesis reactions employing photocatalysts incorporated into organic polymer aerogels prepared from previously synthesized or commercial polymers. [20e] For example, an sPS/N–TiO2 composite aerogel improved the selectivity and phenol yield of the photocatalytic hydroxylation of benzene compared to the N–TiO2 powder. [20b] The authors rationalized this result on the basis of the different affinities of benzene (a non-polar compound) and phenol (a polar compound) for the sPS aerogel (a non-polar polymer): the amount of benzene absorbed in the sPS polymer is considerably higher than that of phenol. [88] Therefore, the phenol formed by hydroxylation of benzene on the N–TiO2 photocatalyst dispersed in the sPS matrix can easily desorb from the polymer toward the aqueous phase surrounding the sPS/N–TiO2 composite (Scheme 10), so that phenol over-oxidation reactions are prevented. Furthermore, the monolithic sPS/N–TiO2 aerogel is easily recoverable from the liquid medium. [20b] The same authors dispersed an Fe–N co-doped TiO2 photocatalyst in an sPS monolithic aerogel using the same preparation method and reported, also in this case, an enhancement of benzene conversion and phenol selectivity under UV and visible light with respect to the photocatalyst dispersed in powder form in the liquid reaction medium. [89] Given the hydrophobic properties of sPS aerogels (high affinity for non-polar reactants and very low affinity for the desired polar products), the developed sPS/photocatalyst composites may represent a viable route toward innovative green processes for the selective oxidation of aromatic hydrocarbons under mild conditions.

Summary and Outlook

In this review, we have described recent developments in the preparation of polymer-photocatalyst composites consisting of photocatalytically active phases embedded in cross-linked porous organic polymers, with particular attention to their application in photocatalytic hydrogen evolution, CO2 photoreduction and light-driven organic synthesis. The main aim was to outline an alternative perspective to the widely studied environmental photocatalytic applications, highlighting the photoactive properties that these composites acquire through the incorporation of semiconductors in the 3D porous structure of organic polymers. A marked improvement in photocatalytic activity, selectivity and stability can be obtained by exploiting the unique physical properties of porous organic polymers, such as their high specific surface areas and the optimal accessibility of reactants and products within the porous structure of the polymer. Furthermore, thanks to the high and homogeneous dispersion of the photocatalysts in the 3D polymeric structures, aggregation between the photocatalytic particles is minimized, so that the composite can show better photocatalytic activity than the corresponding powder photocatalysts.
We have presented hydrophilic organic polymers (hydrogels) and hydrophobic organic polymers (sponges and aerogels) as host networks, focusing mainly on the literature concerning organic polymer-photocatalyst composites obtained through "gelling/drying" and direct mixing processes (during or after gel preparation), as well as through impregnation or ion exchange between photocatalysts and commercial or previously prepared polymers. The following is a summary of some key points about the current limits and outlook for catalytic applications of the organic polymer/photocatalyst composites discussed in this review.

1) Thanks to the intrinsic water absorption capacity of hydrogels, hydrogel-photocatalyst composites can be seen as smart reactors (i.e., photoreactive solid phases made of a porous polymeric matrix, in which the photocatalytic reactions occur, and the photocatalyst) for photocatalytic hydrogen production by water splitting and for a variety of environmentally friendly selective photocatalytic reactions involving reactants and products that are highly soluble in aqueous media. Some research papers report interesting results, but these studies have mainly focused on batch systems. The use of highly translucent hydrogel-based photocatalytic composites with accessible photoactive sites could instead allow the design and realization of large-scale flow-column photocatalytic reactors capable of producing the desired products continuously. Furthermore, most research papers have focused on the formulation of the photocatalysts to be dispersed in the hydrogels, whereas studies on the intrinsic properties of the hydrogels used as hosts are still scarce. It is believed that tuning the gel network structure as well as the swelling and adsorption properties of the hydrogels can contribute synergistically to the photocatalytic performance of the composites; further experimental investigations on these aspects are therefore strongly required.

2) From the literature survey, it emerged that only PDMS sponges functionalized with organic dyes have been studied for photocatalytic organic synthesis reactions (specifically, cross-dehydrogenative coupling reactions). It is therefore suggested to test these reactions also with inorganic photocatalysts (such as ZnO and TiO2-based materials) incorporated in PDMS sponges, which, so far, have only been tested in the photocatalytic removal of pollutants. This would allow the preparation of PDMS/photocatalyst composites by methods much simpler than those required to disperse organic dyes in the porous structure of the sponge. However, particular attention must be paid to the formulation of the inorganic photocatalysts to be embedded, so as to make them selective towards the desired products.

3) Aerogel composites based on sPS have aroused particular interest thanks to their easy preparation and the unexpectedly high phenol yield obtained in the photocatalytic hydroxylation of benzene. sPS/photocatalyst composites can therefore be seen as photoreactive solid phases (smart photoreactors) that could allow a significant leap forward in the development of a green process capable of producing phenol under mild conditions. However, these composites have so far been tested only in photocatalytic benzene hydroxylation. An in-depth examination of such composites in other photocatalyzed reactions (e.g.,
selective oxidation reactions involving the activation of C–H bonds in aromatic hydrocarbons, and CO2 reduction) to produce oxygenated organic compounds would be very useful and would facilitate a more widespread use of these photoreactive solid phases. Interesting results in hydrogen production from a methanol/water vapor mixture in a gas-flow reactor were obtained using polymer aerogel-photocatalyst composites prepared from 3D-printed polymeric scaffolds manufactured from a commercially available acrylate-based resin. For these photocatalytic systems, both physical and chemical modifications of the composite aerogel surface are suggested to increase the reactant adsorption performance. Moreover, the development of methods that combine the design of the irradiation source and of the aerogel structure to maximize the irradiation efficiency, together with the assessment of photocatalytic performance over prolonged irradiation times, is strongly recommended to enable scale-up considerations.

Vincenzo Venditto graduated in Chemistry in 1988 at the University of Napoli, where in 1993 he also received his PhD in Chemistry. He is currently Full Professor of Industrial Chemistry at the Department of Chemistry and Biology "A. Zambelli" of the University of Salerno. His main research activities are focused on the physical-chemical and structural characterization of fossil and bio-based polymeric materials. His most recent interest is the design and preparation of photocatalytic composites based on polymeric materials with microporous-crystalline phases for applications in water remediation and catalysis.

Vincenzo Vaiano graduated in Chemical Engineering at the University "Federico II" of Napoli in 2000. In March 2006 he received the title of PhD in Chemical Engineering. He is currently Associate Professor of Industrial Chemistry at the Department of Industrial Engineering of the University of Salerno. In 2005, he conducted research activities at the University of Bradford (UK). Prof. Vaiano is the author of several scientific papers dealing with heterogeneous photocatalysts for different applications, including the removal of water pollutants and the selective synthesis of organic compounds.

Olga Sacco is Assistant Professor of Industrial Chemistry at the Department of Chemistry and Biology "A. Zambelli" of the University of Salerno. In 2011, she graduated in Chemical Engineering at the University of Rome "La Sapienza". In February 2014 she received the PhD title in Chemical Engineering. Her main research lines are: synthesis and characterization of catalytic materials, phosphor-based nanomaterials, nanostructured photocatalysts and supports, photocatalysis for removing pollutants from water and wastewater, and photocatalysts for selective oxidation.

Figure 1. Number of publications containing the keywords "hydrogel" and "photocatalysis" in the title since 2000 (search performed with Web of Science on May 25, 2023).

Figure 4. Benzene conversion and phenol yield over five reuse cycles at pH = 2. The photocatalytic reactions were performed in 35 mL of aqueous solution containing benzene (initial concentration: 25.6 mM), acetonitrile (2.3 mL) as a co-solvent and 2.8 mL of H2O2 (30 wt % in H2O) with 3 g/L of sPS/N–TiO2 aerogel under visible light. Reproduced from ref. [20b], Copyright (2022), with permission from Elsevier.
10,939.6
2023-11-20T00:00:00.000
[ "Chemistry", "Materials Science" ]
A low-carbon economic dispatch method for regional integrated energy system based on multi-objective chaotic artificial hummingbird algorithm

This paper investigates Regional Integrated Energy Systems (RIES), emphasizing the connection of diverse energy supply subsystems to address varied user needs and enhance operational efficiency. A novel low-carbon economic dispatch method, utilizing the multi-objective chaotic artificial hummingbird algorithm, is introduced. The method not only optimizes economic and environmental benefits but also aligns with "carbon peak and carbon neutrality" objectives. The study begins by presenting a comprehensive low-carbon economic dispatch model, followed by the proposal of the multi-objective chaotic artificial hummingbird algorithm, which is crucial for deriving the Pareto frontier of the low-carbon economic dispatch model. Additionally, we introduce a TOPSIS approach based on combined subjective and objective weights; this approach harnesses the objective data in the Pareto solution set, effectively curbs the subjective biases of dispatchers and facilitates the selection of an optimal system operation plan from the Pareto frontier. Finally, the simulation results highlight the outstanding performance of our method in terms of optimization outcomes, convergence efficiency, and solution diversity. Noteworthy among these results is an 8.8% decrease in system operational economic costs and a 14.2% reduction in carbon emissions.

One study decomposed the multi-objective optimization of a combined cooling, heating, and power system into two single-objective optimization problems and proposed an improved firefly algorithm to solve them, reducing system carbon emissions and economic costs 9. Zhi et al. established a rural integrated energy system with economic performance and system stability as optimization goals and used the simulated annealing algorithm for optimization, ultimately improving system stability 10. While these works have shown prowess in transmuting multi-objective optimization challenges into single-objective ones using objective weight coefficients, simplifying model intricacies and refining computational precision, they are not without pitfalls. The optimization strategies born of these methods often exhibit a mono-dimensional nature: the weight coefficients of the various objectives are difficult to determine in advance, so the final optimization outcomes rely heavily on the dispatchers' subjective expertise. This hampers the system's agility to adapt to diverse objectives, such as operational economy and low carbon, under varying scenarios.

Currently, multi-objective optimization algorithms are gaining traction in RIES dispatch problems due to their ability to optimize multiple objectives concurrently. By doing so, they yield the Pareto optimal frontier, offering dispatchers a diverse range of choices that cater to various goals. Abuelrub et al. combined the development capability of biogeography-based optimization (BBO) and the exploration ability of particle swarm optimization (PSO) to propose a greedy particle swarm and biogeography-based optimization algorithm (GPSBBO), which they applied to solve the multi-objective optimization design of integrated energy systems with energy storage 11. Wu et al.
proposed the multi-objective non-dominated sorting genetic algorithm (MO-NSGA-II) and used it to optimize the multi-objective optimization model of an electricity-heat regional integrated energy system, improving the exergy efficiency and reducing the pollutant emissions of system operation 12. Nazari et al. introduced a novel multi-objective optimization algorithm termed the multi-objective multi-verse optimizer (MOMVO). This algorithm aims to enhance exergy efficiency while concurrently minimizing the system's product cost rate; comprehensive evaluations of the system were conducted from energy, exergy, and exergo-economic perspectives 13. Wang et al. embedded the Tabu search algorithm (TSA) into a multi-objective genetic algorithm, proposing a multi-objective hybrid optimization (MOHO) algorithm, and used this method to solve a dual-layer optimization model of a regional integrated energy system, effectively reducing the system's economic cost while also improving its efficiency 14. Patwal et al. combined a pumped-storage hydrothermal system with wind energy, solar energy, and battery units, using economic cost and environmental cost as optimization objectives, and applied an improved crossover particle swarm optimization (ICPSO) algorithm; the Pareto optimal solutions demonstrated the energy-saving and emission-reduction benefits of the suggested model 15. Wang et al. devised an integrated energy system optimization model rich in renewable energy and introduced a scenario-dominance-based multi-objective evolutionary algorithm (MOEA) to bolster both the system's economics and stability 16. Liu et al. advanced a multi-objective gravitational search optimization (MOGSO) algorithm, leading to diminished energy consumption in the assessed system 17. Li et al. integrated a multi-objective whale optimization (MOWO) algorithm to solve their dual-layer robust game model for a regional energy system, achieving improvements in the system's economic performance and dispatching adaptability 18.

In summary, the utilization of multi-objective optimization algorithms provides solutions to the intricate scheduling challenges faced by regional integrated energy systems, and the resulting Pareto frontier furnishes dispatchers with a spectrum of options for the ultimate scheduling blueprint. However, challenges endure. Some algorithms are susceptible to getting trapped in local optima, especially when dealing with complex, constraint-laden problems 19. Additionally, there may be a shortfall in global search capabilities, warranting improvements in their overall performance 20. These limitations lead to extended problem-solving durations and less-than-ideal optimization outcomes.
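To make the Pareto-frontier concept used throughout this discussion concrete, the following Python sketch filters a small set of candidate dispatch plans down to its non-dominated subset for two minimized objectives (operating cost and carbon emissions). It is a generic illustration of Pareto dominance, not the algorithm used in any of the cited works, and the numerical values are purely illustrative.

```python
def pareto_front(points):
    """Return the non-dominated points of a set of (cost, emissions) pairs,
    where both objectives are to be minimized. Generic illustration of the
    Pareto-front concept; not the dispatch algorithm of the cited studies."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p)))
            and any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front

# Illustrative candidate plans: (operating cost, CO2 emissions)
plans = [(10200, 48.5), (9800, 52.0), (10600, 45.1), (10700, 53.0)]
print(pareto_front(plans))  # the last plan is dominated and is filtered out
```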
In essence, while multi-objective optimization algorithms offer viable solutions to RIES scheduling and the derived Pareto frontier gives dispatchers an array of choices for finalizing the scheduling strategy, a subset of these algorithms remains prone to entrapment in local optima under intricate constraints, others lack robust global search capabilities, and their overall efficacy still requires enhancement; these limitations result in protracted problem-solving durations and less-than-optimal outcomes. To advance the "dual carbon" objectives and address the existing challenges in achieving low-carbon economic operation of regional integrated energy systems, this paper builds upon existing research. It presents a low-carbon economic dispatch model for these systems, and an optimization scheduling technique rooted in a multi-objective algorithm is introduced for model resolution.

The key contributions of our work include:

(1) A holistic optimization model for the electricity-heat-gas subsystems within the regional integrated energy system is established, accommodating both its economic and environmental dimensions. This model accounts for the system's topology and inherent constraints.

(2) A novel multi-objective chaotic artificial hummingbird strategy is introduced, integrating chaotic mapping and dynamic adjustments. Utilizing non-dominance and crowding-distance metrics, this strategy refines the initial model and pinpoints the optimal Pareto frontier for the RIES low-carbon economic dispatch challenge.

(3) We introduce a TOPSIS approach based on combined subjective and objective weights. From the obtained Pareto frontier, the final dispatch plan that meets the system's operating requirements is selected, achieving low-carbon economic dispatch of the system.

(4) Through rigorous simulation, the efficacy of the proposed methodology is validated. The results underline MOCAHA's superior capacity for addressing the RIES low-carbon economic dispatch problem in terms of optimization outcomes, convergence efficiency, and solution diversity, amplifying both the economic and environmental dividends of RIES. Concurrently, the proposed TOPSIS method based on combined subjective and objective weights supports the selection of the final operating plan.

Low-carbon economic dispatch model for RIES

The regional integrated energy system (RIES) stands as a sophisticated energy coupling mechanism, bridging the gap between user-level systems and expansive cross-regional integrated energy systems. This system epitomizes the trajectory of future energy internet advancements 21. The RIES ecosystem comprises three primary entities: energy suppliers, RIES operators, and integrated energy users. Energy suppliers encompass entities such as power companies, natural gas corporations, and heating agencies. RIES operators work to satisfy the diverse energy needs of integrated users, such as electricity, heat, and gas, by orchestrating the energy output of the internal subsystems and sourcing energy from the aforementioned suppliers. A visual representation of the RIES framework is given in Fig.
1. Within the RIES construct, myriad energy supply subsystems intertwine, facilitated by energy conversion devices such as gas turbines, combined heat and power (CHP) units, and power-to-gas (P2G) devices. In RIES, user electricity loads are met by equipment such as coal-fired generators, gas turbines, wind turbine generators, and photovoltaic cells, with energy transactions occurring between the system and the external power grid. The heat produced by gas-fired boilers is used to meet user heat load requirements. CHP units serve as coupling devices between the power supply subsystem and the heat supply subsystem, providing both electricity and heat to users within the system and thereby coupling the power supply and heat supply subsystems. Meanwhile, P2G equipment captures CO2 from the flue gas of fossil fuel combustion and synthesizes methane for the gas units, realizing the coupling of the power supply and gas supply subsystems. By orchestrating the harmonized operation of RIES, we can enhance energy utilization efficiency, curtail energy consumption, and mitigate environmental impacts; such coordination culminates in notable economic and environmental dividends.

Equipment operating characteristics

Optimizing the regional integrated energy system necessitates a thorough understanding of each equipment unit's operational characteristics, the aim being to ensure that every energy-producing unit's output aligns with the system's load demand. Consequently, to maximize the energy efficiency of the RIES, we have formulated mathematical models for the key energy-producing units, including coal-fired generators, combined heat and power units, wind turbines, and other representative equipment.

(1) Coal-fired power generation unit. The coal consumption characteristic of a coal-fired power generation unit is inherently nonlinear; in practical applications, a second-order approximation model is usually used. The coal consumption of the coal-fired power generation units is as shown in Eq. (1), where F_G,t is the coal consumption of coal-fired power generation unit i at time period t, P_i,t is the actual active power output of coal-fired power generation unit i at time period t, α_i, β_i, and γ_i are the coal consumption coefficients of coal-fired power generation unit i, and Ω is the set of coal-fired power generation units.

(2) Combined heat and power unit. The fuel consumption of a combined heat and power (CHP) unit, which can use coal, natural gas, or other fuels to provide both electricity and heat to the system's users, depends on its electrical and thermal power output. The fuel consumption characteristic is shown in Eq. (2), where F_CHP,t is the fuel consumption of CHP unit i at time period t, P_i,t^CHP is the active power output of CHP unit i at time period t, H is the thermal output of CHP unit i at time period t, a_CHP,i, b_CHP,i, c_CHP,i, d_CHP,i, e_CHP,i, and f_CHP,i are the fuel consumption coefficients of CHP unit i, and Ω is the set of CHP units.

(3) Wind power generation unit. The active power of a wind power generator is influenced by factors such as the availability of wind energy, wind speed, the wind turbine power curve, turbine shape, and turbine size. We use a simplified mathematical model in which the active power of the wind power generator is a piecewise function of the wind speed at the wind farm in that time period. The forecasted power output of the wind power generator is shown in Eq. (3), where P_w,t is the forecasted active power output of the wind power generator at time period t; P_max is the maximum power output of the wind power generator; V_CO, V_CI, and V_R represent the cut-out wind speed, cut-in wind speed, and rated wind speed of the wind power generator, respectively; and V_t is the forecasted wind speed of the wind farm at time period t.
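The explicit forms of Eqs. (1)-(3) are not reproduced above, so the following Python sketch illustrates the standard formulations implied by the symbol definitions. The quadratic/bilinear coefficient ordering, the linear ramp between cut-in and rated wind speed, and the function names and numerical values are assumptions for illustration, not the paper's exact equations.

```python
# Illustrative sketch of the unit models implied by Eqs. (1)-(3).
# The quadratic/bilinear forms, coefficient ordering and sample values are
# assumptions based on standard dispatch formulations; symbols follow the text.

def coal_consumption(P, alpha, beta, gamma):
    """Second-order coal consumption of one coal-fired unit in one period (Eq. (1) form)."""
    return alpha * P**2 + beta * P + gamma

def chp_fuel_consumption(P, H, a, b, c, d, e, f):
    """CHP fuel consumption as a function of electric (P) and thermal (H) output (Eq. (2) form)."""
    return a * P**2 + b * P + c * H**2 + d * H + e * P * H + f

def wind_forecast_power(V_t, P_max, V_CI, V_R, V_CO):
    """Piecewise wind power curve (Eq. (3) form): zero below cut-in and beyond cut-out,
    assumed linear ramp between cut-in and rated speed, rated power up to cut-out."""
    if V_t < V_CI or V_t >= V_CO:
        return 0.0
    if V_t < V_R:
        return P_max * (V_t - V_CI) / (V_R - V_CI)
    return P_max

# Example: total coal consumption of a two-unit set in one period
units = [(120.0, 0.004, 2.1, 30.0), (80.0, 0.006, 2.4, 25.0)]  # (P_i,t, alpha_i, beta_i, gamma_i)
F_G = sum(coal_consumption(P, a, b, g) for P, a, b, g in units)
print(F_G, wind_forecast_power(V_t=9.0, P_max=50.0, V_CI=3.0, V_R=12.0, V_CO=25.0))
```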
where P w,t is the forecasted active power output of the wind power generator at time period t; P max is the maximum power output of the wind power generator; V CO , V CI , and V R represent the cut-out wind speed, cut-in wind speed, and rated wind speed of the wind power generator, respectively; and V t is the forecasted wind speed of the wind farm at time period t.

Objective functions
Based on the equipment operating features outlined in "Equipment operating characteristics", we have developed a low-carbon economic dispatch model for RIES. The model encompasses two primary objectives: minimizing system operational costs and carbon emissions. The goal is to optimize both the system's total operational expenses and its carbon footprint harmoniously.

The economic cost of RIES operation mainly considers factors such as the energy consumption cost of energy conversion equipment, the cost of purchasing energy from external energy networks, and the penalty for abandoning renewable energy. The details are given in Eq. (4), where Cost all , Cost units , Cost buy , and Cost PEN represent the total economic cost of system operation, the total cost of operating and maintaining units in the system, the total amount of energy purchase transactions of the system, and the operation penalty, respectively. F unit , Price f , P unit , and μ unit refer to the system's fuel consumption, unit price of fuel, power output of each unit, and unit operation and maintenance cost, respectively. Cost buy,P , Cost buy,H , and Cost buy,G refer to the system's cost of purchasing electricity, heat, and gas, respectively. Cost PEN,imb and Cost PEN,waste represent the penalties for power imbalance and energy waste, respectively.

The carbon emissions of RIES mainly consider the carbon emissions generated by energy conversion equipment and the equivalent carbon emissions of the energy purchased from external energy networks, as shown in Eq. (5), where E all , E units , E buy , E f , E buy,P , E buy,H , and E buy,G represent the total carbon emissions of the regional integrated energy system, carbon emissions from unit operation, equivalent carbon emissions from energy purchases, carbon emissions per unit of fuel, equivalent carbon emissions from electricity purchases, equivalent carbon emissions from heat purchases, and equivalent carbon emissions from gas purchases, respectively. F units represents the fuel consumption of the system's units. P buy , H buy , and G buy represent the system's electricity, heat, and gas purchases, respectively.

Constraints
To maintain the safe and reliable operation of the regional integrated energy system, each unit's functioning is governed by certain constraints. These include the balance between supply and demand, equipment operations, and system power flow. Leveraging the operational characteristics described in "Equipment operating characteristics", we classify system constraints according to their respective subsystems. Specifically, these constraints fall into three categories: those of the electricity subsystem, the natural gas subsystem, and the thermal subsystem. The following section elaborates on the constraints specific to each subsystem.

The constraints on the operation of the electrical grid in a regional integrated energy system primarily include power balance constraints, unit output constraints, unit ramp constraints, and system power flow constraints, as shown in Eq. (6).
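The bodies of Eqs. (4)–(6) are not reproduced in this text. The following hedged sketch is assembled from the variable definitions above and from equation fragments recoverable from the source text (the top-level form of Eq. (4) and the first three rows of Eq. (6) survive as fragments); the detailed expansion of each term follows common practice and may differ from the paper's exact formulation:

\begin{align}
&\min\ \mathrm{Cost}_{all} = \mathrm{Cost}_{units} + \mathrm{Cost}_{buy} + \mathrm{Cost}_{PEN},\quad
\mathrm{Cost}_{buy} = \mathrm{Cost}_{buy,P} + \mathrm{Cost}_{buy,H} + \mathrm{Cost}_{buy,G},\quad
\mathrm{Cost}_{PEN} = \mathrm{Cost}_{PEN,imb} + \mathrm{Cost}_{PEN,waste} && (4)\\
&\min\ E_{all} = E_{units} + E_{buy} = \sum E_f F_{units} + \left(E_{buy,P} P_{buy} + E_{buy,H} H_{buy} + E_{buy,G} G_{buy}\right) && (5)\\
&\begin{cases}
P_{units,t} + P_{buy,t} = P_{load,t}\\
P_{min,i,t} \le P_{i,t} \le P_{max,i,t}\\
0 \le P_{W,t} \le P_{f,W,t}\\
-R_{i,D} \le P_{i,t} - P_{i,t-1} \le R_{i,U}\\
\underline{P}_{ij} \le P_{ij,t} \le \overline{P}_{ij}
\end{cases} && (6)
\end{align}

Here Cost_units is understood to aggregate the fuel term (F_unit · Price_f) and the maintenance term (μ_unit · P_unit) over all units, consistent with the definitions given above.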
where P units,t is the output power of coal-fired power units, CHP units, wind power units, and other units at time t; P load,t is the system's electricity demand at time t; P max,i,t and P min,i,t are the upper and lower output limits of unit i, respectively; R i,U and R i,D are the ramp-up and ramp-down rates of unit i, respectively; P ij,t is the power flow value of branch ij at time t; and P ij,min and P ij,max are the lower and upper power flow limits of branch ij, respectively.

The operating constraints of the gas network in the RIES mainly include gas flow balance constraints, gas flow constraints, etc., as shown in Eq. (7), where w k min and w k max represent the lower and upper limits of gas supply from gas well k, respectively; w k,t denotes the gas supply from gas well k in period t; f mn min and f mn max represent the lower and upper limits of pipeline gas flow, respectively; and f mn,t indicates the gas flow in pipeline mn in period t.

The operational constraints of the RIES's heat network mainly include heat network power balance constraints and unit output constraints, as shown in Eq. (8), where H buy,t is the purchased heating power at time t; H CHP,t is the heat output power of the CHP unit at time t; H load,t is the heat load demand of the system at time t; H max,CHP and H min,CHP are the upper and lower limits of the heat output power of the CHP unit, respectively; and R i,U and R i,D are the ramp-up and ramp-down rates of unit i, respectively.

Low-carbon economic dispatch method for RIES based on MOCAHA
The RIES low-carbon economic dispatch challenge is inherently intricate and non-convex. Its solution domain is typically nonlinear, discrete, and replete with multiple local optima. Confronted with high-constraint RIES scheduling, some optimization algorithms are susceptible to entrapment within these local optima, hindering identification of the global optimal solution. Moreover, various optimization strategies exhibit diminished global search capabilities, resulting in an incomplete traversal of the solution landscape and potentially overlooking the true optimum. Such shortcomings can culminate in protracted problem-solving durations, inhibiting timely solution attainment. Even when solutions are found, they might stray considerably from the optimum. Intrinsically, the RIES low-carbon economic dispatch is a multi-objective problem. Contemporary dispatching approaches, in tackling multi-objective dilemmas, often resort to subjective weighting mechanisms. Such methods entail schedulers assigning weights to distinct objectives grounded in their expertise and discretion. Nonetheless, this methodology is not without pitfalls. It is inherently reliant on the dispatcher's subjective experience, potentially yielding inconsistent outcomes. Furthermore, it may fall short in encapsulating the intricate interrelations and dependencies among objectives, rendering the pursuit of an authentically optimal outcome arduous.

The performance and outcome of a scheduling plan can be severely impacted by the issues previously discussed, which subsequently influence the overall operational efficiency of the energy system. Addressing these challenges, we introduce a novel RIES low-carbon economic scheduling approach designed to enhance the coordinated optimization of the RIES system. This aims to reduce both operational costs and carbon emissions. The complete procedure of this proposed method is illustrated in Fig. 2.
Our method incorporates two main components tailored to solve the RIES low-carbon economic scheduling problem. Firstly, we focus on the application and refinement of AHA, resulting in the development of the multi-objective chaotic artificial hummingbird algorithm. Once the RIES operational data is fed into this algorithm, it conducts a thorough global search in the optimization space. This exhaustive search considers the reduction of economic cost and carbon emissions simultaneously. The outcome of this process is a suite of Pareto optimal solutions, constituting the Pareto frontier for the RIES economic dispatch issue.

The second component revolves around the application and enhancement of the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS). Specifically, we have incorporated a TOPSIS approach that merges both subjective and objective criteria (TOPSIS-SOCW). This strategy evaluates the objective attributes of the Pareto frontier using the entropy weight method and simultaneously integrates the insights and judgment of system scheduling experts. As a result, TOPSIS-SOCW offers a holistic ranking of all potential solutions within the Pareto frontier. This culminates in the identification of an optimal solution, balancing economic and low-carbon priorities, which forms the finalized operational plan for RIES. In essence, this method, combining two complementary algorithmic steps, addresses the RIES economic dispatch challenge adeptly, ensuring the system's optimal performance.

Traditional artificial hummingbird algorithm
Optimization algorithms play a pivotal role in addressing the RIES scheduling challenge. They assist decision-makers in identifying operation plans that adhere to system operational constraints and optimize the system's objectives. Among these algorithms, AHA stands out due to its inherent flexibility in parameter settings. Notably, it demands fewer adjustable parameters compared to many of its counterparts. This streamlined configuration not only facilitates ease of implementation and debugging but also mitigates the potential for performance drop-offs stemming from unsuitable parameter choices. Additionally, AHA is designed with effective exploration and exploitation techniques, enabling it to rapidly pinpoint promising solutions within an expansive search domain and to delve deeply into these solutions for refined results. Crucially, AHA possesses the capability to navigate complex, non-convex terrains without frequently succumbing to local optima traps. Collectively, these attributes render AHA especially apt for tackling RIES optimization scheduling issues.

The Artificial Hummingbird Algorithm (AHA) is a novel meta-heuristic optimization algorithm proposed by Zhao et al., inspired by the nectar-foraging behavior of hummingbirds 22,23. What sets hummingbirds apart is their astonishing foraging memory. Hummingbirds have a hippocampus in their brains that is much larger than that of any other bird, playing a critical role in learning and memory. AHA simulates three specific flight skills of hummingbirds in nature, namely axial flight, diagonal flight, and omnidirectional flight, as well as three intelligent foraging strategies: guided foraging, territorial foraging, and migratory foraging. By introducing a visit table, AHA realizes the memory function of hummingbirds in finding and selecting food sources, ultimately achieving the goal of solving optimization problems.
In AHA, n hummingbirds move in a D-dimensional search space to find the optimal solution to the problem being optimized. The position of each hummingbird individual is the food source it visits, representing a feasible solution to the problem being optimized, denoted as X i = (x i,1 , x i,2 , ..., x i,D ). The nectar replenishment rate of the food source represents the fitness value corresponding to the feasible solution. The AHA algorithm steps are as follows:

Step 1: Initialization. AHA places n hummingbirds on n food sources. The positions of the food sources are randomly initialized according to Eq. (9), where x i represents the position of the ith food source; n represents the population size; S u and S L represent the upper and lower limits of the search space, respectively; and r represents a random number uniformly distributed in [0, 1]. The food source visitation table is initialized according to Eq. (10), where V i,j is the value in row i and column j of visit table V; i = j means the hummingbird is feeding at its specific food source; and i ≠ j indicates that the jth food source has been visited by the ith hummingbird in the current iteration.

Step 2: Guided foraging. In order to obtain more nectar, hummingbirds will visit the food source with the highest nectar replenishment rate among food sources at the same visit level. During the foraging process, a direction switching vector is introduced to describe three skills: axial flight, diagonal flight, and omnidirectional flight. These flight patterns can be extended to a d-dimensional space. The axial flight is defined in Eq. (11), the diagonal flight in Eq. (12), and the omnidirectional flight in Eq. (13), where D(i) (i = 1, 2, …, d) represents the flying skill; rand([1, d]) indicates generating a random integer from 1 to d; randperm(k) denotes creating a random permutation of the integers from 1 to k; r 1 represents a random number uniformly distributed in (0, 1]; and d represents the dimension of the problem.

The mathematical descriptions of these flying skills allow hummingbirds to visit target food sources and obtain candidate food sources. The position update of the candidate food source is mathematically described in Eq. (14), where v i (t + 1) represents the position of the ith candidate food source at iteration t + 1; x i (t) represents the position of the ith food source at iteration t; x i,tar (t) represents the position of the target food source that the ith hummingbird will visit; and a is a guiding factor, which follows a normal distribution with mean 0 and standard deviation 1. The position update of the ith food source during guided foraging is shown in Eq. (15), where x i (t + 1) represents the position of the ith food source at iteration (t + 1); f(x) represents the fitness value of the function; and the other parameters have the same meanings as above.

Step 3: Territorial foraging. After visiting the target food source, a hummingbird is likely to move to nearby areas within its territory to search for new food sources rather than visit other existing food sources. The mathematical description of the position update of the candidate food source in the nearby area is shown in Eq. (16), where b is the territorial factor, which follows a normal distribution with mean 0 and standard deviation 1; the other parameters have the same meanings as above.
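The equation bodies referenced above do not survive in this text. As a hedged reference, the forms below follow the AHA update rules as commonly published in refs. 22,23 and are consistent with the variable definitions in this section; they may differ in detail from the paper's exact equations:

\begin{align}
x_i &= S_L + r\,(S_u - S_L) && (9)\\
D^{(i)} &= \begin{cases} 1, & i = \mathrm{rand}([1, d])\\ 0, & \text{otherwise} \end{cases} && (11)\\
v_i(t+1) &= x_{i,tar}(t) + a\, D\,\big(x_i(t) - x_{i,tar}(t)\big), \quad a \sim N(0,1) && (14)\\
x_i(t+1) &= \begin{cases} x_i(t), & f\big(x_i(t)\big) \le f\big(v_i(t+1)\big)\\ v_i(t+1), & \text{otherwise} \end{cases} && (15)\\
v_i(t+1) &= x_i(t) + b\, D\, x_i(t), \quad b \sim N(0,1) && (16)
\end{align}

Diagonal flight (Eq. (12)) activates a random subset of dimensions and omnidirectional flight (Eq. (13)) activates all dimensions of the direction vector D, rather than the single dimension used in axial flight.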
Step 4: Migration foraging. When the area frequently visited by hummingbirds is lacking in food, hummingbirds usually migrate to more distant food source areas to forage. The mathematical description of the position update of the food source with the worst nectar replenishment rate is shown in Eq. (17), where x wor (t + 1) is the position of the food source with the worst nectar replenishment rate in the population at iteration (t + 1); the other parameters have the same meanings as above.

Optimization method of low-carbon economy based on the multi-objective chaotic artificial hummingbird algorithm
The traditional artificial hummingbird algorithm predominantly targets single-objective optimization challenges. When confronted with multi-objective optimization dilemmas, it struggles to harmonize conflicting optimization goals 22. Addressing this shortcoming, we have introduced a multi-objective optimization framework that leans on non-dominated sorting and crowding distance sorting. By leveraging chaotic mapping, we generate the initial population, and a dynamic adjustment approach is integrated to bolster the algorithm's optimization prowess. Building on these enhancements, we advocate the multi-objective chaotic artificial hummingbird algorithm (MOCAHA) as an apt solution for the low-carbon economic operational quandary present in regional integrated energy systems. The specific steps of MOCAHA are as follows:

Step 1: Input the forecasted load and equipment parameters for the RIES system and initialize the relevant parameters.

Step 2: Initialize the hummingbird population using the Logistic-Tent chaotic mapping method.

Step 3: Evaluate the objective functions for all food sources, followed by non-dominated sorting, where each food source corresponds to a system operation scheme.

Step 4: Incorporate the sorted operation schemes into the external archive and use the crowding distance sorting algorithm to maintain the archive.

Step 5: Perform operations such as guided foraging, territorial foraging, and migration foraging for all hummingbird individuals based on their dominance rank and the number of iterations. Update the visitation table accordingly.

Step 6: Iterate through Steps 3 to 5 until the termination conditions are met and subsequently output the final archive. The external archive comprises a series of economical and low-carbon operation schemes.

Construction of the multi-objective artificial hummingbird algorithm
The challenge of optimizing low-carbon economic operations within RIES fundamentally hinges on multi-objective optimization. By contrast, the traditional AHA primarily functions as a single-objective optimization algorithm. To adapt AHA to this multifaceted issue, we have integrated a non-dominated sorting algorithm 24. Alongside this, we have incorporated an external archive, rooted in the crowding distance sorting method, culminating in the development of a multi-objective artificial hummingbird algorithm.
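Before detailing the multi-objective extensions, the hummingbird moves that MOCAHA reuses in Step 5 can be sketched in code. This is a minimal illustration following the commonly published AHA update rules; all names are illustrative, and the visit table, non-dominated sorting, and archive bookkeeping are deliberately omitted.

import numpy as np

rng = np.random.default_rng(0)

def axial_direction(d):
    # Axial flight: exactly one randomly chosen dimension is active.
    D = np.zeros(d)
    D[rng.integers(d)] = 1.0
    return D

def guided_forage(x, x_target):
    # Guided foraging: move relative to a better "target" food source.
    a = rng.normal()                      # guiding factor ~ N(0, 1)
    return x_target + a * axial_direction(x.size) * (x - x_target)

def territorial_forage(x):
    # Territorial foraging: local search around the current food source.
    b = rng.normal()                      # territorial factor ~ N(0, 1)
    return x + b * axial_direction(x.size) * x

def migration(lower, upper):
    # Migration foraging: re-seed the worst food source in the search space.
    return lower + rng.random(lower.size) * (upper - lower)

# Tiny demonstration on a 4-dimensional toy dispatch vector.
low, up = np.zeros(4), np.ones(4) * 100.0
x, x_tar = rng.uniform(low, up), rng.uniform(low, up)
print(guided_forage(x, x_tar))
print(territorial_forage(x))
print(migration(low, up))

In the full algorithm, each candidate position produced by these moves is kept only if it improves on (or, in the multi-objective setting, is not dominated by) the current food source.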
Firstly, the non-dominated sorting algorithm was integrated to ascertain the dominance level of each RIES optimization scheduling solution. Subsequently, an external archive grounded in the crowding distance sorting approach was incorporated. Throughout the algorithm's iterative phase, optimal non-dominated solutions are consistently retained within this external archive. To effectively manage the archive size during the optimization progression, a crowding distance-based sorting methodology is utilized, which serves to enhance both solution diversity and the convergence speed of the population.

(1) Non-dominated Sorting. The core idea of non-dominated sorting is to rank all solutions based on their dominance relationships in the objective space. A dominance relation means that one solution is no worse than another solution in all objectives and strictly better in at least one objective. For the multi-objective optimization problem of RIES, with n objective functions (where n is a positive integer), the dominance relation is expressed by Eq. (18), where X a and X b are any two given scheduling solutions, and f i (x) represents the fitness of the ith objective function. If Eq. (18) holds, then X b is said to dominate X a . By comparing the dominance relationships of all solutions, they can be divided into different dominance levels F (F = 1, 2, …, n). Figure 3 shows a schematic diagram of non-dominated sorting with two objectives. If a decision variable is not dominated by any other decision variable, it is called a non-dominated solution. All non-dominated solutions in the solution set form the Pareto solution set.

Non-dominated sorting can achieve a reasonable balance between multiple conflicting optimization objectives. At the same time, the dominance level obtained for each operation plan indicates the importance of each scheduling plan in the current population, providing an important basis for the dynamic adjustment of algorithm factors in "Dynamic adjustment factor".

(2) The External Archive Based on Crowding Distance Sorting. We introduced an external archive based on crowding distance sorting; after each iteration, the operation plans in the Pareto solution set obtained through non-dominated sorting are stored in the external archive 25. At the same time, because the scheduling plans in the external archive change continuously as the iterations progress, we use the crowding distance sorting algorithm to maintain the size of the external archive. This enhances the diversity of the population within the external archive, prevents the optimization process from falling into local optima prematurely, and provides more choices and a stronger decision-making basis for subsequently selecting the optimal compromise scheduling plan from the Pareto front.

Crowding distance sorting is a non-parametric method that can enhance the diversity of the Pareto solution set. It maintains a fixed-size external archive by removing overly redundant solutions with smaller crowding distances. The specific steps are as follows:

Step 1: For each individual in the Pareto front, initialize its crowding distance to 0.

Step 2: For each objective function, sort the individuals according to that objective function. Set the crowding distances of the first and last individuals in the sorted list to infinity, marking their boundary positions.
Step 3: For the remaining individuals, calculate their crowding distance. Compute the distance between each individual and its closest neighbors on each objective, i.e., the difference between the objective values of x i+1 and x i−1 , and normalize it. The calculation formula is shown in Eq. (19), where d k (x i ) represents the crowding distance of the ith individual on the kth objective, x i+1 and x i−1 are the two objective values adjacent to x i , and max(f k ) and min(f k ) represent the objective values of the first and last individuals on the kth objective.

Step 4: Traverse each objective, sum the normalized crowding distances on each objective, and obtain the final crowding distance D(x i ) for x i .

Step 5: Reorder the individuals in the Pareto solution set according to the crowding distance and remove the individuals with the lowest rank.

Step 6: Repeat Steps 3 to 5 until the number of individuals in the Pareto solution set meets the requirements.

By introducing the multi-objective optimization framework based on non-dominated sorting and crowding distance sorting into the traditional AHA, we obtain MOAHA, which expands the applicability of AHA. It enables AHA to solve the RIES low-carbon economic scheduling problem and to obtain the optimal Pareto front that describes the RIES low-carbon economic scheduling problem. Furthermore, MOAHA can find the best balance between multiple optimization objectives for RIES, thereby improving the overall performance of RIES.

Population initialization based on chaotic mapping
For the traditional AHA, the initial population is highly random. It is often far from the optimal solution, leading to a high degree of randomness and instability in the optimization results, which affects the search efficiency for the optimal solution. This impact is even more pronounced when solving complex RIES low-carbon economic operation problems. Therefore, we introduce chaotic mapping to dynamically and uniformly generate the initial population within the search space, improving population diversity and uniform traversal 26.

Chaotic mapping is an optimization method based on chaotic mapping rules, specifically used to describe the complex chaotic behavior generated by nonlinear systems. This method has a series of remarkable features such as determinism, apparent randomness, high sensitivity to initial values, and non-periodicity. It is precisely because of these unique attributes that chaotic mapping plays an important role in the population initialization stage of meta-heuristic algorithms 27. The working principle first involves mapping the variables to be optimized onto the value range of the chaotic variables. Optimization is then performed through the characteristics of the chaotic variables, and finally the optimized solutions found in the chaotic space are linearly transferred back to the actual optimization space.

Specifically, chaotic systems can be divided into two categories: low-dimensional chaos and high-dimensional chaos. High-dimensional chaotic systems stand out for their complex structure, numerous control parameters, and relatively high computational complexity. In contrast, low-dimensional chaotic systems offer simpler structures, fewer control parameters, and more intuitive implementation methods. However, low-dimensional chaotic systems also have some problems, such as the finiteness of chaotic behavior, the discontinuity of chaotic intervals, and the non-uniform data distribution of generated chaotic sequences.
Different chaotic mappings have different properties, and their effects on optimization algorithms also vary significantly. For example, the Cubic and Chebyshev mappings are sensitive to initial values, but their mapped values are unevenly distributed in the interval [0,1]. The Logistic mapping has strong spatial traversal, but the system contains blank areas and aggregation areas. The Sine mapping has the advantages of a simple structure and high efficiency, but suffers from an uneven probability density distribution. The Sinusoidal mapping has a nonlinear feedback mechanism and sensitivity to initial values, but exhibits blank areas and fixed-point problems. The Tent mapping has good correlation and a uniform probability density distribution, but it easily decays into periodic sequences in the later stages of iteration. The common types of chaotic mappings are shown in Table 1.

Therefore, considering the limitations of ordinary chaotic mappings, we chose to combine the Logistic chaotic mapping and the Tent chaotic mapping to form the Logistic-Tent chaotic mapping, and we use it as the food source initialization method for AHA to enhance algorithm performance and improve the efficiency of solving the regional integrated energy optimization scheduling problem. Figure 4 shows the distribution of the Logistic-Tent chaotic mapping, and its formula is shown in Eq. (21), where the initial value x n is in the range (0,1), the control parameter r is in the range (0,4), and mod is the modulus operation.

Dynamic adjustment factor
On the other hand, the traditional AHA relies on random numbers to select foraging methods, which results in slower search speed and unstable search capability in the optimization space, affecting the efficiency of searching for scheduling solutions. To enhance both the global and local search capabilities of AHA and improve the algorithm's optimization speed, we introduce a probability dynamic adjustment factor P. It adjusts the selection probability of guided foraging or territorial foraging based on the dominance level of the scheduling solution, encouraging birds with lower fitness in the population to choose guided foraging with higher probability, thus enhancing global search capability. It also encourages birds with higher fitness in the population to choose territorial foraging with higher probability, enhancing local search capability. In the probability dynamic adjustment function, P i t+1 is the probability dynamic adjustment value of the ith bird in the (t + 1)th iteration, P max and P min are the preset maximum and minimum probabilities, F(X i t ) is the dominance level of the ith bird in the tth iteration, F t max and F t min are the highest and lowest dominance levels in the tth iteration, and F t mid is the median dominance level in the tth iteration.

We control the probability of guided foraging and territorial foraging using the probability dynamic adjustment factor. When P i t+1 ≤ P t rand , guided foraging is performed; otherwise, territorial foraging is carried out, where P t rand is a random number within the range of 0 to 1. In summary, the proposed MOCAHA can solve multi-objective optimization problems. We have made improvements to address issues with the traditional AHA, such as high randomness in the initial population, slow optimization speed, and unstable search capabilities. These improvements make the multi-objective chaotic artificial hummingbird algorithm suitable for solving the low-carbon economic dispatch problem in RIES.
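For reference, the body of Eq. (21) is not reproduced above. A commonly used form of the combined Logistic-Tent map, consistent with the parameter ranges just described but possibly differing in detail from the paper's exact expression, is:

\[
x_{n+1} =
\begin{cases}
\left( r\, x_n (1 - x_n) + \dfrac{(4 - r)\, x_n}{2} \right) \bmod 1, & x_n < 0.5\\[1.5ex]
\left( r\, x_n (1 - x_n) + \dfrac{(4 - r)(1 - x_n)}{2} \right) \bmod 1, & x_n \ge 0.5
\end{cases}
\]

A chaotic sequence generated in this way is then linearly rescaled to the bounds [S_L, S_u] of each decision variable to seed the initial food sources in Step 2 of MOCAHA.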
Scheduling decision method based on TOPSIS with subjective and objective combined weights
Upon deriving the Pareto front, which delineates the optimal solution set for the multi-objective optimization scheduling of the RIES using MOCAHA, the system's real-time operational prerequisites must be considered to discern a compromise solution. TOPSIS is widely used in multi-objective optimization decision-making. It can fully utilize the original data information and accurately reflect the gap between the various decision-making schemes. TOPSIS takes the results in the optimal solution set as decision-making schemes, calculates the relative closeness of each to the ideal solution, and takes the solution with the greatest relative closeness as the effective compromise solution. Nonetheless, conventional TOPSIS harbors two principal shortcomings: firstly, the metric weights are predominantly dictated by expert opinion, introducing an element of subjectivity. Secondly, its reliance on Euclidean distance for computing the disparity between each proposal and the ideal tends to obscure the intrinsic merits and drawbacks of individual plans. Specifically, when multiple metrics are at play, a scheme proximate to the positive ideal solution might, paradoxically, also be near its negative counterpart when viewed through the lens of Euclidean distance.

We therefore propose a TOPSIS based on subjective and objective combined weights. After obtaining the Pareto optimal solution set, we use an entropy-weight TOPSIS that incorporates a subjective weight correction. First, an evaluation model is constructed over the solution set, from which the optimal compromise solution will be selected. A comprehensive evaluation is performed based on the entropy weight method, and the objective weights are sought, considering the differences among the Pareto optimal solutions. Then, the subjective weights determined by the dispatching experts' experience are combined to make appropriate corrections to the objective weights, yielding the subjective and objective combined weights. When calculating the relative closeness, this method considers the entropy weights and the subjective weights of the objective functions comprehensively. It takes into account the experience of scheduling decision-making experts while objectively reflecting the importance of the two objective functions. On this basis, the Pareto optimal solution with the maximum relative closeness value is selected as the optimal compromise solution. Based on the data of the N optimal points on the Pareto front, a model is established and a comprehensive evaluation is conducted. The specific steps are as follows:

Step 1: Establish an evaluation matrix. For the two objective functions we established and the N Pareto optimal solutions, an evaluation matrix R′ is established, where, for i taking values of 1 and 2, r′ ij is the value of the ith objective function corresponding to the jth Pareto optimal solution.

Step 2: Normalization of data. Since there are differences in dimensions and orders of magnitude between the objective functions, the original data are normalized, where r ij is the normalized value of the ith objective function corresponding to the jth Pareto optimal solution, and max(r′ ij ) and min(r′ ij ) are the maximum and minimum values of the ith row in R′, respectively. The normalized evaluation matrix is denoted R.

Step 3: Use the entropy weight method to calculate the objective weight of each objective function. First, calculate the entropy of each objective function by Eq. (26),
where e i is the entropy of the ith objective. The entropy weight is determined by the degree of difference among the solutions under this objective, which represents the amount of information provided by this objective. The entropy weight α i of the ith objective is then calculated from e i .

Step 4: Use the expert weights drawn from dispatching experience to obtain the subjective and objective combined weights, where λ i is the subjective weight set by the scheduling expert and ω i is the subjective and objective combined weight. As can be seen, ω i considers both the work experience of dispatchers and the entropy weights that objectively reflect the degree of difference between different solutions on the Pareto front.

Step 5: Establish a weighted normalized evaluation matrix R. In the matrix R, the maximum and minimum values of the ith row correspond to the most ideal and least ideal situations for the ith objective, respectively.

Step 6: Determine the positive and negative ideal points. The positive ideal point F + and the negative ideal point F − are calculated from the weighted matrix, where R iN is the value in the ith row and Nth column of the matrix R, and min(•) and max(•) indicate the minimum and maximum values, respectively.

Step 7: Calculate the relative closeness of each Pareto optimal solution, where T j is the relative closeness of the jth solution, and D j + and D j − are the Euclidean distances from the jth solution to the positive ideal point and the negative ideal point, respectively. The higher the relative closeness value, the closer the solution is to the positive ideal point. Therefore, the Pareto optimal solution with the highest relative closeness is selected as the optimal compromise solution.

In summary, to address the inherent challenges in RIES low-carbon economic dispatch, such as susceptibility to local optima, slow solution speed, and unpredictable outcomes, we have advanced a RIES low-carbon economic dispatch method based on MOCAHA. Firstly, this method employs the multi-objective chaotic artificial hummingbird algorithm to solve the RIES low-carbon economic problem, obtaining a uniformly distributed Pareto optimal frontier and thereby optimizing both the economic and low-carbon operation of RIES. Then, based on the proposed TOPSIS-SOCW, and by integrating the objective features of the Pareto solution set with the expert judgment of dispatchers, an optimal dispatch scheme that balances the environmental protection and economic aspects of RIES is obtained. In essence, our approach not only provides a solution to the low-carbon economic scheduling challenges of RIES but also fortifies the system's energy efficiency, achieving a tangible low-carbon economic operation for RIES.

Case study
Introduction to the simulation case
This paper primarily investigates a RIES located in the northeastern region of China. The climate characteristics of this region are hot summers and cold winters, with a particularly prominent demand for thermal energy. Additionally, the northeastern region possesses a developed natural gas pipeline system and abundant wind power resources, offering immense potential for the development of RIES.
To enhance the energy efficiency of the Northeast's regional integrated energy system holistically, we adopted a comprehensive simulation case. This comprises an IEEE 6-node electric grid, coupled with a 6-node gas grid and a 6-node thermal grid 28,29. Figure 5 delineates the simulation system's topology. This simulation case adequately considers the coupling relationships among the electric power system, thermal system, and natural gas system. It also introduces a significant amount of wind power resources to simulate the actual energy supply and demand characteristics of the northeastern region. By leveraging this simulation case, we aspire to probe deeper into the operational nuances and optimization strategies of RIES specific to the Northeast. In doing so, we aim to offer robust theoretical insights that champion efficient energy consumption and further the region's sustainable progress.

In this case, the electrical grid, heat network, and gas network are interconnected through three CHP units that run on gas. Additionally, the electrical grid incorporates one coal-fired power unit and one wind power unit. The parameters of the units used in this case are detailed in Table 2. The load forecast curve, wind power plant output forecast curve, and other parameters are shown in Fig. 6. The time-of-use purchase price of electricity from the main grid for users is shown in Table 3. The gas source purchase price is 1.8$/kcf, and the heat source purchase price is 20$/MW•h. The penalty price for curtailment of wind power is 50$/MW•h. The penalty prices for reducing electric load and gas load are 100$/MW•h and 5$/kcf, respectively. Other system parameters can be found in Refs. 30,31.

We focus on the day-ahead scheduling of the system, with a scheduling period of 24 h and a scheduling time step set at 1 h. The simulation experiments are conducted on a computer with a 2.40 GHz, 16 GB RAM, AMD Ryzen R7-5800H CPU, running Windows 11, and the experiments are carried out on the MATLAB R2020b software platform.
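To make the purchase and penalty terms of the cost objective concrete, the toy calculation below applies the prices listed above to one hypothetical hour. The hourly schedule quantities and the time-of-use electricity price are invented for illustration (the Table 3 values are not reproduced here), and the structure is a simplification of the full 24-hour model, which also includes fuel and maintenance costs.

# Toy illustration of one hour's contribution to Cost_buy + Cost_PEN,
# using the unit prices stated in the case study; schedule values are invented.
elec_price = 120.0          # $/MW·h, hypothetical time-of-use price for this hour
gas_price = 1.8             # $/kcf, gas source purchase price
heat_price = 20.0           # $/MW·h, heat source purchase price
wind_curtail_pen = 50.0     # $/MW·h, wind curtailment penalty
elec_shed_pen = 100.0       # $/MW·h, electric load shedding penalty
gas_shed_pen = 5.0          # $/kcf, gas load shedding penalty

# Hypothetical hourly purchases and penalties.
p_buy_mwh, h_buy_mwh, g_buy_kcf = 12.0, 8.0, 30.0
wind_curtailed_mwh, elec_shed_mwh, gas_shed_kcf = 1.5, 0.0, 0.0

cost_buy = p_buy_mwh * elec_price + h_buy_mwh * heat_price + g_buy_kcf * gas_price
cost_pen = (wind_curtailed_mwh * wind_curtail_pen
            + elec_shed_mwh * elec_shed_pen
            + gas_shed_kcf * gas_shed_pen)
print(round(cost_buy + cost_pen, 2))   # this hour's purchase-plus-penalty cost in $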
Comparison of multi-objective optimization algorithm performance
In this paper, we benchmarked our proposed multi-objective chaotic artificial hummingbird algorithm (MOCAHA) against four renowned algorithms: the multi-objective artificial hummingbird algorithm (MOAHA), the multi-objective multi-verse optimization (MOMVO) algorithm 13, the multi-objective gray wolf optimization (MOGWO) algorithm 28, and the multi-objective particle swarm optimization (MOPSO) algorithm 29. Each algorithm was tasked with solving the case study model outlined in this paper. To maintain consistency in the evaluation, we standardized the number of iterations 3,32,33 for all algorithms at 1000, set the population size at 100, and kept the external archive size at 50. Table 4 presents the statistical results of the Pareto frontier obtained by the five algorithms after solving the regional integrated energy system's low-carbon economic operation problem. According to the table, compared to MOAHA, MOCAHA can enhance the optimization effect on economic cost while maintaining the optimization effect on carbon emissions, improving the reduction of economic costs by an average of 8.87%. Compared to MOMVO, MOCAHA has an advantage in reducing carbon emissions, with an average improvement of 5.55%. Compared to MOGWO, MOCAHA has a significant advantage in reducing economic costs, with an average improvement of 30.62%. Compared to MOPSO, MOCAHA has significant advantages in reducing both economic costs and carbon emissions, with average improvements of 26.60% and 11.13%, respectively. In addition, among the five algorithms, MOCAHA has the largest difference between its best and worst solutions, indicating a broader Pareto solution set distribution and stronger global search capability. Simultaneously, MOCAHA has the smallest best solutions in both reducing economic costs and carbon emissions, indicating the strongest local search capability among the algorithms. Therefore, MOCAHA can obtain a more reasonable set of Pareto non-dominated solutions, providing decision-makers with multiple candidate plans with different preferences.

The best, average, and worst values of the five algorithms under each objective are ranked, and the average ranks are taken. The average rankings of each algorithm are shown in Fig. 7. It can be seen that the average rank of MOCAHA is 1.5, ranking first among the five algorithms.

When addressing practical multi-objective optimization problems, the superior performance of a multi-objective optimization algorithm is reflected in both the breadth of the Pareto solution set distribution and its convergence speed 34. A widely distributed Pareto solution set implies that the algorithm is more adept at capturing the diversity and complexity within the problem space, providing decision-makers with a more comprehensive set of choices. This is particularly crucial in practical scenarios where the complexity and diversity of problems often involve multiple competitive objectives. Figure 8 shows the comparison of the final Pareto frontiers obtained by the five algorithms in the simulation system of this paper. The Pareto frontier of each algorithm is formed by the solutions in its external archive at the end of its iterations. Analysis shows that MOCAHA, when solving the low-carbon and economic operation problem of the regional integrated energy system, can dominate the solution sets obtained by all other algorithms and has the best optimization effect in terms of economic cost and carbon emissions. Furthermore, as can be seen from Fig.
8, under the same conditions of number of iterations, population size, and external archive size, the Pareto frontier obtained by MOCAHA has a wider extension range and a more uniform distribution.

Simultaneously, excellent convergence speed is equally vital for the practical application of multi-objective optimization algorithms 35. A fast and stable convergence capability enables the algorithm to swiftly identify solution sets that are close to the optimal under limited computational resources, thereby enhancing the algorithm's practicality and efficiency. In dealing with real-world problems, the ability to rapidly obtain high-quality solutions is paramount for the practical use of optimization algorithms, especially in contexts involving complex decision-making and resource constraints. Figure 9 is obtained by saving the population in the archive set every 100 iterations for each algorithm. Analysis shows that, after introducing the Logistic-Tent chaotic mapping-based population initialization and the dynamic adjustment factor, our proposed MOCAHA has better convergence in the early stage of the iteration than MOGWO, MOMVO, and MOPSO, ranking second only to MOAHA. However, in the later stage of the iteration, MOCAHA's optimization ability ranks first among the five algorithms and has a clear advantage.

In addition, we ran each of the five algorithms 20 times randomly, combining the objective values of the 100 sets of Pareto non-dominated solutions obtained to determine the Pareto frontier of the combined solution set, which is used as the approximate Pareto optimal frontier for all algorithms in this multi-objective optimization problem 36. Then, we used the convergence, diversity, and comprehensive performance metrics from 37 to validate the overall performance of the RIES low-carbon economic operation solutions obtained by the five algorithms 38.

Convergence metrics usually calculate the distance between the solution set obtained by the multi-objective optimization algorithm and the Pareto approximate frontier to reflect the closeness of the solution set to the real Pareto frontier. Different metrics choose different types of distances, and most require a reference set for comparison, i.e., an external reference set or another solution set. The better the convergence of the solution set, the better the convergence of the multi-objective optimization algorithm used. We used Generational Distance (GD) and epsilon (ε) to test and compare the convergence of the algorithms, and the results are shown in Table 5. According to Table 5, MOCAHA has the best performance in both the GD and ε indicators for both the best and average values, ranking first among the five algorithms. Its maximum value of GD is only higher than that of MOMVO, and its maximum value of ε is only higher than that of MOAHA, ranking second in both cases. In summary, among the five algorithms, MOCAHA has the best convergence performance in solving the low-carbon economic operation problem of regional integrated energy systems.
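For readers unfamiliar with the convergence indicators in Table 5, a commonly used definition of Generational Distance is given below. This is a standard textbook form; the paper's exact variant, and the precise ε-indicator definition it uses, are not reproduced in the text:

\[
GD(S, P^{*}) = \frac{1}{|S|} \left( \sum_{s \in S} \min_{p \in P^{*}} \lVert s - p \rVert^{2} \right)^{1/2}
\]

where S is the solution set returned by an algorithm and P* is the approximate Pareto optimal frontier assembled from the combined runs; smaller values indicate better convergence. The Inverted Generational Distance (IGD) used among the comprehensive metrics in Table 7 swaps the roles of S and P*, so it penalizes both poor convergence and poor coverage of the reference front.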
Diversity metrics measure the distribution and extent of the solution set obtained by the multi-objective optimization algorithm. The more uniformly the solution set is distributed, the better its distribution; the denser the solution set is in the boundary area of the Pareto frontier, the better its extent. We use Δ, Coverage over the Pareto Front (CPF), and Maximum Spread (MS) to test and compare the diversity of the algorithms' solution sets. A smaller Δ means better diversity of the solution set, while larger CPF and MS values mean better diversity of the solution set. The results are shown in Table 6. According to Table 6, the average performance of the solution set obtained by MOCAHA is the best in both Δ and CPF. In MS, the average performance of MOCAHA is slightly worse than MOGWO and comparable to MOAHA, but its best value ranks first among the five algorithms. Therefore, the solution set obtained by MOCAHA has good distribution and extent and can meet the diversified needs of the solution set.

The comprehensive metrics can simultaneously measure the convergence and diversity of the solution set. Suppose there are two solution sets, S1 and S2. If S1 is better than S2 in the value of a certain comprehensive metric, it means that S1 is better than S2 in convergence or diversity, and it may be better than S2 in both aspects at the same time. We use Hyper Volume (HV), Inverted Generational Distance (IGD), IGDp, and Δp as four indicators to quantitatively evaluate the comprehensive performance of the Pareto solution set of each algorithm. The results are shown in Table 7. According to Table 7, the average performance of the solution set obtained by MOCAHA is the best in three of the evaluation indicators among the five algorithms, and its worst performance is only slightly worse than that of MOAHA. Therefore, it can be seen that MOCAHA has excellent performance on the comprehensive indicators.

The average rankings of each performance metric for each algorithm were calculated, resulting in an average ranking of the five algorithms for solving RIES multi-objective optimization operation problems. The results are shown in Fig. 10. It can be seen that the average ranking of MOCAHA is 2.31, placing it clearly ahead of MOAHA, MOMVO, MOGWO, and MOPSO. In summary, MOCAHA has the best overall performance in handling RIES operation optimization problems and possesses excellent search capability for the optimal solution and for Pareto solution set optimization.

Comparison of low-carbon economic dispatch methods performance
To assess the efficacy of the RIES low-carbon economic dispatch approach introduced in this paper, particularly its impact on decreasing operational economic costs and carbon emissions, we employed several methods for comparison. These included our proposed RIES dispatch method utilizing MOCAHA, alongside methods leveraging AHA, MOGWO, and MOPSO. We then conducted a thorough comparative analysis of the obtained operational outcomes. For all strategies that incorporated subjective weights, we standardized the subjective weight vectors to a default value of [0.5, 0.5]. The final solutions obtained by each method and their corresponding system operating economic costs and carbon emissions are shown in Table 8 and Fig. 11. Figures 12, 13, 14 and 15, along with Fig.
11 and Table 8, reveal the optimization results obtained from the four different methods. Our proposed approach yields both lower economic cost and lower carbon emissions compared to the method based on AHA, with reductions of 8.8% and 14.2%, respectively. Compared to the method based on MOGWO, our method demonstrates a significant advantage in terms of economic cost, reducing the system's operational costs by 30.2%, with carbon emissions decreasing by 2.3%. When compared to the method based on MOPSO, our proposed approach results in reductions of 25.7% in economic cost and 12.2% in carbon emissions.

In summary, our proposed approach exhibits significant optimization benefits compared to the other three methods. It effectively reduces the system's operational economic cost and carbon emissions. Furthermore, our method ensures the basic consumption of wind power, substantially lowering the curtailment of renewable energy sources. The simulation results further confirm that the proposed method is an efficient, feasible, and economically and environmentally beneficial dispatch strategy, contributing to achieving low-carbon economic operation in RIES.

Impact of different subjective weights on simulation results
Upon deriving the Pareto frontier through MOCAHA, our introduced TOPSIS-SOCW technique can seamlessly blend the objective traits of the Pareto frontier with the subjective insights of scheduling professionals. It is worth noting that, during real-time system operations, these experts have the leverage to shape system outcomes by adjusting the subjective weights grounded in their expertise.

To demonstrate the method's ability to weigh the solution set's objective characteristics against dispatch experts' experienced judgment, we have laid out three solution schemes, delineated in Table 9. These three schemes have different subjective weight vectors, simulating three scenarios in actual system operation where dispatch experts prioritize system economics, a low-carbon economic balance, and environmental protection, respectively. Using these three schemes, we solve the case and conduct a comparative analysis of the results to clarify the influence of different subjective weights on the final operation results of RIES.

Table 10 shows the system operating economic costs and carbon emissions under the three scenarios. From these results, the following analysis can be made: compared to Scenario 1, Scenario 2 reduced the subjective weight of the economic cost objective and increased the subjective weight of carbon emissions, resulting in a 4.4% increase in economic cost and a 5.2% reduction in carbon emissions in Scenario 2. Similarly, compared to Scenario 1, Scenario 3 purchased more energy from energy suppliers, leading to an 8.2% increase in economic cost and an 8.0% reduction in carbon emissions. Moreover, all three schemes absorbed wind power effectively, enhancing the utilization rate of renewable energy.
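The effect of the subjective weight vector can be illustrated with the small self-contained sketch below, which re-ranks the same toy Pareto front under three different expert weightings, loosely mirroring the three scenarios. The min-max normalization and the multiplicative combination of entropy and subjective weights are assumptions (the paper's exact formulas and the Table 9 weight vectors are not reproduced here), and the cost/emission numbers are invented for illustration.

import numpy as np

def topsis_socw(front, subjective_w):
    """front: (N, 2) array of [cost, emissions], both to be minimized.
    subjective_w: expert weight vector for [cost, emissions]."""
    F = np.asarray(front, dtype=float)
    # Min-max normalize so that larger values mean better performance.
    R = (F.max(axis=0) - F) / (F.max(axis=0) - F.min(axis=0) + 1e-12)
    # Entropy (objective) weights: objectives whose values vary more across
    # the front carry more information and receive larger weight.
    P = (R + 1e-12) / (R + 1e-12).sum(axis=0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(F))
    alpha = (1.0 - e) / (1.0 - e).sum()
    # Correct the objective weights with the subjective expert weights
    # (a simple multiplicative combination, assumed here).
    w = alpha * np.asarray(subjective_w, dtype=float)
    w = w / w.sum()
    V = R * w
    d_plus = np.linalg.norm(V - V.max(axis=0), axis=1)   # distance to ideal point
    d_minus = np.linalg.norm(V - V.min(axis=0), axis=1)  # distance to anti-ideal point
    closeness = d_minus / (d_plus + d_minus + 1e-12)
    return int(np.argmax(closeness))                      # index of compromise plan

# Toy Pareto front: [operating cost in $, CO2 emissions in t] for three plans.
front = [[52000.0, 980.0], [55000.0, 900.0], [60000.0, 860.0]]
print(topsis_socw(front, [0.7, 0.3]))   # economics-first weighting -> cheapest plan
print(topsis_socw(front, [0.5, 0.5]))   # balanced weighting        -> middle plan
print(topsis_socw(front, [0.3, 0.7]))   # environment-first         -> lowest-emission plan

Shifting weight toward the emissions objective moves the selected compromise along the front toward lower-carbon, higher-cost plans, which is the qualitative behavior reported for Scenarios 1-3.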
Figure 16 displays the Pareto front obtained by MOCAHA and also shows the positions on that front of the final solution obtained for each scenario. The final system operation plans obtained for the three scenarios are shown in Figs. 17, 18, and 19, respectively.

Therefore, TOPSIS-SOCW can integrate the experience of dispatch personnel with the information in the solution set, weighing and trading off between the two optimization objectives of economic cost and carbon emissions. This approach not only considers the relative importance of multiple optimization objectives but also incorporates the expertise of professionals, offering effective support for solving intricate RIES scheduling problems. Furthermore, this method can also meet the diverse operational requirements of RIES, assisting decision-makers in finding an appropriate compromise between multiple optimization objectives and ultimately realizing low-carbon economic operation of the system. The proposed TOPSIS-SOCW thereby facilitates the identification of an optimal compromise solution and ensures that the ultimate dispatch plan not only aligns with the diverse operational requirements of the system but also maintains a high level of objectivity.

The simulation results underscore our method's superior performance in optimization results, convergence efficiency, and solution diversity. Notably, there is an 8.8% reduction in system operational economic costs and a 14.2% drop in carbon emissions.

In our future research endeavors, our primary focus will be on seamlessly integrating diverse energy storage systems. We aim to enhance the system's capacity to incorporate renewable sources and transition towards ultra-reliable clean energy alternatives, such as geothermal and nuclear. Moreover, we plan to explore advanced strategies, including multi-agent technology and clustering techniques, to optimize the scheduling of RIES. This initiative is intended to further improve the overall effectiveness of system optimization. Our overarching ambition is to shape a regional integrated energy system that is both economically viable and environmentally friendly, while also bolstering security measures.

Figure 1. Overall framework of the regional integrated energy system.
Figure 2. Flowchart of the RIES low-carbon economic dispatch method.
Figure 5. Topology of the simulation example.
Figure 7. Average ranking of the optimization performance of each algorithm.
Figure 8. Comparison of the final Pareto frontier obtained by the five algorithms when solving the model.
Figure 9. The change process of the external archive during the iterative solving of the model by the five algorithms.
Figure 10. Average ranking of algorithm indicators.
Figure 11. Economic costs and carbon emissions generated by system operation under different solving methods.
Figure 12. System operation plan obtained by the method based on MOCAHA.
Figure 13. System operation plan obtained by the method based on AHA.
Figure 14. System operation plan obtained by the method based on MOGWO.
Figure 15. System operation plan obtained by the method based on MOPSO.
Figure 16. The positions of the results on the Pareto frontier.
Figure 17. System operation plan obtained in Scenario 1.
Table 1. Introduction to several common chaos mappings.
Table 3. Time-of-use electricity prices.
Table 5. Comparison of convergence metrics.
Table 6. Comparison of diversity metrics.
Table 7. Comparison of comprehensive performance metrics.
Table 8. Economic costs and carbon emissions generated by system operation under different methods.
Table 9. Three optimization schemes with different subjective weights.
Table 10. Economic costs and carbon emissions under different scenarios.
13,349
2024-02-19T00:00:00.000
[ "Environmental Science", "Engineering", "Economics" ]
Developmental Plasticity in Child Growth and Maturation The ability of a given genotype to produce different phenotypes in response to different environments is termed “plasticity,” and is part of the organism’s “adaptability” to environmental cues. The expressions of suites of genes, particularly during development or life history transitions, probably underlie the fundamental plasticity of an organism. Plasticity in developmental programming has evolved in order to provide the best chances of survival and reproductive success to organisms under changing environments. Environmental conditions that are experienced in early life can profoundly influence human biology, child growth and maturation, and long-term health and longevity. Developmental origins of health and disease and life history transitions are purported to use placental, nutritional, and endocrine cues for setting long-term biological, mental, and behavioral strategies for child growth and maturation in response to local ecological and/or social conditions. The window of developmental plasticity extends from conception to early childhood, and even beyond to the transition from juvenility to adolescence, and could be transmitted transgenerationally. It involves epigenetic responses to environmental changes, which exert their effects during life history phase transitions. Trait variability, irrespective of whether it is physiological, morphological, behavioral, molecular, or cellular, is the leading edge of evolution. When facing the challenge of developing an individual that best fits its environment, nature demonstrates an interesting combination of five different adaptive processes that influence human phenotype, and each process operates on a different time scale (Table 1; Muller, 2007;Hochberg et al., 2011a). The first adaptive process involves changes in gene sequence and frequency in a population or species; for this type of adaptation, time is an important constraint -adaptive a genotype occurs over several hundred thousand years. The second process is the modification of homozygosity in a population, and this process occurs over several hundred years and numerous generations. The third process refers to plasticity, and this process occurs over the total life span of the individual, and may be carried forward for three to four generations. The fourth process is short-term acclimatization that can last several months or years. The fifth adaptive mechanism involves cultural adaptation, which also tends to be of moderate pace in the hundreds of years. The ability of a given genotype to produce different phenotypes in response to different environments is termed "plasticity," and is part of the organism's "adaptability" to environmental cues (Bateson et al., 2004). The expressions of suites of genes, particularly during development or life history transitions, probably underlie the fundamental plasticity of an organism (Crews et al., 2007). It was recently appreciated that the life history evolutionary theory is a powerful tool for understanding child growth and development from an evolutionary perspective (Hochberg, 2009). By applying this theory to developmental data, adaptive growthand metabolic-related strategies for transition from one life history phase to the next and the timing of such transitions (inherent adaptive plasticity) have evolved. The environmental conditions that are experienced in early life can profoundly influence human biology and long-term health. 
Early life nutrition and stress are among the best documented examples of such conditions because they influence the adult risk of developing metabolic diseases, such as type 2 diabetes mellitus (T2D), and cardiovascular diseases (Barker, 1995). Individuals who are born small-for-gestational age (SGA) have an increased risk of cardiovascular morbidity and mortality when they are adults (Barker and Osmond, 1986; Barker et al., 1989; Barker, 1995, 2006). This epidemiological evidence is now supported by an extensive experimental literature in animals (Gluckman et al., 2008). Evidence on the importance of prenatal and early postnatal growth for later morbidity suggests the existence of a link between developmental responses to early environments and adult biology. These associations are grounded in functional relationships, and are broadly consistent with life history evolution theory.

PLASTICITY IN DEVELOPMENTAL PROGRAMMING
Environments change continuously, and a species adapts its phenotype to the prevailing environment, even when the environmental change is disruptive. A species is considered to be well adapted and fit in evolutionary terms when it can survive to reproduce, and display relative phenotypic consistency across many generations. Phenotype stability is most likely to occur when the species has adapted to a normative range of environments which remains relatively stable on generational time-scales. Yet, organisms exist within an environment that can change rapidly, and those species with a relatively fixed phenotype may not be able to respond sufficiently quickly in order to survive an unexpected environmental change. Adaptive plasticity enables a species to respond to an environmental change in order to survive and reproduce, and may manifest itself as polyphenism (alternative phenotypes in different environments, such as in metamorphosis) or as a continuous variation in traits. However, not all developmental responses to environmental cues have an adaptive basis. When the cue is severe or novel, the outcome may be disruptive, and may result in teratogenesis, disease, or death (Gluckman and Hanson, 2007a). The predictability of environmental changes is also an important determinant of the degree of adaptive flexibility of a species (Hochberg et al., 2011a). In some instances, the environmental change is highly predictable, and an adapted species exists as a limited range of subtle, but distinct and definable, phenotypes. Adaptive plasticity of an organism is associated with immediate adaptive responses (forecasting or predicting), which are concerned with its immediate survival with no consideration for the long-term consequences (Gluckman and Hanson, 2007a). These adaptive responses adjust the developmental phenotype, and comprise a set of processes that can be triggered by a wide range of environmental cues in order to promote lifetime fitness. Recognition of an environmental cue often occurs during sensitive periods in the lifespan of a species, namely the prenatal period and/or during transitions between life history phases (Hochberg, 2009; Hochberg et al., 2011a). Recognition of an environmental cue also enables the organism to adapt or acclimatize to an environmental change, and creates future trajectories in its development. The resultant adaptive advantage depends upon the fidelity of the cue about the future state of the environment. High fidelity cues enable the organism to optimize its adaptation or fit to the anticipated environment.
Low fidelity cues carry a fitness disadvantage, although the impact will depend upon the extent of mismatch between the predicted and actual future environment. Two types of adaptive responses or plasticity exist (Gluckman and Hanson, 2007a). The first types are the anticipatory or predictive adaptive responses where the developing organism forecasts the future environment, and then adjusts its phenotypic trajectory accordingly. The second types are the immediate adaptive responses which promote short-term maternal or fetal survival with some advantages in later life (developmental plasticity). Since these two adaptive responses come with a significant cost, individual members of a species make a cost-benefit analysis in order to determine the true value of an adaptive response. Within the adaptive responses, the organism may engage in a trade-off between phenotypic changes in order to ensure its short-term survival at the expense of a long-term advantage. Hence, trade-offs occur because energy needs to be allocated in order to meet the different metabolic and physiological demands of a developing organism. Therefore, trade-offs can often manifest themselves as longevity as an alternative to reduced survival of the juveniles. Such is the consequence of embryonic fetal development when it occurs in a deprived intrauterine environment as a result of a limited transplacental nutrient supply. In response, the fetus protects the development of its heart and brain at the expense of other organs, and somatic growth is retarded. Intrauterine growth restriction (IUGR) is an example of an immediate cryptically maladaptive response to the environment (Gluckman and Hanson, 2005). PLASTICITY IN PHASE TRANSITIONS OF HUMAN LIFE HISTORY The secular trends in child growth and puberty are dazzling examples of such adaptation (Arcaleni, 2006). European men are now 13 cm taller than they were 150 years ago. This range of plasticity in growth over approximately six generations is not long enough to result from changes in the DNA sequence. Over the same six generations, the age of menarche in Western countries has decreased by 4 years. This reduction has a theoretical fitness advantage on the fecundity span in an environment that is rich in energy resources, and demonstrates plasticity in the maturation of the hypothalamic-pituitary-gonadal (HPG) axis. As a consequence of constantly changing life conditions and environment, today's children may be stunted in growth or be tall, adapt their body composition and energy metabolism, and modulate their longevity, fertility, and fecundity. The signals of energy balance that modulate this plasticity are both intrinsic (internal) and extrinsic (environmental; Hochberg et al., 2011a). The internal signals include leptin, the growth hormone -insulin-like growth factor 1 (GH-IGF1) axis, ghrelin, thyroid hormones, insulin, and the cortisone-cortisol shuttle (11β-hydroxysteroid dehydrogenases), whereas the environmental signals include pre-and postnatal nutrition, stressors, endocrine disrupting chemicals (EDCs), and light (Gluckman and Hanson, 2005). Human growth and development is an orchestrated process of well-recognized and predictable events with five overlapping, yet distinct, pre-adult life history phases: the prenatal, infantile, childhood, juvenile, and pubertal growth phases (Figure 1). 
The transition periods between these phases are sensitive windows of developmental plasticity, and there is now some evidence that the features of transition from one phase to the next are transmitted transgenerationally (Stein et al., 2004). With decreasing sensitivity, the transitions between phases are periods of adaptive plasticity, and the multifactorial regulation of growth during each phase mirrors the interplay between genetic, hormonal, environmental, and psychosocial factors. In response to environmental cues, especially those that relate to energy resources, a life history phase can be added or deleted (such as the added childhood phase in hominids), and can have its duration, intensity, and onset time altered (Hochberg and Albertsson-Wikland, 2008;Hochberg, 2009;Hochberg et al., 2011a). Thus, the timing of infancy-childhood transition (ICT) adaptively adjusts an individual's size to the prevailing environment in response to environmental cues (Hochberg and Albertsson-Wikland, 2008;Gawlik et al., 2011). We have previously reported that the ICT is a major determinant of final adult height, and a delayed ICT is the most common cause of idiopathic short stature (Hochberg and Albertsson-Wikland, 2008). The transition from childhood to juvenility is entrusted with the programming of body composition (Hochberg, 2008(Hochberg, , 2010. The transition from juvenility to adolescent-related puberty and the growth spurt is a function of maturation of the HPG axis. Poor quality of life during this transition delays fecundity and increases longevity (Pembrey et al., 2006). Hence, a series of control mechanisms must exist in order to enable (a) the GH-IGF1 axis to dominate as the child transits into childhood, (b) adrenarche at the onset of juvenility, and (c) an abrupt increase in sex hormones at initiation of puberty. As already noted, an organism distributes its energy resources during its life by timed allocations toward growth, maintenance, avoiding death, reproduction, and raising offspring to independence in order to enhance its reproductive fitness (Bogin et al., 2007;Muller, 2007). Whereas the environment at any one geographical location may vary slowly, nutritional conditions may change rapidly. Evolution has provided organisms with the mechanisms to adapt to such extremes. Humans can also use sociocultural adjustments to fill the gaps when the changes occur faster than the evolutionary time scale. This can be seen when one examines the evolution of hominid life history from Australopithecus afarensis to Homo sapiens. In humans, the duration of infancy has become shortened, and that of childhood has been prolonged, and these two phases are followed by a relative short juvenility and late adolescence in order to increase fitness (Pratt et al., 1994;Bogin et al., 2007;Hochberg, 2008). The overall result of this strategy is increased body size and longevity, and reproduction at a later age, as compared to other primates. This strategy has been very successful for humans, who can thrive and propagate in extremely diverse environments that encompass the entire range of geographic latitudes and altitudes. An important environmental cue for infants and young children is the care giving behaviors of their parents, which can be used as a predictive indicator of the security of their environment. The resultant attachment patterns are transmitted transgenerationally (Belsky and Fearon, 2002;Del Giudice, 2009). 
The degree of security that is experienced during childhood sets development on alternative pathways, and adaptively shapes the individual's future reproductive strategy. A secure attachment will result in a reproductive strategy that is based on late maturation, a commitment to a long-term relationship, and a large investment in parenting. In terms of evolutionary developmental biology (evo-devo), which studies the developmental mechanisms that control body shape and form and the alterations in gene expression and function that lead to changes in body shape and pattern (Goodman and Coughlin, 2000), the expected response to a secure environment will include investment in large body size (Liu et al., 1998, 2000). This example of transgenerational phenotypic plasticity contrasts with that of an insecure attachment and a small parental investment that involves a large number of children: the response is a compromise in body size, early reproduction, and short-term mating. Child growth and body composition display a vast range of adaptive plasticity. Short-term plasticity in the various child growth phases and transitions suggests that epigenetic mechanisms determine the extent of adaptive plasticity during growth in response to environmental cues. In the light of these new findings, this issue considers the utility of life history theory, and the links between epigenetics, developmental programming, and plasticity in early growth and nutrition. Current research in child health strives to identify mechanisms that underlie plasticity in developmental programming and life history transitions. Developmental programming and life history transitions are purported to use nutritional or endocrine cues for setting long-term biological strategies in response to local ecological and/or social conditions (Bateson et al., 2004; Gluckman and Hanson, 2007b; Kuzawa, 2007). Rapid changes in nutrition during one's lifetime can then lead to "mismatch" and metabolic disease (Gluckman and Hanson, 2007b). It has been further proposed that intergenerational influences on nutrition and growth stabilize the nutritional signals that are received in utero in order to increase the reliability of an intrauterine cue as a predictive signal (Kuzawa, 2007). It is now also known that the effects of hormones, stress, and drugs during embryogenesis can not only influence the subsequent behavioral phenotype of the individual, but can also modify the individual's response to adult experiences (Crews et al., 2007). PLASTICITY IN HUMAN GROWTH Postnatal growth in body weight and stature can be assessed by three measures: growth velocity, attained body size, and the timing or "tempo" of growth, which is a measure of how rapidly an individual achieves its growth potential. Human growth rates differ markedly between individuals, particularly during the most rapid phases of growth, which occur during infancy and adolescence (Parent et al., 2003; Ong et al., 2011). Human growth demonstrates both "elasticity" and "plasticity" (or long-term programming) during the different growth periods. The concepts of growth elasticity and plasticity arose from the results of studies in experimental animals that date back to the 1960s, in which the influence of nutrition on growth was investigated. The results of these studies demonstrated that there are critical time windows in which the outcome of a programmed growth trajectory can be changed.
McCance and Widdowson were the first to report this phenomenon when they showed that the exact timing of undernourishment in the growth phase can exert either a permanent or transient effect on final body size (McCance, 1962;McCance and Widdowson, 1974). When rats are transiently undernourished (food-restricted) in very early postnatal life, they remained smaller throughout later life than control rats which are not undernourished. In contrast, rats which are transiently undernourished during later growth phases show catch-up growth after the period of under-nutrition and attain the same adult weights as the control rats. While human growth may be impacted by severe acute or chronic diseases, there is growing awareness that growth rates, and in particular the tempo of growth, may have marked influences on the subsequent risks for morbidity and mortality, and hence reproductive fitness. Birth weight is strongly correlated with perinatal mortality, and is the single strongest predictor of infant survival. Neonates who are born at term and weigh between 1500 and 2500 g (<10th percentile) have a 5-to 30-fold increase in perinatal morbidity and mortality when compared with neonates whose birth weights lie between 10th and 90th percentiles. The strength of the correlation between birth weight and perinatal mortality depends on gestational age [the lower the birth weight, the higher the rate of neonatal mortality for the estimated gestational age (Wilcox and Skjaerven, 1992)], and also on factors that are unrelated to gestational age. This low birth weight association with neonatal mortality is echoed in adult life with the development of later disease and mortality (Godfrey and Barker, 2001). In postnatal life, there is growing evidence that the"natural variations" in body size and growth rate may have major relevance, not only on adult height, but also more importantly on infant and childhood survival and reproductive fitness. Pygmies are an "extreme" example of the interplay between postnatal growth and development, survival, and reproductive fitness. Their characteristic small adult size does not appear to have evolved through any positive selection for short stature, but rather as the result of a life history trade-off between the fertility benefits of large body size against the costs of late growth cessation in a setting of extremely high childhood and early adult mortality (Migliano et al., 2007). In Western settings, rapid weight gain during early postnatal life is associated with increased risks for disease. For example, Ong et al. (2011) showed that children who showed catch-up growth between birth and 2 years were fatter, and have more central fat distribution at 5 years, when compared to children with normal early growth. Ekelund et al. (2007) examined the independent associations between weight gain during infancy (0-6 months) or early childhood (3-6 years) with components of the metabolic syndrome in young adults in a prospective cohort study in 128 individuals from birth to 17 years. They concluded that rapid weight gain during infancy (0-6 months), but not during early childhood (3-6 years), predicted the clustered metabolic risk at age 17 years. We have recently shown in a sample of 22 natural-fertility societies that the age at menarche correlated negatively with their average adult body mass, and the average adult body weight positively correlated with reproductive fitness (Hochberg et al., 2011b). 
Infant feeding type and feeding patterns can also influence growth trajectories and disease risk. Compared to formula feeding, breast feeding is associated with slower infant weight gain and lower later obesity risk. The results of several meta-analyses suggest that breast feeding has a protective effect, especially in SGA and preterm infants (Owen et al., 2005). Experimental evidence from several randomized controlled trials of nasogastric feeding of breast milk and various nutrient formulae for 4 weeks showed long-term differences in adiposity levels and the later propensity to cardiovascular disease (Singhal et al., 2002a,b). Precocious puberty that is associated with rapid weight gain and growth, particularly during infancy, also has implications for future life events. Ong et al. (2007) have shown that an early age of menarche confers increased risk for disease, such as obesity, T2D, and hypertension, and death from cardiovascular disease and cancer in later life (Lakshman et al., 2009). Finally, the mechanisms that signal and regulate early catch-up growth in the postnatal period may mediate or modify the associations between small size at birth and risks for disease in adulthood. The combination of low birth weight and a subsequent high body mass index (BMI) is related to the increased incidence of T2D in later life. Using longitudinal data that were collected from 8760 individuals who were born in Helsinki between 1934 and 1944, Eriksson et al. (2003) reported that the large differences in the incidence of T2D were associated with growth rates in utero, weight gain in infancy, and the age at adiposity rebound. These observations have implications for the early origins of both obesity and cardiovascular disease in that programmable windows of human obesity may exist during the periods of greatest weight velocity. However, current evidence has so far failed to agree on the specific programmable windows during postnatal growth and development for later disease risks (Singhal et al., 2002b; Eriksson et al., 2003; Owen et al., 2005). FUTURE DIRECTIONS The end-target of translational research is the patient, with the goal of improving medical care. Traditionally, translational research, growth included, has followed medical reasoning, viewing organisms as machines whose design has been optimized by engineers to provide good health. This article takes the evolutionary view and asks why those mechanisms are the way they are. Organisms are viewed here as packages of compromises (trade-offs) between traits shaped by natural selection to maximize reproduction. Wide knowledge gaps still exist in our current understanding of phenotypic plasticity and the putative epigenetic machinery, despite the increasing use of numerous experimental systems. As a result, we still do not know whether some of the epigenetic mechanisms that have been identified thus far using these experimental systems are operative in humans and other eutherians. Hereditary, environmental, and stochastic factors determine the accumulation of epigenetic variation over time, but their relative contribution to the phenotypic outcome in terms of child growth and maturation is not known because few data are available. That the environment can influence growth and developmental trajectories during pre-adult life history stages is well established, and later life outcomes have been much sought after. Yet, the mechanistic events that influence the transition from one life history stage to the next, as well as growth and puberty, are incompletely understood.
Growth and puberty are regulated by insulin, growth hormone, the IGFs, and the sex hormones. These hormones drive the rate of growth and development, but it is unclear what determines the timing and degree of the different phases of developmental events and the quantity of growth. At the target tissues for these hormones, we need to first identify gene expression changes that occur in each tissue. Epigenetic events, including the cell type-specificity and tissue-specificity of chromatin regulation are great challenges for future human studies. Since no other animal has a similar pre-adult life history to that of humans, an obvious question is whether the findings from any experimental animal can be extrapolated to humans. The mechanisms by which cues about nutrient availability in the uterus and postnatal environment are transmitted to the offspring and by which different stable phenotypes are induced are still unknown. The genetic control of the regulation of placental supply and fetal demand for maternal nutrients is not fully understood, and many of the detrimental events that occur in the fetus could be possibly due to epigenetic misprogramming.
5,262.6
2011-08-22T00:00:00.000
[ "Biology", "Psychology" ]
Axion phenomenology and θ-dependence from Nf = 2 + 1 lattice QCD We investigate the topological properties of Nf = 2 + 1 QCD with physical quark masses, both at zero and finite temperature. We adopt stout improved staggered fermions and explore a range of lattice spacings a ∼ 0.05 − 0.12 fm. At zero temperature we estimate both finite size and finite cut-off effects, comparing our continuum extrapolated results for the topological susceptibility χ with predictions from chiral perturbation theory. At finite temperature, we explore a region going from Tc up to around 4 Tc, where we provide continuum extrapolated results for the topological susceptibility and for the fourth moment of the topological charge distribution. While the latter converges to the dilute instanton gas prediction, the former differs strongly both in the size and in the temperature dependence. This results in a shift of the axion dark matter window of almost one order of magnitude with respect to the instanton computation. Introduction Axions are among the most interesting candidates for physics beyond the Standard Model. Their existence was advocated long ago [1][2][3][4] as a solution to the so-called strong-CP problem through the Peccei-Quinn (PQ) mechanism. It was soon realized that they could also explain the observed dark matter abundance of the visible Universe [5][6][7]. However, a reliable computation of the axion relic density requires a quantitative estimate of the parameters entering the effective potential of the axion field, in particular its mass and self-couplings as a function of the temperature T of the thermal bath. The purpose of this study is to obtain predictions from numerical simulations of Quantum Chromodynamics (QCD) on a lattice. Our results, which are summarized at the end of this section, suggest a possible shift of the axion dark matter window by almost one order of magnitude with respect to instanton computations. This shift is a consequence of the much slower decrease of the axion mass with the temperature in comparison to the dilute instanton gas prediction. Our present simulations are however limited to a range of temperatures not exceeding 600 MeV: the main obstruction is represented by the freezing of the topological modes on fine lattices, which afflicts present lattice QCD algorithms. For a more complete understanding of axion dynamics at finite T, a major effort must be undertaken in the future to reach higher temperatures. General framework Given the strong bounds on its couplings, the axion field can be safely treated as a non-dynamical external field. Its potential is completely determined by the dependence of the QCD partition function on the θ-angle, which enters the pure gauge part of the QCD Euclidean Lagrangian as L_θ = −iθ q(x), where q(x) = (g²/64π²) ε_{μνρσ} G^a_{μν}(x) G^a_{ρσ}(x) is the topological charge density. The θ-dependent part of the free energy density can be parametrized as F(θ, T) = (1/2) χ(T) θ² s(θ, T), where χ(T) = ⟨Q²⟩_{θ=0}/V₄ is the topological susceptibility at θ = 0, Q = ∫ d⁴x q(x) is the global topological charge, V₄ = V/T is the Euclidean four-volume (V being the spatial volume), while s(θ, T) is a dimensionless even function of θ such that s(0, T) = 1. The quadratic term in θ, χ(T), fixes the axion mass through m_a²(T) = χ(T)/f_a², while non-linear corrections in θ², contained in s(θ, T), provide information about axion interactions. In particular, assuming analyticity around θ = 0, s(θ, T) can be expanded as follows [8]: s(θ, T) = 1 + b₂(T) θ² + b₄(T) θ⁴ + · · · , (1.5) where the coefficients b_n are proportional to the cumulants of the topological charge distribution.
For instance b₂, which is related to quartic interaction terms in the axion potential, can be expressed as b₂ = −(⟨Q⁴⟩ − 3⟨Q²⟩²)/(12⟨Q²⟩), (1.6) with the expectation values evaluated at θ = 0. The function F(θ, T), related to the topological properties of QCD, is of non-perturbative nature and hence not easy to predict reliably with analytic methods. This is possible only in some specific regimes: chiral perturbation theory (ChPT) represents a valid approach only in the low temperature phase; at high T, instead, a possible analytic approach is the Dilute Instanton Gas Approximation (DIGA). DIGA predictions can in fact be classified in two groups: those that make use only of the DIGA hypothesis itself (i.e. those that rely just on the existence of weakly interacting objects of topological charge one), and those that also exploit perturbation theory, the latter being expected to hold only at asymptotically high values of T. Using only the dilute gas approximation one can show that the θ-dependence of the free energy is of the form F(θ, T) = χ(T)(1 − cos θ), (1.7) (see e.g. [9,10]), which implies b₂ = −1/12, b₄ = 1/360 and so on. Using also perturbation theory it is possible to obtain an explicit form for the dependence of the topological susceptibility on the temperature. To leading order, for N_f^(l) light quark flavors of mass m_l, one obtains an explicit expression, eq. (1.8), predicting a steep power-law suppression of χ(T) at high temperature (see e.g. [9,10]). Only part of the NLO corrections to this expression are known (see [11] or [12] for a summary, [13] for the N_f = 0 case). As an alternative, a fully non-perturbative approach, which is based completely on the first principles of the theory, is represented by lattice QCD simulations. In fact, extensive studies have been carried out regarding the θ-dependence of pure gauge theories. It was shown in ref. [14], and later confirmed in refs. [13,15,16], that the form of the free energy in eq. (1.7) describes with high precision the physics of the system for T ≳ 1.15 T_c, while for T ≲ T_c everything is basically independent of the temperature, thus strengthening the conclusion χ(T < T_c) ≈ χ(T = 0) obtained in previous studies [17][18][19][20][21]. In refs. [13,22] it was also shown that the temperature dependence of the topological susceptibility is correctly reproduced by eq. (1.8) for temperatures just above T_c, even if the overall normalization is about a factor of ten larger than the perturbative prediction. A realistic study of θ-dependence aimed at being relevant to axion phenomenology requires the numerical simulation of lattice QCD including dynamical quarks with physical masses. Apart from the usual computational burden involved in the numerical simulation of light quarks, this represents a challenge from at least two different but interrelated points of view. Because of the strict connection, in the presence of light fermions, between the topological content of gauge configurations and the spectrum of the fermion matrix (in particular regarding the presence of zero modes), a reliable study of topological quantities requires a discretization of the theory in which the chiral properties of fermion fields are correctly implemented. For standard discretizations, such properties are recovered only for small enough lattice spacings, so that a careful investigation of the continuum limit becomes essential. Indeed, only recently has it been possible to measure the dependence of the topological susceptibility on the quark masses to a sufficient accuracy to be compared with the prediction of chiral perturbation theory [23][24][25][26][27].
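Because eq. (1.6) and the expansion (1.5) are written directly in terms of cumulants of the global charge, they translate into a simple estimator acting on a Monte Carlo time history of Q. The following sketch is an illustration only; the function name and the synthetic Gaussian sample are our own assumptions and not part of the paper's analysis:

```python
import numpy as np

def chi_and_b2(Q, volume):
    """Estimate the topological susceptibility chi = <Q^2>/V4 and the
    cumulant ratio b2 = -(<Q^4> - 3<Q^2>^2) / (12 <Q^2>) from a sample of
    global topological charges measured at theta = 0 (so <Q> = 0)."""
    Q = np.asarray(Q, dtype=float)
    q2 = np.mean(Q**2)
    q4 = np.mean(Q**4)
    return q2 / volume, -(q4 - 3.0 * q2**2) / (12.0 * q2)

# toy check: a Gaussian-distributed charge has a vanishing quartic cumulant,
# so b2 ~ 0, while a dilute instanton gas would give b2 -> -1/12
rng = np.random.default_rng(0)
Q_sample = np.round(rng.normal(0.0, 3.0, size=100_000))
print(chi_and_b2(Q_sample, volume=1.0))
```

As noted later in the text, the statistical error on b₂ depends strongly on χV₄, which is why the different lattice spacings turn out to have very different precision for this observable at finite temperature.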
On the other hand, it is well known that, as the continuum limit is approached, it becomes increasingly difficult to correctly sample the topological charge distribution, because of the large energy barriers developing between configurations belonging to different homotopy classes, i.e. to different topological sectors, which can be hardly crossed by standard algorithms [28][29][30][31][32]. That causes a loss of ergodicity which, in principle, can spoil any effort to approach the continuum limit itself. Combining these two problems together, the fact that a proper window exists, in which the continuum limit can be taken and still topological modes can be correctly sampled by current state-of-the-art algorithms, is highly non-trivial. In a finite temperature study, since the equilibrium temperature is related to the inverse temporal extent of the lattice by T = 1/(N t a), the fact that one cannot explore arbitrarily small values of the lattice JHEP03(2016)155 spacing a, because of the above mentioned sampling problem, limits the range of explorable temperatures from above. In the present study we show that, making use of current state-of-the-art algorithms, one can obtain continuum extrapolated results for the θ-dependence of QCD at the physical point, in a range of temperatures going up to about 4 T c , where T c ∼ 155 MeV is the pseudocritical temperature at which chiral symmetry restoration takes place. Then we discuss the consequences of such results to axion phenomenology in a cosmological context. Our investigation is based on the numerical simulation of N f = 2 + 1 QCD, adopting a stout staggered fermions discretization with physical values of the quark masses and the tree level improved Symanzik gauge action. First, we consider simulations at zero temperature and various different values of the lattice spacing, in a range ∼ 0.05 − 0.12 fm and staying on a line of constant physics, in order to identify a proper scaling window where the continuum limit can be taken without incurring in severe problems with the freezing of topological modes. Results are then successfully compared to the predictions of chiral perturbation theory. We also show that, for lattice spacings smaller than those explored by us, the freezing problem becomes severe, making the standard Rational Hybrid Monte-Carlo (RHMC) algorithm useless. For a restricted set of lattice spacings, belonging to the scaling window mentioned above, we perform finite temperature simulations, obtaining continuum extrapolated results for χ and b 2 in a range of T going up to around 600 MeV. These results are then taken as an input to fix the parameters of the axion potential in the same range of temperatures and perform a phenomenological analysis. Summary of main results and paper organization Our main results are the following. Up to T c the topological susceptibility is almost constant, and compatible with the prediction from ChPT. Above T c the value of b 2 rapidly converges to what predicted by DIGA computations; on the contrary the dependence of the topological susceptibility on T shows significant deviations from DIGA. This has a significant impact on axion phenomenology, in particular it results in a shift of the axion dark matter window by almost one order of magnitude with respect to the instanton computation. 
Since the T -dependence is much milder than expected from instanton calculations, it becomes crucial, for future studies, to investigate the system for higher values of T , something which also claims for the inclusion of dynamical charm quarks and for improved algorithms, capable to defeat or at least alleviate the problem of the freezing of topological modes. The paper is organized as follows. In section 2, we discuss the setup of our numerical simulations, in particular the lattice discretization adopted and the technique used to extract the topological content of gauge configurations. In section 3 we present our numerical analysis and continuum extrapolated result for the θ-dependence of the free energy density, both at zero and finite temperature. Section 4 is dedicated to the analysis of the consequences of our results in the context of axion cosmology. Finally, in section 5, we draw our conclusions and discuss future perspectives. Table 1. Bare parameters used in this work, from [36,37] or spline interpolation of data thereof. The systematic uncertainty on the lattice spacing determination is 2 − 3% and the light quark mass is fixed by using m s /m l = 28.15. Discretization adopted The action of N f = 2 + 1 QCD is discretized by using the tree level improved Symanzik gauge action [33,34] and the stout improved staggered Dirac operator. Explicitly, the Euclidean partition function is written as 3) The symbol W n×m i; µν denotes the trace of the n × m Wilson loop built using the gauge links departing from the site in position i along the positive µ, ν directions. The gauge matrices U (2) i,µ , used in the definition of the staggered Dirac operator, are related to the gauge links U i; µ (used in S YM ) by two levels of stout-smearing [35] with isotropic smearing parameter ρ = 0.15. The bare parameters β, m s and m l ≡ m u = m d were chosen in such a way to have physical pion mass m π ≈ 135 MeV and physical m s /m l ratio. The values of the bare parameters used in this study to move along this line of constant physics are reported in table 1. Most of them were determined in [36,37], the remaining have been extrapolated by using a cubic spline interpolation. The lattice spacings reported in table 1 have a 2 −3% of systematic uncertainty, as discussed in [36,37] (see also [38]). Determination of the topological content In order to expose the topological content of the gauge configurations, we adopt a gluonic definition of the topological charge density, measured after a proper smoothing procedure, which has been shown to provide results equivalent to definitions based on fermionic operators [39][40][41][42][43]. The basic underlying idea is that, close enough to the continuum limit, JHEP03(2016)155 the topological content of gauge configurations becomes effectively stable under a local minimization of the gauge action, while ultraviolet fluctuations are smoothed off. A number of smoothing algorithms has been devised along the time. A well known procedure is cooling [44][45][46][47][48]: an elementary cooling step consists in the substitution of each link variable U i; µ with the SU(3) matrix that minimizes the Wilson action, in order to reduce the UV noise. While for the case of the SU(2) group this minimization can be performed analytically, in the SU(3) case the minimization is usually performedà la Cabibbo-Marinari, i.e. by iteratively minimizing on the SU(2) subgroups. When this elementary cooling step is subsequently applied to all the lattice links we obtain a cooling step. 
Possible alternatives may consist in choosing a different action to be minimized during the smoothing procedure, or in performing a continuous integration of the minimization equations. The latter procedure is known as the gradient flow [49,50] and has been shown to provide results equivalent to cooling regarding topology [51][52][53][54]. Because of its computational simplicity we will thus use cooling in this work; however, it will be interesting to consider also the gradient flow in future studies, especially as an independent way to fix the physical scale [50,55,56]. Since the topological charge will be measured on smoothed configurations, we can use the simplest discretization of the topological charge density with definite parity [57], q_L(x) = −(1/2⁹π²) Σ_{μνρσ = ±1,...,±4} ε̃_{μνρσ} Tr[U_{x;μν} U_{x;ρσ}], where U_{x;μν} is the plaquette located in x and directed along the μ, ν directions. The tensor ε̃_{μνρσ} is the completely antisymmetric tensor which coincides with the usual Levi-Civita tensor ε_{μνρσ} for positive indices, and is defined by ε̃_{μνρσ} = −ε̃_{(−μ)νρσ} and complete antisymmetry for negative indices. The lattice topological charge Q_L = Σ_x q_L(x) is not in general an integer, although its distribution gets more and more peaked around integer values as the continuum limit is approached. In order to assign to a given configuration an integer value of the topological charge we will follow the prescription introduced in [29]: the topological charge is defined as Q = round(αQ_L), where 'round' denotes rounding to the closest integer and α is fixed by the minimization of ⟨(αQ_L − round(αQ_L))²⟩, i.e. in such a way that αQ_L is 'as integer' as possible. Actually, one could take the non-rounded definition Q = Q_L as well, the only difference being a different convergence of results to the common continuum limit (see ref. [58] for a more detailed discussion on this point). The topological susceptibility is then defined by χ = ⟨Q²⟩/V₄, where V₄ is the four-dimensional volume of the lattice, and the coefficient b₂ has been introduced in eq. (1.6). We measured the topological charge every 20 cooling steps up to a maximum of 120 steps and verified the stability of the topological quantities under smoothing. Results that will be presented in the following have been obtained by using 100 cooling steps, a number large enough to clearly identify the different topological sectors but for which no significant signals of tunneling to the trivial sector are distinguishable in mean values. For all data reported in the following we verified that the corresponding time histories were long enough to correctly sample the topological charge. In particular we checked that ⟨Q⟩ is compatible with zero and that the topological charge is not frozen. Indeed, it is well known that, while approaching the continuum limit, the autocorrelation time of the topological charge increases very steeply until no tunneling events between different sectors happen anymore [28][29][30][31][32]. An example of this behavior can be observed in figure 1, where some time histories for zero temperature runs are shown, for three different lattice spacings. While the general features of this phenomenon are common to all lattice actions, the critical value of the lattice spacing at which the charge gets stuck may depend on the specific discretization adopted. In our case we were able to obtain reasonable sampling for lattice spacings down to a = 0.0572 fm, while finer lattices (in particular, with a ≲ 0.04 fm) showed severe freezing over thousands of trajectories and had to be discarded.
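The α-rounding prescription just described is easy to implement once the smoothed charges Q_L have been collected. The sketch below is an illustration under our own assumptions (the scan window for α and the function name are not from the paper); it returns the integer-valued charges together with the optimal rescaling factor:

```python
import numpy as np

def round_topological_charge(Q_L, alpha_range=(0.8, 1.2), n_scan=4001):
    """Assign integer charges Q = round(alpha * Q_L), with alpha chosen to
    minimize <(alpha*Q_L - round(alpha*Q_L))^2>, i.e. so that alpha*Q_L is
    'as integer as possible'.  The scan window for alpha is an assumption."""
    Q_L = np.asarray(Q_L, dtype=float)
    alphas = np.linspace(alpha_range[0], alpha_range[1], n_scan)
    costs = [np.mean((a * Q_L - np.round(a * Q_L)) ** 2) for a in alphas]
    best = alphas[int(np.argmin(costs))]
    return np.round(best * Q_L).astype(int), best

# example: cooled charges typically cluster slightly below integer values
Q_L = np.array([0.02, 0.93, -0.95, 1.88, 0.01, -1.90])
print(round_topological_charge(Q_L))
```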
Numerical results The main purpose of our numerical study is to provide results for the θ-dependence of QCD at finite temperature, in particular above the pseudo-critical temperature, in order to take them as an input for axion phenomenology. However, in order to make sure that our lattice discretization is accurate enough to reproduce the chiral properties of light fermions, we have also performed numerical simulations at zero or low temperature, where results can be compared to reliable analytic predictions. Indeed, at zero temperature, the full θ dependence of the QCD partition function, can be estimated reliably using chiral Lagrangians [4,59,60]. In particular, at leading order in the expansion, χ and b 2 can be written just in terms of m π , f π and z = m u /m d as ChPT provides a prediction at finite temperature as well. In particular we have where the functions J n are defined in ref. [63]. However, at temperatures around and above T c , the chiral condensate drops and chiral Lagrangians break down, so that the finite T predictions in eq. (3.2) are expected to fail. In this regime non-perturbative computations based on first principle QCD are mandatory. Zero temperature In table 2 we report our determinations of the topological susceptibility on N 4 s lattices for several values of N s and different lattice spacings. The results on the three smallest lattice spacings are also plotted in figure 2. In order to extract the infinite volume limit of χ at fixed lattice spacing, one can either consider only results obtained on the largest available lattices and fit them to a constant, or try to model the behavior of χ with the lattice size using all available data. We have followed both procedures in order to better estimate possible systematics. JHEP03(2016)155 The topological susceptibility can be written as the integral of the two point function of the topological charge density; since the η is the lightest intermediate state that significantly contributes to this two point function (three pions states are OZI suppressed), one expects the leading asymptotic behavior to be where C is an unknown constant. This form nicely fits data in the whole available range of aN s (see figure 2), both when m η is fixed to its physical value and when it is left as a fit parameter. The results obtained using this last procedure (which gives the most conservative estimates) are reported in table 2 and correspond to the entries denoted by N s = ∞. On the other hand, a fit to a constant value works well when using only data coming from lattices with aN s > 1.5 fm, and provides consistent results within errors. This analysis makes also us confident that results obtained on the lattices with a 0.1249 fm and a 0.0989 fm, where a single spatial volume is available with aN s > 3 fm, are not affected by significant finite size effects. In figure 3 we plot χ 1/4 , extrapolated to infinite volume, against the square of the lattice spacing, together with the ChPT prediction. Finite cut-off effects are significant, especially for a 0.1 fm, meaning that we are not close enough to the continuum limit to reproduce the correct chiral properties of light quarks. In the case of staggered quarks, such lattice artifacts originate mostly from the fact that the full chiral symmetry group is reproduced only in the continuum limit, so that the pion spectrum is composed of a light pseudo-Goldstone boson and other massive states which become degenerate only as JHEP03(2016)155 a → 0. 
The physical signal vanishes in the chiral limit whereas this is not the case for its discretization effects. This means that it is necessary to work on very fine lattice spacings in order to keep these effects under control, a task which is particularly challenging with present algorithms, because of the freezing of the topological charge. Despite the large cut-off effects, we can perform the continuum extrapolation of our data. If one considers only the three finest lattice spacings, the finite cut-off effects for this quantity can be parametrized as simple O(a 2 ) corrections and a best fit yields χ 1/4 = 73(9) MeV, while in order to describe the whole range of available data one must take into account O(a 4 ) corrections as well, i.e. obtaining χ 1/4 = 69 (9) MeV. Both best fits are reported in figure 3. Therefore we conclude that the continuum extrapolation is already under control with the available lattice data, and is in satisfactory agreement with the predictions of chiral perturbation theory. In order to further inquire about the reliability of our continuum extrapolation and the importance of the partial breaking of chiral symmetry in the staggered discretization, we studied the combination m ngb (a) → m phys π , this ratio converges to χ 1/4 as a → 0. The state with taste structure γ i γ µ was used, whose mass is close to the root mean square value of all the taste states (see ref. [37] figure 2) and the values of χ tc (a) strongly reduces lattice artefacts with respect to χ 1/4 (a), moreover a linear fit in a 2 well describes data for all available lattice spacings, giving the result χ 1/4 = 77(3). Although a complete study of the systematics affecting χ 1/4 tc (a) was not performed (e.g. the dependence of m ngb (a) on the lattice size was not studied, just the largest size was used), this is a strong indication that the dominant source of lattice artefacts in χ 1/4 (a) is the chiral symmetry breaking present at finite lattice spacing in the staggered discretization. Finite temperature Finite temperature simulations can in principle be affected by lattice artifacts comparable to those present at T = 0. For that reason, we have limited our finite T simulations to the three smallest lattice spacings explored at T = 0, i.e. those in the scaling window adopted for the extrapolation to the continuum limit with only O(a 2 ) corrections. At fixed lattice spacing the temperature has been varied by changing the temporal extent N t of the lattice and, in all cases, we have fixed N s = 48. This gives a spatial extent equal or larger than those explored at T = 0 and an aspect ratio N s /N t ≥ 3 for all explored values of N t . The absence of significant finite volume effects has also been verified directly by comparing results with those obtained on N t × 40 3 lattices. In table 3 we report the numerical values obtained for the topological susceptibility, for the ratio χ(T, a)/χ(T = 0, a) (where for χ(T = 0, a) the infinite volume extrapolation has been taken) and for the cumulant ratio b 2 . Results for χ 1/4 as a function of T /T c , where T c = 155 MeV, are reported in figure 4 for the three different lattice spacings. The dependence on a is quite strong, as expected from the T = 0 case. Inspired by the instanton gas prediction, eq. (1.8), we have performed a fit with the following ansatz which also takes into account the dependence on a and nicely describes all data in the range T > 1.2 T c with χ 2 /dof 0.7. 
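To make the structure of such a combined fit concrete, the sketch below fits the ratio χ(T, a)/χ(0, a) to a power law in T whose amplitude and exponent both carry O(a²) corrections. The functional form and all numerical inputs here are our own assumptions for illustration (the paper's actual ansatz is eq. (3.7) and its analogue for the ratio, with the data of tables 3 to 5); only the mechanics of extracting the continuum parameters are meant to carry over:

```python
import numpy as np
from scipy.optimize import curve_fit

T_C = 0.155  # GeV

def ratio_model(x, d0, d0a, d2, d2a):
    # assumed form: chi(T,a)/chi(0,a) = (d0 + d0a*a^2) * (T_C/T)^(d2 + d2a*a^2);
    # the continuum-limit parameters are d0 and d2
    T, a = x
    return (d0 + d0a * a**2) * (T_C / T) ** (d2 + d2a * a**2)

# synthetic stand-in data: temperatures in GeV, lattice spacings in fm
rng = np.random.default_rng(1)
T = np.tile(np.linspace(0.20, 0.60, 9), 3)
a = np.repeat([0.0572, 0.0707, 0.0824], 9)
y_true = ratio_model((T, a), 1.8, 5.0, 2.90, -20.0)
y_obs = y_true * (1.0 + 0.05 * rng.normal(size=T.size))

popt, pcov = curve_fit(ratio_model, (T, a), y_obs,
                       sigma=0.05 * y_obs, absolute_sigma=True,
                       p0=[1.0, 0.0, 3.0, 0.0])
print("continuum D0 =", popt[0], " D2 =", popt[2])
```

The result (1.8 ± 1.5)(T_c/T)^(2.90 ± 0.65) quoted in section 4 corresponds to the continuum parameters of a fit of this kind applied to the actual lattice data.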
In table 4 we report the best fit values obtained performing the fit in the region T > T cut for some T cut values; best fit curves are reported in figure 4 as well, together with a band corresponding to the continuum extrapolation. It is remarkable that most of the lattice artifacts disappear when one considers, in place of χ itself, the ratio χ(T, a)/χ(T = 0, a), whose dependence on the lattice spacing is indeed quite mild. That is clearly visible in figure 5. Also in this case we have adopted a fit ansatz similar to eq. (3.7) JHEP03(2016)155 which well fits all data in the range T > 1.2 T c with a χ 2 /dof 1. The best fit curves and the continuum extrapolation are shown in figure 5. The best fit parameters, again for different T cut values, are reported in table 5. It is important to note, in order to assess the JHEP03(2016)155 T > 1.2T c one obtains for the exponent the value −0.674 (38), to be compared with D 2 /4 from the first line of table 5 and A 2 from the first line of table 4. However in the following analysis we will refer to results obtained through the ratio χ(T )/χ(0), which is the quantity showing smaller finite cut-off corrections. Let us now comment on our results for the topological susceptibility. For temperatures below or around T c , the temperature dependence is quite mild and, for temperatures up to T 1.2 T c , even compatible with the prediction from ChPT, which is reported in figure 5 for comparison. Then, for higher values of T , a sharp power law drop starts. However, it is remarkable that the power law exponent is smaller, by more than a factor two, with respect to the instanton gas computation, eq. (1.8), which predicts D 2 −8 in the case of three light flavors (the dependence on the number of flavor is however quite mild). DIGA is expected to provide the correct result in a region of asymptotically large temperatures, which however seems to be quite far from the range explored in the present study, which goes up to T 4 T c . This is in sharp contrast with the quenched theory, where the DIGA power law behavior sets in at lower temperatures [13,22]. In order to allow for a direct comparison, we have reported the DIGA prediction in figure 5, after fixing it by imposing χ(T c ) = χ(0) as an overall normalization. As we discuss in the following section, the much milder drop of χ as a function of T has important consequences for axion cosmology. A determination of the topological susceptibility has been presented in ref. [72], based on the Domain Wall discretization, in the range T /T c ∼0.9-1.2. Since the data have been produced with different lattice spacings at different temperatures, it is however difficult to compare their results with ours. An extended range of T has been explored in ref. [73] in the presence of twisted mass Wilson fermions, reporting a behavior similar to that observed in this study, although with larger values of the quark masses (corresponding to a pion mass m π ∼ 370 MeV). The comparison is performed in figure 8, and shows a reasonable JHEP03(2016)155 agreement if results from ref. [73] are rescaled according to the mass dependence expected from DIGA, 2 i.e. χ(T ) ∼ m 2 q ∼ m 4 π . Let us now turn to a discussion of our results for the coefficient b 2 (defined in eq. (1.5)), which is related to the non-gaussianity of the topological charge distribution and gives information about the shape of the θ-dependent part of the free energy density. Data for b 2 are reported in figure 9. 
For T < T c this observable is too noisy to give sensible results but a reasonable guess, motivated by what happens in the quenched case and by the ChPT prediction, is that b 2 is almost temperature independent for T T c , like the topological susceptibility. In the high temperature region b 2 can instead be measured with reasonable accuracy and, due to the peculiar dependence of the noise on this observable on χV, data on the smaller lattice spacing turned out to be significantly more precise than the others (see the discussion in ref. [58]). While data clearly approach the dilute instanton gas prediction b DIGA 2 = −1/12 at high temperature, deviations from this value are clearly visible on all the lattice spacings for T 1.3 T c and, for the smallest lattice spacing, also up to T ∼ 2.5 T c . This is in striking contrast to what is observed in pure gauge theory, where deviations from −1/12 are practically absent for T 1.15 T c , with a precision higher than 10% [13][14][15][16]. As discussed in the introduction deviations from b DIGA 2 cannot be simply ascribed to a failure of perturbation theory (like e.g. for the behavior of χ(T )) but are instead unambiguous indications of interaction between instantons. Another difference with respect to the quenched case is that in the pure gauge theory (with both gauge groups 2 One might wonder why DIGA should work for the mass dependence of χ and not for its dependence on T . A possible explanation is that, while the temperature dependence stems from a perturbative computation, the mass dependence comes from the existence of isolated zero modes in the Dirac operator, i.e. from the very hypothesis of the existence of a dilute gas of instantons, which seems to be verified already at moderately low values of T , see the following discussion on b2, but not at T ∼ Tc. SU(N ) and G 2 , see [14,15]) the asymptotic value is approached from below, while in the present case it is approached from above. These peculiar features can be related to a different interaction between instantons mediated by light quarks, as it is clear from the following discussion. JHEP03(2016)155 To describe deviations from the instanton gas behavior it is convenient to use the parametrization of F (θ, T ) introduced in ref. [14]: Indeed it is not difficult to show that every even function of θ of period 2π can be written in this form, and the main advantage of this form is that the value of coefficient c 2n influences only b 2j with j ≥ n: in particular (3.10) Since the coefficients c 2n parametrize deviations from the instanton gas that manifest themselves only in higher-cumulants of the topological charge, it is natural to interpret eq. (3.9) as a virial expansion, in which the role of the "density" is played by the first coefficient (i.e. the topological susceptibility χ), and to introduce the dimensionless coefficients d 2n by where χ(T = 0) was used just as a dimensional normalization and one expects a mild dependence of d 2(n−1) (T ) on the temperature, since the strongly dependent component , (3.12) which nicely describes the b 2 data for the smallest lattice spacing using d 2 = 0.80 (16), see figure 9, where the expression χ(T )/χ(T = 0) (T c /T ) 2.7 was used. In the spirit of a virial expansion interpretation of eq. (3.9), the coefficient d 2 can be considered as the lowest order interaction term between instantons. 
In particular, a positive value of d 2 corresponds to an attractive potential, which is in sharp contrast with the pure gauge case, where a repulsive, negative value of d 2 is observed. This peculiar difference can be surely interpreted in terms of effective instanton interactions mediated by light quarks, which are likely also responsible for the much slower convergence, with respect to the quenched case, to the DIGA prediction. Implications for axion phenomenology The big departure of the results for the topological susceptibility at finite temperature from the DIGA prediction has a strong impact on the computation of the axion relic abundance. In particular the model independent contribution from the misalignment mechanism is determined to be [63] Ω mis a = 86 33 where Ω γ and T γ are the present abundance and temperature of photons while n a /s is the ratio between the comoving number density n a = m a a 2 and the entropy density s computed at a late time t such that the ratio n a /s became constant. The number density n a can be extracted by solving the axion equation of motion a + 3Hȧ + V (a) = 0 , (4.2) The temperature (and time) dependence of the Hubble parameter H is determined by the Friedmann equations and the QCD equation of state. The biggest uncertainties come therefore from the temperature dependence of the axion potential V (a). At high temperatures the Hubble friction dominates over the vanishing potential and the field is frozen to its initial value a 0 . As the Universe cools the pull from the potential starts winning over the friction (this happens when T ≈ T osc , defined as m a (T osc ) ≈ 3H(T osc )) and the axion starts oscillating around the minimum. Shortly after H becomes negligible and the mass term is the leading scale in eq. (4.2). In this regime the approximate WKB solution has the form where R(t) is the cosmic scale factor. Since the energy density is given by ρ a ∼ m 2 a A 2 /2, the solution (4.3) implies that what is conserved in the comoving volume is not the energy JHEP03(2016)155 Figure 10. Values of the axion decay constant f a as a function of the initial field value θ 0 = a 0 /f a such that the axion misalignment contribution matches the full or a tenth of the observed dark matter abundance (red band or dotted green line respectively). When the PQ symmetry is broken only after inflation the axion abundance is reproduced by choosing θ 0 ≈ 2.2, i.e. the vertical blue dashed line. density but N a = ρ a R 3 /m a , which can be interpreted as the number of axions [5][6][7]. Through the conservation of the comoving entropy S, it follows that n a /s becomes an adiabatic invariant. Hence, it is enough to integrate the equation of motion (4.2) in the small window around the time when T ≈ T osc . We integrated numerically eq. (4.2) in the interval between the time when m a = H/10 to that corresponding to m a = 2400H and extract the ratio n a /s when m a ∼ 300H, namely a factor a hundred since the oscillation regime begins. The value for T osc varies from T c to several GeV depending on the axion decay parameter f a and the temperature dependence of the axion potential. More details about this standard computation can be found for example in [63,74]. In order to estimate the uncertainty in the results given below we varied the fitting parameters of the topological susceptibility D 2 , D 0 and those relative to the QCD equation of state [37] within the quoted statistical and systematic errors. 
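As an aid to readers who want to reproduce the mechanics of this estimate, the following sketch integrates the misalignment equation of motion using the central values of the fit just discussed and the single-cosine potential adopted in the text below. It is deliberately simplified and is not the paper's computation: it assumes radiation domination with a constant effective number of degrees of freedom (the paper instead uses the lattice QCD equation of state of ref. [37]), takes χ(T) constant below T_c, and the numerical constants (g*, f_a, θ₀, χ(0)) are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

M_PL   = 1.22e19      # Planck mass [GeV]
G_STAR = 60.0         # effective d.o.f., assumed constant (used for H and s)
T_C    = 0.155        # pseudo-critical temperature [GeV]
CHI0   = 0.075**4     # zero-T susceptibility, (75 MeV)^4 [GeV^4] (illustrative)
D0, D2 = 1.8, 2.90    # central values of the chi(T)/chi(0) power-law fit
F_A    = 1.0e12       # axion decay constant [GeV] (illustrative)
THETA0 = 2.2          # initial misalignment angle (illustrative)

def chi(T):
    # susceptibility: constant below Tc, power-law suppression above
    return CHI0 * min(1.0, D0 * (T_C / T) ** D2)

def m_a(T):
    # temperature-dependent axion mass, m_a^2(T) = chi(T)/f_a^2
    return np.sqrt(chi(T)) / F_A

def hubble(T):
    # radiation-dominated Hubble rate
    return 1.66 * np.sqrt(G_STAR) * T**2 / M_PL

def rhs(T, y):
    # EOM theta'' + 3H theta' + m_a^2 sin(theta) = 0, rewritten with the
    # temperature as evolution variable via dT/dt = -H*T (constant g*)
    theta, dtheta_dt = y
    dT_dt = -hubble(T) * T
    d2theta_dt2 = -3.0 * hubble(T) * dtheta_dt - m_a(T) ** 2 * np.sin(theta)
    return [dtheta_dt / dT_dt, d2theta_dt2 / dT_dt]

# start deep in the frozen regime (m_a = H/10) and stop well inside the
# oscillating regime (m_a = 300 H), mirroring the window quoted in the text
T_start = brentq(lambda T: m_a(T) - hubble(T) / 10.0, 0.2, 100.0)
T_stop  = brentq(lambda T: m_a(T) - 300.0 * hubble(T), 0.2, 100.0)

sol = solve_ivp(rhs, (T_start, T_stop), [THETA0, 0.0], method="DOP853",
                rtol=1e-8, atol=[1e-10, 1e-24])  # atol below typical |y|

theta, dtheta_dt = sol.y[:, -1]
T_end = sol.t[-1]
rho_a = F_A**2 * (0.5 * dtheta_dt**2 + m_a(T_end)**2 * (1.0 - np.cos(theta)))
entropy = (2.0 * np.pi**2 / 45.0) * G_STAR * T_end**3
print(f"window {T_start:.2f} -> {T_stop:.2f} GeV, n_a/s = {rho_a / m_a(T_end) / entropy:.3e}")
```

Converting the resulting n_a/s into Ω_a through eq. (4.1) and scanning f_a and θ₀ then reproduces, at the qualitative level, the behavior shown in figure 10.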
Given that b 2 (T ) converges relatively fast to the value predicted by a single cosine potential, we can assume V (a) = −χ(T ) cos(a/f a ) for T T c . Using the most conservative results for the fit of χ(T ), i.e. χ(T )/χ(0) = (1.8 ± 1.5)(T c /T ) 2.90±0.65 , in figure 10 we plot the prediction for the parameter f a as a function of the initial value of the axion field θ 0 = a 0 /f a assuming that the misalignment axion contribution make up for the whole observed dark matter abundance, Ω DM = 0.259(4) [75]. We also plot the case where the axion misalignment contribution accounts only for part (10% for definiteness) of the dark matter abundance. In some cases the axion field acquires all possible values within the visible horizon, therefore the initial condition to the eq. where the errors correspond respectively to the uncertainties on the fit parameters D 2 , D 0 , and to the error in the QCD equation of state. Note that, in this case, also other contributions from topological defects are expected to contribute. In case their effects are not negligible, or if axions are not responsible for the whole dark matter abundance, the value above should just be read as an upper bound to the PQ scale. The value in eq. (4.4) is almost one order of magnitude bigger than the one computed with instantons techniques (f a 0.2 · 10 12 GeV), which in fact corresponds to the value we instead get for one tenth of the dark matter abundance. Some few remarks are in order. While the uncertainties on the axion mass fit we used are not small, the final axion abundance is rather insensitive to them and the prediction for f a has therefore a good precision. The results above, however, rely on the extrapolation of the axion mass fit formula up to few GeV, where no lattice data is available. In particular for the value of f a in eq. (4.4) the axion field starts oscillating around T osc = 4.3 GeV. An even longer extrapolation is required for f a = 1.67 · 10 11 GeV, corresponding to Ω mis a = 0.1 Ω DM , where the axion starts oscillating around T osc = 7.2 GeV. Given the stability of the fit in the accessible window of temperatures (see figure 6) big changes in the axion mass behavior are not expected. Still, extending the analysis to even higher temperatures would be extremely useful to control better the extrapolation systematics. Larger values of f a corresponds instead to smaller T osc , for example for f a = 10 13 GeV, T osc = 2.2 GeV while for f a = 10 14 GeV, T osc = 1.1 GeV. Above these values no extrapolation is required and the corresponding results are free from the extrapolation uncertainties. Conclusions We studied the topological properties and the θ-dependence for N f = 2 + 1 QCD along a line of constant physics, corresponding to physical quarks masses, and for temperatures up to 4 T c , where T c = 155 MeV is the pseudo-critical temperature at which chiral symmetry restoration takes place. We explored several lattice spacings, in a range 0.05 − 0.12 fm, in order to perform a continuum extrapolation of our results. Our investigation at even smaller lattice spacings has been hindered by a severe slowing down in the decorrelation of the topological charge. At zero temperature we observe large cut-off effects for the topological susceptibility. Nevertheless we are able to perform a continuum extrapolation, obtaining from the three finest lattice spacings χ 1/4 = 73(9) MeV, in reasonable agreement with the ChPT prediction in the case of degenerate light flavors, χ 1/4 ChPT = 77.8(4) MeV. 
At finite temperature we observe that cut-off effects are drastically reduced when one considers the ratio χ(T )/χ(T = 0), which turns out to be the most convenient quantity to perform a continuum extrapolation. The agreement with ChPT persists up to around T c . Regarding the shape of the θ-dependent part of the free energy density, and in particular the lowest order coefficient b 2 , a visible convergence to the instanton gas prediction is observed in the explored range. With respect to the pure gauge case, the convergence of b 2 to DIGA is slower and the deviation is opposite in sign [14,15]. This suggests a different interaction between instantons right above T c , namely repulsive in the quenched case and attractive in full QCD [76]. The deviations from the dilute instanton gas predictions that we found in the present study have a significant impact on axions, resulting in particular in a shift of the axion dark matter window by almost one order of magnitude with respect to estimates based on DIGA. The softer temperature dependence of the topological susceptibility also changes the onset of the axion oscillations, which would now start at a higher temperatures (T ∼ 4 GeV). An important point is that this seems an effect directly related to the presence of light fermionic degrees of freedom: indeed, pure gauge computations [13,22] observe a power-law behavior in good agreement with DIGA in a range of temperatures comparable to those explored in the present study. One might wonder whether a different power law behavior might set in at temperatures higher than 1 GeV. That claims for future studies extending the range explored by us. The main obstruction to this extension is represented by the freezing of the topological modes at smaller lattice spacings which would be necessary to investigate such temperatures (a < 0.05 fm). Such an obstruction could be overcome by the development of new Monte-Carlo algorithms. Proposals in this respect have been made in the past [77,78] and some new strategies have been put forward quite recently [79,80]. In view of the exploration of higher temperatures, one should also consider the inclusion of dynamical charm quarks.
9,716
2016-03-01T00:00:00.000
[ "Physics" ]
Analysis of As2S3-Ti:LiNbO3 Taper Couplers Using Supermode Theory In this work, we develop a simulation method based on supermode theory and transfer matrix formalism, and then apply it to the analysis and design of taper couplers for vertically integrated As2S3 and Ti:LiNbO3 hybrid waveguides. Test structures based on taper couplers are fabricated and characterized. The experimental results confirm the validity of the modeling method, which, in turn, is used to analyze the fabricated couplers. Introduction As the study of integrated optics proceeds, several schemes with regard to materials and structures have been developed, such as silicon-on-insulator, chalcogenide glass waveguides, III-V semiconductor waveguides and titanium diffused waveguides. While different schemes have their own merits and shortcomings, reciprocal benefits can be obtained from their integration, namely, hybrid waveguides. For example, preliminary results were reported on As2S3-on-Ti:LiNbO3 hybrid waveguide devices [1,2], which benefit from the high index contrast of As2S3 and easy connection with commercial single mode fibers. For the integration of different waveguides, light coupling is the key. A directional coupler is the simplest functional device to couple light by transferring energy between two waveguides. However, in practice its coupling efficiency can be fairly low due to phase mismatch and small tolerance to fabrication errors. Alternatively, grating and taper couplers are used, and taper couplers are generally preferred owing to their simplicity in design and fabrication. Despite diverse forms, the general taper coupler is composed of two parallel waveguides placed in close proximity: one is uniform, whereas at one end of the other one the width is gradually varied. The two ends of the taper match the wave guiding properties of the two waveguides, so the mode is transformed gradually from one into the other during propagation in the taper. Although the principle is intuitively quite simple, the design in most cases is conservative because of the lack of precise modeling guidelines and accurate modeling tools [3]. A lot of theoretical study has been carried out to investigate taper couplers, and different approaches have been developed. Lee et al. proposed an equivalent waveguide concept employing a conformal mapping method, which was combined with the Beam Propagation Method (BPM) to conduct the analysis [4]. In [3], tapered waveguides were analyzed by considering the whole taper as a succession of short linear taper fragments and modeling each of them using a two-dimensional BPM that directly solves the Helmholtz equation. However, most of the early work focused on correcting simulation methods to improve the accuracy, and the underlying physical mechanism governing the power transfer was not described [5]. Therefore, few guidelines can be found for designers. Thus more and more researchers began to look into taper couplers from the angle of supermodes, i.e. local modes. In [5], Xia et al. defined and distinguished between resonant coupling and adiabatic coupling from the viewpoint of supermodes [5]. Resonant couplers are compact and simple but highly sensitive to unavoidable variations during fabrication [5]. Adiabatic couplers, on the contrary, don't require exact control of taper length and gap, but need longer lengths [6]. Sun et al.
conducted a series of studies on the behavior of supermodes in adiabatic couplers [6,7] and derived a mathematical expression for the shortest adiabatic tapers [6]. While such theoretical work has contributed a lot to our understanding of taper couplers, the study of practical application and modeling issues is still lacking. In practice, we often need to balance the taper length and the coupling efficiency, since we may not have sufficient space to fulfill the adiabatic condition, and we may want a certain coupling efficiency that is not necessarily 100%. Mach-Zehnder interference filters, for example, typically use 3 dB couplers. Moreover, the materials and structures used may limit the coupling. Thus, there are a lot of efficient but non-adiabatic taper couplers desired in practice. In published papers most simulations were conducted based on the beam propagation method (BPM) [8]. BPM calculates the electromagnetic fields during the light propagation process and gives distributions of the electric and magnetic fields. It is highly accurate as long as certain assumptions are met. However, limited knowledge of the underlying mechanism can be obtained from the simulation process, so it is widely used as a means of examining the designed taper coupler rather than guiding the design in the first place. Alternatively, the modeling of taper couplers can be based on the concept of modes using the coupled mode theory, which can provide insight into the mode evolution in the coupler and thus provide immediate guidelines for design. Modeling Methods A taper coupler, which consists of two adjacent waveguides, can be regarded as a modified directional coupler. In each waveguide, only one mode is allowed to propagate. The coupled mode theory analyzes the coupled waveguides by taking one waveguide as the subject and studying the influence of the perturbation imposed by the presence of the other one. The supermode theory, however, views the coupled waveguides as a whole system, i.e. a composite two-waveguide structure, and studies the normalized local modes of the system, which are called supermodes. Nevertheless, both theories describe mode coupling for scenarios in which the coupled waveguides are invariant along the propagation direction, whereas the taper coupler is a varying structure in which the width of one of the waveguides changes continuously along the propagation direction. However, the coupler can be divided into a succession of infinitely short sections. The length of each section is so small that the width can be regarded as invariant. So the simulation of a taper coupler can be divided into two steps: modeling of the individual divisions and cascading of the individual models. For each division, as the width is deemed constant, it is actually a simple directional coupler, in which there are a fundamental supermode and a first order supermode, named the even mode (E_e) and the odd mode (E_o), respectively, according to the symmetry of their field distributions. The total field is a linear combination of the even and odd modes.
If the propagation constants of the modes in the individual waveguides are the same, namely, the waveguides are phase matched, the two lobes of the even and odd modes have the same size. If the two propagation constants are different, that is, the phases are mismatched, the symmetry of the lobes of E_e and E_o is broken, and their shapes are different. When the phase mismatch is large, the two waveguides are effectively decoupled: a wave propagating in either one is virtually unaffected by the existence of the other, and the supermodes of the composite structure simply become the modes of the individual waveguides [9]. In eq. (1), δ is defined as the difference of the propagation constants of the two individual modes, while β_c is defined in the same way for the two supermodes. As shown in Figure 1, if δ is much smaller than 0, most of the energy of the even mode is located in waveguide 1, while if it is much larger than 0, most of the energy is located in waveguide 2. The opposite is true for the odd mode. So, the essence of taper coupling is to spatially transfer the energy of one supermode (the even mode) from one waveguide to the other by designing the tapered waveguide so that δ sweeps from a negative value to a positive value, while suppressing the coupling to the other supermode (the odd mode) [6]. The larger the range δ covers, the more thorough the energy transfer is. Ideally, δ changes from negative infinity to positive infinity, whereas in practice the range is determined by the materials and structures. Solving the coupled mode equations by substituting the general supermode solutions into them, we obtain the expressions of the supermodes and the relationship, eq. (2), between the phase mismatch of the supermodes (β_c) and that of the individual modes (δ) [7]. Once δ and β_c are known, the coupling strength κ [9] can be calculated. We then have a complete mathematical description of the model in terms of the parameters δ, κ and β_c. Following the same method, models of all the divisions in the taper coupler can be built. Subsequently, a transfer matrix formalism is derived to cascade all the models based on the coupled mode equations. In matrix form, the solution to the coupled mode equations, eq. (3), relates the fields at a position z to the input electric fields E_1(0) and E_2(0) in waveguides 1 and 2 through a 2 × 2 matrix whose elements are sine and cosine functions of β_c z combined with phase factors set by δ. Re-writing eq. (3) in vector form and substituting the vector expression into it yields the transfer matrix relating the model at one position to the model at the next. The modeling starts with the uncoupled waveguides, whose eigenmodes are computed individually without the presence of the other waveguide. The propagation constants of the Ti waveguide mode and the As2S3 waveguide mode are found to be β_1 and β_2, respectively. Then the model for the coupled system is built, and the even mode (β_e) and odd mode (β_o) are found, as Figure 2 shows. By multiplying the matrices in order, the models are cascaded. As a result, the electric field at any point z can be obtained from the known inputs E_1(0) and E_2(0). The algorithm is summarized in Table 1 (Algorithm of modeling the taper coupler).
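As an illustration of steps 1-5 of Table 1, the following is a minimal sketch of the section-wise transfer-matrix cascade. It assumes the standard codirectional coupled-mode solution for each constant-width section, with δ = β_1 − β_2 and supermode splitting β_c = sqrt(δ^2 + 4κ^2); this explicit matrix form, the toy dispersion β_2(w), and the value of κ are assumptions made for the example, not quantities taken from the paper.

```python
# Minimal sketch of the supermode/transfer-matrix cascade for a taper coupler.
# Assumption: standard codirectional coupled-mode solution per section, with
# delta = beta1 - beta2(w) and supermode splitting beta_c = sqrt(delta^2 + 4*kappa^2).
# The waveguide dispersion and kappa below are hypothetical placeholders.
import numpy as np

def section_matrix(delta, kappa, dz):
    """2x2 transfer matrix of a constant-width section of length dz."""
    g = 0.5 * np.sqrt(delta**2 + 4.0 * kappa**2)   # half the supermode splitting
    d = 0.5 * delta
    c, s = np.cos(g * dz), np.sin(g * dz)
    return np.array([
        [(c - 1j * d / g * s) * np.exp(1j * d * dz), -1j * kappa / g * s * np.exp(1j * d * dz)],
        [-1j * kappa / g * s * np.exp(-1j * d * dz), (c + 1j * d / g * s) * np.exp(-1j * d * dz)],
    ])

# Hypothetical linear taper: width 1.0 -> 3.5 um over 5 mm, divided into short sections.
L, n_sec = 5000.0, 5000                         # taper length [um], number of divisions
widths = np.linspace(1.0, 3.5, n_sec)
dz = L / n_sec

beta_ti = 8.90                                  # rad/um, constant Ti waveguide (placeholder)
beta_as2s3 = 8.60 + 0.20 * widths               # rad/um, toy dispersion vs. width (placeholder)
kappa = 0.01                                    # rad/um, coupling strength (placeholder)

E = np.array([1.0 + 0j, 0.0 + 0j])              # launch all power in the Ti waveguide (component 0)
for b2 in beta_as2s3:
    E = section_matrix(beta_ti - b2, kappa, dz) @ E

print(f"power left in Ti waveguide   : {abs(E[0])**2:.3f}")
print(f"power coupled to As2S3 guide : {abs(E[1])**2:.3f}")
print(f"total power (unitarity check): {abs(E[0])**2 + abs(E[1])**2:.3f}")
```

Sweeping δ through zero slowly compared with κ transfers nearly all of the launched power into the second waveguide; shortening the taper or reducing κ in this sketch reproduces the under-coupling and residual oscillation discussed below.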
In step 2, due to the complexity of the waveguide structure, the computer software FIMMWAVE (Photon Design Ltd.) is used to model each section, i.e., to compute the mode parameters. The film mode matching method is applied as the mode solver. It is well suited for structures consisting of large uniform areas, such as As2S3 rectangular waveguides. The resolution and the size of the simulation window are tested to prevent artificial errors. Simulation The approximation of a width-varying waveguide with a sequence of constant-width waveguides is mathematically equivalent to the approximation of a continuous integral with a discrete summation, which inevitably induces error. As the matrices cascade, the error of each section is passed on and combines with that of the next. Consequently, such accumulation of errors will manifest at the end of the taper, even if only a very small error exists in the intermediate models. Simulation experiments show that the discretization spacing is critical to the numerical error: the larger the error, the smaller the spacing needs to be, and the heavier the required computational load. In order to reduce the error in the first place, a trapezoidal approximation is adopted, in which each division is represented by the average of the widths at its two ends. Simulation Results The structure of an As2S3-Ti:LiNbO3 coupler is illustrated in Figure 3. A titanium diffused waveguide is formed in the lithium niobate substrate (Ti:LiNbO3). On the substrate surface is a tapered As2S3 rectangular waveguide, which is separated from the titanium diffused waveguide by a few microns. Both waveguides operate in the single-mode condition. In the Ti:LiNbO3 fabrication process, the LiNbO3 material under the Ti pattern rises up from the substrate surface during titanium diffusion, resulting in a 0.1 μm high bump. In order to avoid the scattering loss caused by the rough surface of the bump, the As2S3 waveguide is placed to the side of the bump (side coupling) instead of on top. For simplicity, air cladding is used. The height of the As2S3 waveguide is 470 nm. The final width of the As2S3 waveguide is determined to be 3.5 μm, in order to have good mode confinement in the As2S3 waveguide. As the width of the As2S3 taper varies, the mode propagation constants in each section are plotted against the average width of that section in Figure 4. We see that the propagation constant of the As2S3 mode increases gradually as its width becomes larger, whereas that of the Ti mode remains constant due to the invariable Ti waveguide width. The propagation constant of the even mode coincides with that of the Ti mode first and then gradually follows the trend of the As2S3 mode. On the contrary, for the odd mode, the propagation constant goes from the As2S3 mode to the Ti mode. During this process, there is a point at which the propagation constants of the As2S3 mode and the Ti mode are equal, i.e., where the phase mismatch δ equals 0. From the graph, it is the point where the β-As2S3 and β-Ti curves cross, corresponding to a width of 1.47 μm, called the critical width. It is the critical point at which the two waveguides are phase matched, and the energy is equally distributed between the two waveguides for both the even and the odd mode. In other words, it can be regarded as the mid-point of the mode coupling process from the Ti waveguide to the As2S3 waveguide. As the width of the As2S3 waveguide increases, the rate of increase of the propagation constant β_2 gets smaller. That means the phase mismatch δ, the difference between the propagation constants of the two waveguides, will eventually cease to grow. The normalized phase mismatch γ [6] is introduced to characterize this variation, as shown in eq. (5) and plotted in Figure 5.
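As a small aside on the critical width identified above, the sketch below shows how it can be located from the section-wise mode data by interpolating the crossing of the β-As2S3(w) and β-Ti curves. The tabulated propagation constants are hypothetical placeholders standing in for the mode-solver output; only the crossing is arranged to land near the quoted 1.47 μm.

```python
# Minimal sketch: locate the critical (phase-matching) width where
# beta_As2S3(w) crosses the constant beta_Ti.  The tabulated values are
# hypothetical placeholders for the mode-solver output, not data from the paper.
import numpy as np

widths = np.array([1.0, 1.2, 1.4, 1.6, 1.8, 2.0])               # As2S3 width [um]
beta_as2s3 = np.array([8.70, 8.79, 8.87, 8.95, 9.01, 9.06])      # rad/um (placeholder)
beta_ti = 8.90                                                    # rad/um (placeholder)

# Phase mismatch delta(w) = beta_Ti - beta_As2S3(w); find its sign change.
delta = beta_ti - beta_as2s3
i = np.where(np.sign(delta[:-1]) != np.sign(delta[1:]))[0][0]

# Linear interpolation of the zero crossing between the bracketing samples.
w_crit = widths[i] + (widths[i + 1] - widths[i]) * delta[i] / (delta[i] - delta[i + 1])
print(f"critical width ~ {w_crit:.2f} um")
```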
Among various types of taper geometries, the linear taper is the most straightforward and provides insight into the general taper design. Figure 6 shows the coupling efficiency of linear tapers of different lengths, with the width varying from 1.0 μm to 3.5 μm. The squares stand for the coupling efficiency and the bars represent the magnitude of the oscillation. There is an optimum point at which the coupling efficiency reaches its maximum of 96%, when the length is 5 mm. The inset curve shows the percentage of energy coupled as light propagates through a 5 mm long linear taper. We can see that it consists of a monotonically ascending part and a subsequent oscillating part. The coupling is mostly contributed by the former part, while the latter is due to resonance effects. For the even mode, the larger γ is, the more energy is located in the As2S3 waveguide and the less in the Ti waveguide, while the opposite holds for the odd mode. Since the even mode is the mode to couple, the energy remaining in the Ti waveguide imposes an ultimate limit on the coupling efficiency. From the curve of γ in Figure 5, we learn that at the end of the taper γ is 2.59. Because γ is not large enough, there is still coupling between the two waveguides. Such coupling degrades the coupling efficiency and causes it to oscillate. The behavior of the coupler in this region is similar to that of a resonant coupler. As a result, a certain amount of energy flows back and forth between the two waveguide modes. From the viewpoint of supermode theory, the oscillation is a result of beating between the even and odd modes. Although the even mode is the desired one, the coupling of the odd mode is not completely suppressed, for example if the length of the taper is not sufficiently long according to the adiabatic criterion in [6]. When the odd mode propagates in the taper, there is coupling between the even and odd modes and a small amount of energy flows back and forth constantly. Since at the end of the taper the majority of the energy of the even mode is in the As2S3 waveguide and that of the odd mode is in the Ti waveguide, there is a constant energy flow between the two waveguides, and consequently the coupling efficiency oscillates. In the presence of mode beating, a longer taper does not necessarily give better coupling. There exists an optimum length for a taper with a fixed width variation: if the taper is shorter than that, the mode is under-coupled since it is far from satisfying the adiabatic criterion for 100% coupling; if it is considerably longer than that, the coupling efficiency is degraded by the resonant effect, as Figure 6 shows. In order to reduce the problem of mode beating, we must enlarge γ, either by increasing the phase mismatch δ or by decreasing the coupling strength κ. δ is limited by the properties of the materials, whereas κ can be controlled by the structure. For example, κ can be reduced by introducing a gap between the As2S3 waveguide and the Ti waveguide.
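The roughly 96% ceiling can be rationalized with a short calculation. The sketch below uses standard two-mode expressions, assuming γ = δ/(2κ) and an even-mode power fraction of 0.5·(1 + γ/sqrt(1 + γ^2)) in the As2S3 guide; these formulas are textbook coupled-mode results assumed here for illustration, not equations reproduced from the paper.

```python
# Minimal sketch: residual even-mode power tied to the Ti waveguide at the taper end,
# using standard two-mode expressions (assumptions, not equations from the paper):
#   gamma = delta / (2*kappa)
#   even-mode fraction in the As2S3 guide = 0.5 * (1 + gamma / sqrt(1 + gamma**2))
import numpy as np

def even_mode_fraction_in_as2s3(gamma):
    return 0.5 * (1.0 + gamma / np.sqrt(1.0 + gamma**2))

gamma_end = 2.59                      # value read off the gamma curve at the taper end
frac = even_mode_fraction_in_as2s3(gamma_end)
print(f"even-mode power in As2S3 guide : {frac:.3f}")
print(f"power still tied to Ti guide   : {1 - frac:.3f}")

# Larger gamma (bigger delta or smaller kappa) pushes the limit closer to 1:
for g in (2.59, 5.0, 10.0):
    print(f"gamma = {g:5.2f} -> max transferable fraction ~ {even_mode_fraction_in_as2s3(g):.3f}")
```

With γ = 2.59 at the taper end, roughly 3-4% of the even-mode power remains associated with the Ti waveguide, which is consistent with the maximum efficiency of about 96% reported for the 5 mm linear taper.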
Although the coupling efficiency can be as high as 96%, it takes quite a few millimeters to get a decent coupling efficiency with linear tapers, which is not acceptable for an ultra-compact design. According to the above analysis, efficient coupling takes place in the first part of the taper, where the As2S3 waveguide expands across the critical width and correspondingly the phase mismatch δ changes from a negative value to a positive one. That contributes to efficient coupling, and we want this part to be sufficiently long. Once most of the energy has entered the As2S3 waveguide, the rest of the taper can be shortened. As a consequence, we have arrived at a two-stage taper (Figure 3). Furthermore, since the end width of the first stage (transition width) can now be a much smaller value, the rate of width change is reduced considerably. Simulation shows that for the first part of a two-stage taper, if the width varies from 1.0 μm to 1.6 μm (leaving some leeway for fabrication deviations) over a length of 2 mm, the rate of width increase is 3 × 10^-4, which is equivalent to an 8.3 mm long linear taper. Along with a 1 mm long second part, with the width varying from 1.6 μm to 3.5 μm, the total length is 3 mm. The coupling efficiency can still reach above 90%, whereas the total length is reduced by 64%. Experiments To test the As2S3-Ti:LiNbO3 taper coupler design, S-shaped structures are fabricated and tested on a near-IR measurement setup. As shown in Figure 7, each structure is composed of two taper couplers and an S-shaped As2S3 waveguide connecting them. The taper couplers follow the two-stage taper coupler design. The device is fabricated using photolithography and dry-etch technology. The LiNbO3 substrate is a birefringent crystal with refractive indices n_o = 2.2119 and n_e = 2.1386 (λ = 1531 nm), used in an x-cut, y-propagation configuration. The titanium diffused waveguide is fabricated through sputtering of a 95 nm thick titanium layer, patterning into a 7 μm wide strip with photolithography and reactive ion etching (RIE), diffusion for 9 hours at 1025˚C, and optical polishing of the end-facets. For the As2S3 waveguide fabrication, a 0.47 μm thick As2S3 film is deposited on the titanium waveguide sample using an RF sputtering system, along with a protective layer of SiO2 and Ti, which protects the As2S3 from being dissolved by commercial alkaline-based developers. Then projection photolithography is carried out, and a 1.0 μm wide taper tip can be produced; nevertheless, the subsequent hardbake causes an expansion to a certain degree. After that, the Ti-SiO2-As2S3 stack is etched through to the substrate by RIE. Finally, the Ti-SiO2 is removed in diluted hydrofluoric acid solution. The hardbake time is prolonged in order to obtain smoother sidewalls through the resist reflow process, which, however, causes an expansion of the As2S3 waveguide to a certain degree, up to 0.5 μm. The average tip width (i.e. the initial width) of the tapered As2S3 waveguide after fabrication is 1.3 μm; depending on the process conditions, the actual value may be somewhat smaller or larger. Measurement results confirm the function of the taper coupler following the design in section III (Table 2).
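Returning to the two-stage design described above, the quoted numbers can be checked with a few lines of arithmetic; the sketch below simply restates values given in the text (widths and stage lengths) and recomputes the width-change rate, the equivalent linear-taper length, and the overall length reduction.

```python
# Quick check of the two-stage taper arithmetic quoted in the text.
w_tip, w_transition, w_end = 1.0, 1.6, 3.5   # um
l_stage1, l_stage2 = 2000.0, 1000.0          # um

rate_stage1 = (w_transition - w_tip) / l_stage1
print(f"stage-1 width-change rate      : {rate_stage1:.1e}  (um per um)")

# A linear taper covering the full 1.0 -> 3.5 um range at the same rate:
l_equiv = (w_end - w_tip) / rate_stage1
print(f"equivalent linear taper length : {l_equiv / 1000:.1f} mm")

l_total = (l_stage1 + l_stage2) / 1000.0
print(f"two-stage total length         : {l_total:.1f} mm")
print(f"length reduction               : {100 * (1 - l_total * 1000 / l_equiv):.0f} %")
```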
Generally, the cross port accounts for 50% to 90% of the total output power. Neglecting the excess loss caused by propagation in the low-loss As2S3 and Ti waveguides, the average coupling efficiency is 73.2%. However, prior to extracting the precise coupling efficiency, the propagation loss and bending loss in the As2S3 waveguide have to be calibrated first. Many experiments need to be done for that, and the work is still ongoing. Instead of working at a single wavelength, these practical taper couplers are designed to work over a wavelength range. Accordingly, their coupling behavior in the frequency domain is studied. The measured spectrum at the cross port is presumed to have the same trend as the coupling spectrum, with an offset from the exact values. This provides information about the taper couplers in the frequency domain and can be used as another means to test our simulation method. A typical measured spectrum, along with the simulation results, is shown in Figure 9. In the simulation the wavelength is correspondingly scanned from 1520 nm to 1600 nm, at an interval of 2 nm. The results show that, though the taper coupler exhibits a certain degree of wavelength dependency, it has high coupling efficiency over a broad bandwidth. From the curve, we can see that the period of oscillation is less than 10 nm, and longer wavelengths have a larger oscillation period than shorter wavelengths: both features are captured by the simulation. The oscillation of the coupling curve is a strong indication of mode beating, while the phenomenon that longer wavelengths have a slightly larger oscillation period possibly comes from waveguide dispersion: the wavelength-dependent propagation constant. Simulation shows that when the wavelength varies from 1530 nm to 1540 nm, the confinement of the mode in the As2S3 waveguide changes from 0.4536 to 0.4459 and the effective index changes from 2.2345 to 2.2331. Consequently, the propagation constant changes from 9.1763 to 9.1110, decreasing by 0.7%. From the plot of γ in Figure 10, we can learn that different wavelengths have different critical widths, which shift to larger values as the wavelength increases. Such a change makes the mode at different wavelengths see the taper coupler slightly differently, and the energy transfer does not take place at the same location: the mode of a shorter wavelength couples before that of a longer wavelength does. From the inset plot of γ in Figure 10, we also see that as the wavelength increases, the rate of shift increases, confirming the presence of dispersion. Because of a 0.3 μm expansion during fabrication, the average tip width of the tapered As2S3 waveguide is 1.3 μm, and accordingly the transition width is 1.9 μm. Whether it is smaller or larger than that depends on the process conditions, which are hard to control; this is manifested in the measured coupling spectra, as shown in Figures 11 and 12. In Figure 11, there is a drop in coupling efficiency in the long wavelength region, while the model shows that if the tip width is reduced to 1.2 μm, so that the end width of the first stage is correspondingly 1.8 μm, such a coupling spectrum results. The phenomenon can be understood from the plot of γ in Figure 10: at the wavelength of 1600 nm, the critical width is read to be 1.87 μm, which is larger than the actual transition width (1.8 μm). Hence the transfer of the energy has not yet completed at the end of the first stage, and it resumes at the second stage, where the width varies very fast and considerable energy is coupled to the odd mode. Consequently, the coupling efficiency drops.
Similarly, the model explains the drop of coupling efficiency in the short wavelength region in Figure 12. Provided that the tip width is larger, e.g., 1.4 μm, then for short wavelengths such as 1525 nm the critical width is 1.32 μm, which is smaller than the initial tip width. As a result, the odd mode is excited at the input of the taper coupler, and the coupling efficiency in this wavelength region is degraded, as shown in the curve. From the above analysis, we see a tradeoff between the applicable wavelength range and the design parameters, which, on the other hand, provides a way of controlling the frequency domain behavior of taper couplers: by adjusting the transition width between the two stages we can cut off longer wavelengths, and by changing the tip width we can suppress the coupling of shorter wavelengths. Conclusion A modeling method for taper couplers is developed and applied to the study of As2S3-Ti:LiNbO3 taper couplers, which are generally not adiabatic but highly efficient in terms of practical use. Simulations show that for these practical tapers both adiabatic coupling and resonant coupling play an important role. There exists an optimum taper design with respect to the tip width, end width and length. A two-stage taper design can reduce the total length of the taper by 64% while keeping the coupling efficiency above 90%. Following these guidelines, test structures are fabricated. The measurement results agree well with the simulation results, suggesting a good coupling efficiency. Frequency domain analysis shows that the taper couplers work over a range of wavelengths, which can be controlled by adjusting the transition width and the tip width. Figure 1. The supermodes of a taper coupler. Table 1. Algorithm of modeling the taper coupler: (1) discretize the taper coupler into a sequence of sufficiently small divisions; (2) regard each section as a directional coupler and model it to obtain the mode propagation constants β_1, β_2, β_e and β_o, and compute δ, β_c and κ; (3) calculate the individual transfer matrix of each division based on the parameters δ, β_c and κ; (4) cascade all the divisions together by multiplying the matrices in order; (5) calculate the coupling efficiency. Figure 2. The fundamental mode of the Ti waveguide (a) and the As2S3 waveguide (b), and the odd (c) and even (d) modes of the coupled waveguides. Figure 3. Configuration of an As2S3-Ti:LiNbO3 taper coupler (two-stage taper design). The inset picture shows a top view. Figure 4. The propagation constants of the four modes. The inset picture shows them on a larger scale (from 0.6 μm to 4 μm). Figure 6. Coupling efficiency for tapers of different lengths, with the inset figure showing the coupling process of a 5 mm long taper, i.e., the coupling efficiency versus the location along the taper. Figure 7. S-bend structures for testing taper couplers. Figure 8. Influence of tip width variation (a) and the coupling process of a two-stage taper (b).
Table 2. Measurement results.
6,006
2012-12-28T00:00:00.000
[ "Physics" ]
Health-BlockEdge: Blockchain-Edge Framework for Reliable Low-Latency Digital Healthcare Applications The rapid evolution of technology allows the healthcare sector to adopt intelligent, context-aware, secure, and ubiquitous healthcare services. Together with the global trend of an aging population, it has become highly important to propose value-creating, yet cost-efficient digital solutions for healthcare systems. These solutions should provide effective means of healthcare services in both the hospital and home care scenarios. In this paper, we focused on the latter case, where the goal was to provide easy-to-use, reliable, and secure remote monitoring and aid for elderly persons at their home. We proposed a framework to integrate the capabilities of edge computing and blockchain technology to address some of the key requirements of smart remote healthcare systems, such as long operating times, low cost, resilience to network problems, security, and trust in highly dynamic network conditions. In order to assess the feasibility of our approach, we evaluated the performance of our framework in terms of latency, power consumption, network utilization, and computational load, compared to a scenario where no blockchain was used. Introduction The emergence of ubiquitous and pervasive computing and recent advancements in wearable and smart sensing technologies is revolutionizing the conventional modes of accessing and delivering healthcare services [1]. In addition to that, the advent of 5G and relevant enabling technologies provides several opportunities for the rapid developments of future healthcare systems [2,3]. For example, the traditional means of measuring patients' health parameters and vital signs are being replaced by automatic medical sensing technologies (medical sensors) [4]. Furthermore, technology development has facilitated healthcare-related processes, such as patient registration to the hospital, keeping track of their electronic medical/health records (EMR/EHR), and authorized access to these records. Future healthcare systems require a secure, trusted, and dynamic service and computing environment, i.e., "personalized and connected health" [5]. These smart and connected healthcare systems are expected to provide advanced medical services, such as advanced diagnostic, remote and real-time patient monitoring, efficient handling mechanisms for the healthcare big data, and digital solutions for addressing sudden challenges such as global pandemics [6,7]. Thus, the design of future digital healthcare systems must fulfill the resulting high requirements, for instance in terms of providing secure and trusted mechanisms for healthcare data sharing, data management of the massive data, and ensuring ubiquitous availability of the needed services in the desired amount of time. To fulfill these requirements, recent network and communications-related enabling technologies are going to play a significant role [8]. With the change in the demographics of the world population, the aging of people is becoming a key challenge for future healthcare services [9]. The expenses of providing the needed digital healthcare infrastructure for the elderly population are expected to rise in the future. Most elderly people want to live independently as long as possible. Since many of them live far from hospitals, one of the crucial requirements is to provide safe, secure, and timely remote healthcare monitoring services [10,11]. 
In addition, future hospitals are likely to have the additional burden of patients with chronic diseases who require continuous care for longer periods. Furthermore, in the case of highly contagious diseases (such as the recent COVID-19 pandemic) [12], it is desirable to handle patients with mild symptoms remotely at their home. Edge computing is considered a vital technology enabler in providing time-dependent or delay-critical healthcare services, specifically in the case of emergencies, real-time patient monitoring (intensive care unit (ICU) patients), and for faster data analysis of patients with conditions requiring fast medical response or contagious diseases [13,14]. This brings computational and processing resources near to the end-users and devices to perform necessary real-time data analysis and decision-making functions and to improve resource-efficiency by reducing the amount of data transferred between the end systems and centralized cloud servers [15,16]. The blockchain is yet another promising technology in the context of the future healthcare domain and can provide a number of key characteristics in terms of decentralization, traceability, transparency, and immutability [17]. In addition, it enables a trusted computing environment required for the involved network entities/healthcare stakeholders to securely share information and resources among them [18]. For example, EMR/EHR records can be securely managed and monitored by the blockchain, and only authorized stakeholders are given access to append or retrieve the data. Some of the potential blockchain applications in the healthcare sector include: clinical data sharing, maintaining medical history, drug supply chain management, and billing/insurance claims, among others [19,20]. Therefore, in this paper, we integrated blockchain with a three-tiered IoT edge architecture for an elderly remote monitoring use case to ensure trusted data sharing among different healthcare stakeholders, tracking or monitoring of various processes and their phases, and maintaining the medical history of senior citizens, among others. In this direction, this paper focused on the integration of the blockchain and edge computing and their combined impact on the efficacy and efficiency of remote health monitoring systems, which furthermore contributes to the overall efficacy and efficiency of the digital healthcare system. The core aim behind combining these two enabling technologies is to fulfil the crucial requirements of remote monitoring use cases, including, e.g., delay-critical monitoring of patients' vital signs, requiring low-latency, trusted, and privacy-preserving automated data management and decision-making. The main contributions of this paper include: • A blockchain-edge-based conceptual network framework for remote healthcare monitoring applications; • The performance and efficiency evaluation of the proposed framework and comparison to a baseline architecture without the blockchain; and • The analysis of the achieved results and their real-world impact. The rest of the paper is organized as follows: Section 2 presents the existing work related to the blockchain and edge computing for healthcare applications. Section 3 introduces the proposed Health-BlockEdge framework and the central performance and efficiency parameters in the context of a remote patient monitoring use case. In Section 4, the proposed framework is evaluated against a baseline setup without blockchain.
Section 5 provides a discussion and future research directions, and finally, Section 6 concludes the paper. Edge and Fog Computing Edge computing brings computational and processing resources near to the end-users and devices to perform necessary real-time data analysis and decision-making functions and to improve resource efficiency by reducing the amount of data transferred between the end systems and centralized cloud servers [15,21]. In addition, the inclusion of the edge network improves the data privacy, security, and reliability of the systems running on the edge architecture [22]. Multi-access Edge Computing (MEC) is the European Telecommunications Standards Institute's (ETSI) standard for unleashing the full potential of low-latency 5G radio communications in the mobile access network [23]. MEC brings cloud computing to within 1-2 hops of the IoT devices, e.g., to the cellular base station or access point [21]. Fog computing [24] is a term closely related to edge computing, the distinction between these two terms being vague due to various overlapping definitions in the literature. We define the difference as follows: edge computing (EC) refers to the computational edge infrastructure, and fog computing (FC) to the distributed service architecture above the edge computing infrastructure and local nodes. FC typically covers functions such as caching, data processing, and analytics occurring near the source of the data, which improve the performance at the edges of the network, reduce the burden on data centers and core networks, and improve the resilience against networking problems [24][25][26]. Local Edge and Mist Computing In mission-critical applications, the role of the local network in delivering the most critical services is emphasized in cases where the connection to the global network is not available or not stable [25]. Mist computing refers to the distributed service architecture above the local edge computing infrastructure and local nodes [27]. Complex sensors like surveillance cameras and healthcare monitoring devices are composed of a micro-controller or microcomputer that can perform certain tasks locally [28]. Although these devices have limited processing power, this network layer has not yet been explored by scientists and researchers. With more advanced health monitoring and imaging solutions, such as MRI scanners, the available computational capacity is much higher compared to traditional IoT devices, also allowing more demanding edge computing tasks to be deployed at the local level. The only disadvantage involved in mist computing lies in its complexity. The devices used for mist computing are usually application-specific, and the sensors are often heterogeneous, making the implementation of a solution more complicated. The authors in [29] proposed an efficient and secure mist computing framework that meets the requirements of a recently released public geospatial health data set. The aim of the model is to enhance the security features with the help of mist nodes for effective management of geospatial health data. The proposed prototype was evaluated and the results compared with the traditional cloud framework. Furthermore, in [30], a mist-fog computing framework for the Internet of Healthcare Things (IoHT) was proposed. The model achieved ultra-low latency with the assistance of mist-fog computing in the healthcare monitoring system.
Blockchain In the early days of the blockchain, it was originally used mainly as a technology for banking and finance applications, e.g., cryptocurrency [31]. Bitcoin was the first cryptocurrency, implemented by Satoshi Nakamoto in 2009 [32]. Since then, the blockchain has been a significant technology enabler for various IoT applications due to the number of key characteristics it provides, e.g., decentralization, immutability, transparency, and trusted computing environments [33]. Some of the well-known blockchain-based IoT applications include: smart healthcare, transportation, supply chain management, and the Industrial IoT (IIoT) [34]. The blockchain belongs to distributed ledger technologies (DLT), where all transactions are replicated and recorded among all involved parties in a peer-to-peer network architecture and are secured using a strong cryptographic mechanism [35]. The blockchain has significant utility in various areas of the healthcare domain, e.g., secure data sharing among various healthcare entities, maintenance of healthcare records, remote monitoring of patients, pharmaceutical supply chain management, and health insurance claims, among many other areas [19,36,37]. The blockchain enables various key features in healthcare, such as a fine-grained access control mechanism for secure access to healthcare records, the needed distributed trust among various healthcare entities (patients, healthcare professionals, hospital administration, and service providers), authentication, traceability, process tracking and monitoring, and data privacy protection [38][39][40]. In addition, the blockchain offers valuable opportunities in terms of trusted data management mechanisms for digital healthcare systems [41][42][43][44]. However, despite its many desirable features from the viewpoint of healthcare applications, there are still concerns related to, e.g., the achievable performance and efficiency such as ensuring low latency, enhanced scalability, and the increased storage capabilities (ledger size) [45,46]. In this paper, we explored addressing these challenges by integrating it with edge computing to achieve higher performance and efficiency. The Evolution of Cloud IoT Models Cloud computing is evolving from a fully centralized computational model towards more decentralized edge-cloud models [47]. In the following subsections, we briefly introduce the three fundamental architectural cloud IoT models. Figure 1a highlights the traditional cloud-based IoT architecture, which consists of three main tiers: core, access, and local tier [47]. The lowest tier of the cloud IoT model is the local tier, which involves low-power end-devices such as sensors and actuators (e.g., instrumentation and monitoring devices) that sense the data and interact with the surroundings. The middle tier is the access tier, which is comprised of different gateway devices (access points, switches, routers, etc.) connecting local devices to core networks. The uppermost tier is the core tier, which includes the servers at the data centers, as well as high-performance switches and routers to deliver the data to/from lower tiers. Cloud servers handle all application logic, decision-making, data-management, and storage functions in this model. Figure 1b illustrates the edge-cloud IoT model, where the access tier is-in addition to its role connecting the local tier to the core tier-considered as a middle tier in the cloud IoT system, handling a part of the cloud services closer to the end-nodes (i.e., IoT devices). 
It reduces the physical distance between IoT devices and the computation processing server and, therefore, also provides lower latency between the end-device and the processing unit, compared to centralized cloud-based computation. It also reduces the processing burden on centralized cloud servers by performing some of the data processing at the access layer and therefore closer to the end-users. This model is also more resilient to core network problems and therefore more reliable to fulfill the requirements of missioncritical applications. Furthermore, security is improved by limiting the data propagation within the access network when needed. Together, these new features enable real-time and mission-critical cloud applications and services. Local Edge-Cloud IoT Model In our previous work, we proposed a local edge-cloud IoT model [25,47,48], which utilized the capacity of local nodes at the local tier to manage some parts of the processing and data analytics as presented in Figure 1c. This model allows a part of the computation and decision-making to take place in local nodes with sufficient computational capacity and stability. Therefore, in this model, the cloud applications and services and their parts can be deployed to the most suitable of three network tiers: local, access, and cloud. To enable this scenario, we introduced the concept of nanoservices [25,49]. A nanoservice is a lightweight microservice, which, with resource-aware orchestration, can be deployed on resource-constrained IoT nodes. We aimed at developing the nanoservice concept towards full compatibility with current cloud microservice systems. The local edge-cloud IoT model improves security and privacy by allowing the analysis of sensitive data to be managed locally instead of sending them to public servers for analysis. Furthermore, the model enables deploying most critical functions locally, therefore improving their resilience to access network problems and context awareness through available local sensor data. The authors in [50][51][52] introduced a two-tier IoT model using the iFogSim simulator and analyzed the network parameters (including latency, power consumption, network usage, cost). Furthermore, in one of our previous works [47], we exploited the benefit of local tier processing, proposed a three-tier IoT edge model, and analyzed the performance of the network. In [53], we extended the work performed in [47] by integrating the blockchain technology in the three-tier IoT edge models and evaluated the performance with and without the addition of the blockchain. However, in this paper, we extended our previous research [53] for the delay-critical healthcare use case and measured a number of key parameters such as network latency, power consumption, network usage, total cost, and number of operations executed in order to evaluate the overall performance of our proposed framework. Health-BlockEdge Concept This paper extends the work in [53,54] by proposing the conceptual BlockEdge framework for the remote healthcare monitoring use case. This conceptual framework is called Health-BlockEdge. We provide an overview of the concept and analyze its performance in a healthcare-related scenario by comparing two scenarios: one with the blockchain and the other without the blockchain. In the following subsections, we present the use case and the conceptual framework in detail. 
Use Case: Remote Healthcare Monitoring In order to analyze the proposed Health-BlockEdge concept, we used a remote healthcare monitoring use case. In the use case, the activity and various health parameters of an elderly person living at home were monitored remotely. Another remote care example could be the case of a contagious disease, such as COVID-19, where remote home care effectively prevents the disease from spreading through the physical contact of the patient with others. In this case, the patients with mild symptoms can be treated and monitored remotely from home. The monitoring includes the tracking of the patient's health parameters, activity, and behavior through smart sensors and devices. In exceptional situations, such as a detected accident or a health parameter (such as blood pressure, oxygen level, blood sugar, etc.) going outside the normal range, the system can notify a healthcare professional, who can further analyze the health conditions and accordingly provide a recommendation for advanced treatment or hospitalization if necessary. Model Overview The framework consists of three tiers, i.e., local tier, access tier, and core tier. Local tier: The local tier comprises numerous sensors and devices, including on-body/in-body medical sensors/devices that can measure health-related data/parameters (vital signs), perform basic data pre-processing and analysis, and forward this to the assigned high-computational nodes/servers (edge and cloud servers). Figure 2 depicts multiple healthcare services that need to be delivered to remote patients. The nature of a particular healthcare service depends on the need/requirement of the users/patients in elderly care. For example, "Healthcare Service 1" may contain medical sensors that provide services related to measuring the heart rate. Likewise, "Healthcare Service 2" can be related to the video surveillance/monitoring of an elderly person. The resource-constrained medical nodes at the local tier are connected to the higher-capacity computational nodes (local edge nodes) for local data processing, analysis, decision-making, and forwarding the information/data further to the edge or the global networks. The local edge nodes are located in the same location, i.e., at the home of a disabled person or at the elderly care home. In addition, another major task of these high-resource-capable local edge nodes is to provide the needed resources and computational capabilities to run the local blockchain. A consortium/permissioned blockchain runs in the local network (at the local edges) to ensure trusted healthcare data/information sharing among various entities (users, doctors/staff, service providers, emergency units) in the network. The local network can be seen as the connection between the resource-constrained IoT medical sensors and devices and the local edge nodes, which can perform the required sensing, collection, local data pre-processing, and decision-making and send requests for computationally heavy tasks to the higher tiers. Access tier: In comparison with the local tier, the access tier is considered much richer in terms of resources/computational capabilities. The access tier includes edge servers, i.e., MEC servers, that provide computational resources for remote monitoring services, as shown in Figure 2.
This tier enables the crucial and demanding computational features such as AI-based data analytics and decision-making, adaptive/customized security and privacy solutions, dynamic allocation/orchestration of the available resources, etc. At this tier, a public/permissionless blockchain is run to share the necessary patient information and to keep a record of resource sharing among the various edge nodes. The blockchain can also support auction/renting functionality, allowing the various entities in the network to trade resources, where the resources may include computational/processing capabilities, storage capabilities, or the use of hospital resources (e.g., ambulances and workforce). Core tier: The core tier includes the global Internet core architecture and the cloud data centers providing a practically infinite amount of resources and computational capacity for cloud services. The key role of the global tier is to provide the highest-layer service logic to manage and supervise the overall phases/processes of the healthcare system and to provide the needed resources. In the case of the traditional cloud IoT model, all services, data management functions, etc., are managed at the data centers. Performance Metrics In this section, we present the key performance factors used to evaluate the efficiency of the proposed scheme. Latency: We define latency in IoT computational offloading as the time between the moment an observed event occurs and the moment of the system's response to this event. The total latency L is the sum of the communication and computational latencies, so we can write L = L_U + L_C + L_D (1), where L_U is the time to upload the computational task/data to the cloud/fog/local device for processing/storage, L_C is the computational latency of the task execution, and L_D is the communication delay of the control message/result of the computation from the server to the IoT node. Power consumption: This refers to the power consumption of data forwarding, computation, and data storage at each network layer. The power consumption P of the task execution can be expressed as P = P_C + P_E + P_L + P_N (2), where P_C is the power consumption of the computational infrastructure (servers) at the core level, P_E the power consumption at the edge level, P_L the power consumption at the local level, and P_N the power consumption of the communication infrastructure of the network. Network usage: The network usage refers to the utilization of each of the three network layers in the defined healthcare use case. It is measured as the number of MB/s transmitted over the communication networks. The network usage increases with the number of data processing and network devices. Total cost: The total cost of the system is the sum of the communication (network) cost and the computational (server) cost, C_T = C_net + C_comp, where C_T is the total cost, C_net the communication cost, and C_comp the computation cost. Communication cost: This depends on the number of packets relayed through the network and the cost per packet. Here, N is the number of devices, D_n the size of the sensed data (MB) at device n, l the size of a packet (MB/packet) defined by the provider, C_p the cost of forwarding each packet (x/packet), and I the number of nodes on the path between IoT sensor n and the server processing the data from sensor n; x denotes a generic cost unit.
Computational cost: This refers to the cost of the resources (CPU, power, storage, memory) used in each computational node in the network, C_comp = Σ_k (C_M · MU_k + C_S · ST_k + C_P · P_k + C_mips · S_k), summed over the K computational nodes, where K is the number of devices (cloud/edge/local nodes used for computation) in the network, C_M the cost per memory (x/GB), MU_k the memory used (GB), C_S the cost per storage (x/MB), ST_k the storage consumed (MB), C_P the cost per power (x/W), P_k the power consumption (W) of server k, C_mips the cost per MIPS (x/MIPS) allocated, and S_k the total size of the algorithm executed at k in millions of instructions (MI). Total number of operations executed: This is the sum of all operations necessary to execute the sensed-data processing algorithm. In order to execute the algorithm, the system needs to orchestrate resources and to process the task at the server, as well as perform control operations in the communication network for sending the sensed data from the end-node IoT sensor to the server. Here, H_i is the number of control-plane operations for handling the task S_n (receiving, pre-processing, and forwarding operations) at each device on the path between the end-devices (such as gateways, WiFi access points, routers/switches, etc.), and R_n is the number of operations executed to orchestrate the resources of the server for the execution of the algorithm with S_n instructions. Results We evaluated the performance and efficiency of our proposed framework on the three IoT models and compared the results with the case where the blockchain was not in use [25]. We considered the following key performance factors as the evaluation metrics: (1) latency; (2) power consumption; (3) network usage; (4) total cost; and (5) total number of operations executed, required to realize the full potential of the traditional cloud, edge IoT, and fog for real-time analytics. These performance metrics were described in more detail in the previous section. Evaluation Setup There are a number of simulation tools available, but only a few are capable of analyzing the performance of fog and edge computing scenarios. We used the iFogSim simulator for our evaluations. It provides a high-level hierarchy, and the key reason for choosing this tool is that it provides application placement policies at different layers in the network and allows simulating real-time applications. The simulations were carried out using a remote patient health monitoring use case and comparing the results of the three IoT models, with and without the blockchain. Figure 3 illustrates the Health-BlockEdge architecture implemented in the iFogSim simulation. We analyzed three different scenarios: the algorithm that continuously analyzes the sensed patient data can be placed at the local tier, the access tier, or the cloud tier. We implemented these scenarios with the blockchain and compared the results with the system without the blockchain. We modeled the local tier of the system in iFogSim by deploying N = 4 resource-constrained IoT nodes and four local edge nodes, together with the lightweight blockchain. The local/permissioned blockchain allowed the data to be exchanged with other edge nodes in a trusted manner. The sensed data collected at the IoT nodes were sent to the local nodes for processing and decision-making in the local algorithm placement scenario. Two fog/MEC nodes with higher computational capabilities were deployed in iFogSim to represent the access tier. Each fog node in the access tier, together with the fog blockchain, connected to two local edge nodes in the local tier.
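Before turning to the results, the cost bookkeeping described in the performance metrics can be made concrete with a small sketch. It evaluates C_T = C_net + C_comp for the four-sensor local-tier setup, using a per-packet communication cost and per-resource computational cost in the spirit of the definitions above; the exact summation forms and every numeric value (packet size, unit costs, resource usage) are illustrative assumptions, not parameters taken from the paper or from iFogSim.

```python
# Minimal sketch of the cost model C_T = C_net + C_comp described in the text.
# The summation forms and every numeric value below are illustrative assumptions.

# Communication cost: packets relayed from each sensor to its processing server.
def communication_cost(data_mb, packet_mb, cost_per_packet, hops):
    packets = data_mb / packet_mb
    return packets * cost_per_packet * hops

# Computational cost: per-resource prices times the resources used at a node.
def computational_cost(mem_gb, storage_mb, power_w, mips,
                       c_mem=0.02, c_storage=0.001, c_power=0.05, c_mips=1e-6):
    return c_mem * mem_gb + c_storage * storage_mb + c_power * power_w + c_mips * mips

# Four IoT sensors (as in the simulated local tier), each producing 0.5 MB of data,
# 0.001 MB packets, 2 hops to the local edge node (all values hypothetical).
c_net = sum(communication_cost(0.5, 0.001, 2e-5, 2) for _ in range(4))

# One local edge node executing a 150,000 MI analysis algorithm (hypothetical usage).
c_comp = computational_cost(mem_gb=0.5, storage_mb=20.0, power_w=60.0, mips=150_000)

print(f"C_net  = {c_net:.3f} cost units")
print(f"C_comp = {c_comp:.3f} cost units")
print(f"C_T    = {c_net + c_comp:.3f} cost units")
```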
In the simulation scenario, the data processing algorithm was placed in the access tier edge nodes and provided the necessary computational resources for each user's data. The core network contained the cloud server, which was the highest in available resources and responsible for the overall system management. In the scenario where the algorithm was placed at the core layer, the cloud server was responsible for running it. The communication delay between the system components is presented in Figure 3. Table 1 presents an overview of the simulation parameters used during the performance evaluation of the proposed framework. The parameters were defined according to a literature review of typical devices and networks used in similar application scenarios [50,51,55]. In this paper, we used the consortium blockchain for the proposed Health-BlockEdge framework. However, the iFogSim simulator (used in this paper) does not support the feature of modeling and simulating any particular type of blockchain. Therefore, to analyze the overall network performance of the proposed system, we used the parameters describing blockchain preprocessing power allocation and the number of instructions (MI), handled by each blockchain module, from [55]. The simulation parameters are illustrated in Table 1, where blockchain devices with different capacities and capabilities were considered in the simulation. In the following, we present the simulation results for the key performance parameters. Latency Without the blockchain: Figure 4 presents the end-to-end latency in milliseconds (ms) for different complexities of the analysis algorithm (millions of instructions, MI), in scenarios where the algorithm was placed at different tiers of the cloud IoT architecture and when blockchain pre-processing was not used. End-to-end latency is defined by (1). From the results, we can see that for low-complexity tasks (below 185,000 MI), the local tier, i.e., a local computing node, provided the lowest end-to-end latency and therefore the most optimal placement for the analysis algorithm. This is the region on the left side of the junction of yellow and red lines of Figure 4. When the algorithm complexity was moderate, between 185,000 MI and 550,000 MI, the most optimal tier for its placement was the access tier, i.e., the MEC server. In Figure 4, this is the region between the red/yellow line and red/blue line junctions. For high algorithm complexity, above 550,000 MI, the core tier, i.e., data center, provided the lowest end-to-end latency of the task execution. In Figure 4, this is the region on the right side of the red/blue line junction. With Blockchain: Similarly to Figure 4, Figure 5 shows the end-to-end latency (ms) for different complexities of the analysis algorithm (MI), in scenarios where the algorithm was placed at different tiers of the cloud IoT architecture, when blockchain pre-processing was used. In this case, the local tier provided the lowest end-to-end latency, when the analysis algorithm complexity was below 195,500 MI. With the algorithm complexity between 195,500 and 575,000 MI, the MEC server (access tier) had the optimal placement for the algorithm. With the algorithm complexity above 575,000 MI, the data center (core tier) became the most optimal placement for the algorithm. When blockchain pre-processing was used, the local tier remained the most suitable for slightly more complex algorithms (195,500 MI vs. 185,000 MI) compared to the scenario without blockchain-pre-processing. 
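The crossover points reported above can be summarized as a simple placement rule. The sketch below encodes the complexity thresholds read from Figures 4 and 5 (185,000 and 550,000 MI without blockchain pre-processing; 195,500 and 575,000 MI with it); reducing the placement decision to these thresholds alone is a simplification made for illustration.

```python
# Minimal sketch: choose the algorithm placement tier from its complexity (MI),
# using the latency crossover points reported for Figures 4 and 5.
# Treating these thresholds as a placement rule is an illustrative simplification.

THRESHOLDS = {
    # (local -> access crossover, access -> core crossover), in millions of instructions
    "without_blockchain": (185_000, 550_000),
    "with_blockchain": (195_500, 575_000),
}

def best_tier(complexity_mi, blockchain=True):
    low, high = THRESHOLDS["with_blockchain" if blockchain else "without_blockchain"]
    if complexity_mi < low:
        return "local tier (local edge node)"
    if complexity_mi < high:
        return "access tier (MEC server)"
    return "core tier (data center)"

for mi in (100_000, 300_000, 700_000):
    print(f"{mi:>7} MI -> {best_tier(mi, blockchain=True)}")
```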
Overall, the end-to-end latency decreased slightly as the blockchain was introduced for pre-processing at each tier of the network design. In Figure 4, the crossover points correspond to latencies of about 122 ms at 185,000 MI and 284 ms at 550,000 MI. Power Consumption Without the blockchain: The power consumption (2) of the three different IoT algorithm placement scenarios, where blockchain pre-processing was not used, is presented in Figure 6. In the figure, we present the power consumption of each of the three network tiers for each deployment scenario. When the analysis algorithm was placed at the data center, the core tier power consumption was 548.47 W, the access tier consumption 232.87 W, and the local tier consumption 28.93 W. In this scenario, the power consumption consisted of the algorithm processing and communication costs at the core tier and only the communication cost at the lower tiers. The calculations excluded the power consumption of the sensing and actuation functionalities of the IoT nodes at the local tier, since these remained the same regardless of the location of the algorithm. The total power consumption of this deployment scenario was 810.27 W. When the algorithm was run on an MEC server at the access tier, the power consumption was distributed as follows. The core tier power consumption was 173.29 W, from keeping the reserved data center resources idle; this tier took care of the service management and therefore needed to be active even when the analysis algorithm was deployed on the lower tiers. In this scenario, the access tier resources (MEC server) ran the algorithm, and therefore the computational load was concentrated on this tier. Together with the communication cost, the power consumption on the access tier was 411.91 W. At the local tier, the power consumption was 28.73 W.
At this tier, only the communication costs were present in this scenario (in addition to the consumption from the IoT sensing and actuation functions, which were excluded from the calculation). The total power consumption of this deployment scenario was 613.93 W. When the algorithm was run in a local node at the local tier, the power consumption was distributed as follows: 172.65 W at the core tier, 131.25 W at the access tier, and 63.75 W at the local tier. In this scenario, the local tier included the computational cost at the local node running the algorithm and the communication costs between it and the IoT devices with sensing and actuation functionalities. The access and core tier computational and networking elements were idle. The total power consumption of this deployment scenario was 367.65 W. When comparing the three deployment scenarios, it can be clearly seen that the total power consumption increased as the distance between the sensing and actuation nodes and the node running the analysis algorithm increased. Since our use case was data-intensive, containing, e.g., a high-definition video feed from the sensing device to the analysis algorithm, the distance between the source of the data and the processing node significantly affected the total power consumption. With the blockchain: Figure 7 depicts the power consumption of the three IoT models when blockchain pre-processing was used. When the analysis algorithm was placed at the core tier (data center), the total power consumption was 895.87 W. The core tier power consumption was 612.00 W, the access tier consumption 251.94 W, and the local tier consumption 31.93 W. When the analysis algorithm was placed at the access tier (MEC server), the total power consumption was 695.28 W, consisting of a core tier power consumption of 197.24 W, an access tier consumption of 465.91 W, and a local tier consumption of 32.13 W. When the analysis algorithm was placed at the local tier (local node), the total power consumption was 442.60 W, consisting of a core tier power consumption of 196.48 W, an access tier consumption of 159.36 W, and a local tier consumption of 86.76 W. Similar to the scenario without blockchain pre-processing, the total power consumption was significantly affected by the distance between the sensing and actuation nodes and the node running the analysis algorithm. The blockchain pre-processing increased the power consumption in all deployment scenarios: by 10.6% when the algorithm was deployed at the core tier, by 13.3% when the algorithm was deployed at the access tier, and by 20.4% when the algorithm was deployed at the local tier.
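The blockchain power overhead quoted above can be reproduced directly from the per-tier figures; the snippet below simply re-adds the tier contributions for each placement scenario and computes the relative increase (the wattages are the ones reported in the text).

```python
# Per-tier power consumption (W) as (core, access, local) for each algorithm
# placement scenario, taken from the figures quoted above.
without_bc = {
    "core placement":   (548.47, 232.87, 28.93),
    "access placement": (173.29, 411.91, 28.73),
    "local placement":  (172.65, 131.25, 63.75),
}
with_bc = {
    "core placement":   (612.00, 251.94, 31.93),
    "access placement": (197.24, 465.91, 32.13),
    "local placement":  (196.48, 159.36, 86.76),
}

for scenario in without_bc:
    base, total = sum(without_bc[scenario]), sum(with_bc[scenario])
    overhead = 100.0 * (total / base - 1.0)
    print(f"{scenario}: {base:.2f} W -> {total:.2f} W (+{overhead:.1f}%)")
# -> roughly +10.6%, +13.3% and +20.4%, matching the percentages given above
```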
Network Usage Without the blockchain: Figure 8 illustrates the network usage of the three IoT models in MB/s when blockchain pre-processing was not in use. In our use case, the raw data from various sensors, including the full-HD video monitoring feed (constant bit-rate 1080p video, H.264 compression, 40 FPS), as well as the necessary control data, were exchanged between the sensor and actuator nodes and the computational node hosting the algorithm analyzing the sensor data. In addition, the main service at the data center communicated with the computational node hosting the algorithm analyzing the sensor data; these data included the exchange of the analyzed and processed data and control data. When the data were analyzed at the core tier (data center), the network utilization in each of the three tiers was 6.4 MB/s (all data, including control and raw data, were delivered through the whole data path between the IoT nodes and the data center). When the data were analyzed at the access tier (raw data delivered from the IoT nodes to the MEC server), the network utilization at the local and access tiers was 6.4 MB/s, and the network utilization between the access and core tiers was 1.12 MB/s. When the data were analyzed at the local tier (raw data not moved outside the local network), the network utilization was 6.4 MB/s at the local tier, 1.12 MB/s at the access tier, and 1.13 MB/s at the core tier. With the blockchain: Figure 9 depicts the network usage of the three IoT models in MB/s when blockchain pre-processing was in use. As in the previous case, the raw data from various sensors, including the full-HD video monitoring feed, as well as the necessary control data, were exchanged between the sensor and actuator nodes and the computational node hosting the algorithm analyzing the sensor data. In addition, the main service at the data center communicated with the computational node hosting the algorithm analyzing the sensor data, including the exchange of the analyzed and processed data and control data. Furthermore, the parts of the distributed blockchain architecture, operating at all tiers of the architecture, exchanged data with each other and with the operational entities. When the data were analyzed at the core tier (data center), the network utilization in each of the three tiers was 7.34 MB/s. In this scenario, all data, including control and raw data, were delivered through the whole data path between the IoT nodes and the data center, and the extra network utilization from the blockchain operation was included. When the data were analyzed at the access tier, the network utilization at the local and access tiers was 7.34 MB/s, and the network utilization between the access and core tiers was 1.47 MB/s. When the data were analyzed at the local tier, the network utilization was 7.34 MB/s at the local tier, 1.48 MB/s at the access tier, and 1.47 MB/s at the core tier. Total Cost Without the blockchain: The total operational cost, as defined in (3)-(5), of the different deployment scenarios without blockchain pre-processing is presented in Figure 10. The total cost was highest when the analysis algorithm was placed at the core tier, where it amounted to 538 (in relative cost units), compared to 355 when the algorithm was deployed at the access tier. The least costly solution was placing the algorithm at the local tier, in which case the total cost was 204. In each scenario, the largest portion of the cost came from the tier where the algorithm was placed, as expected. When the application was placed at the access tier, the total cost decreased by 34.0% compared to the cloud placement. When the algorithm was moved further down to the local tier, the total cost decreased by 62.1% compared to the cloud placement. Therefore, placing the analysis algorithm at the local tier significantly reduced the total cost of the system compared to the access (MEC server) and core (data center) tier solutions.
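As a quick check of the cost comparison just reported for Figure 10, the savings follow directly from the quoted totals.

```python
# Total operational cost (relative cost units) without blockchain pre-processing,
# by algorithm placement tier, as reported for Figure 10.
cost_without_bc = {"core": 538, "access": 355, "local": 204}

baseline = cost_without_bc["core"]   # cloud (core tier) placement as reference
for placement in ("access", "local"):
    saving = 100.0 * (1.0 - cost_without_bc[placement] / baseline)
    print(f"{placement} tier placement: cost {cost_without_bc[placement]}, "
          f"{saving:.1f}% lower than the cloud placement")
# -> access: 34.0% lower, local: 62.1% lower, as stated above
```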
With the blockchain: Figure 11 presents the total cost, as defined in (3), for the three algorithm placement scenarios when blockchain pre-processing was used. Similar to the scenario without the blockchain, the total cost was largest when the analysis algorithm was placed at the core tier, at 607, while the access tier placement cost amounted to 410. The least costly solution was again placing the algorithm at the local tier, where the total cost was 249. In each scenario, the largest portion of the cost came from the tier where the algorithm was placed. The reduction in cost when the algorithm was moved from the core tier to the access tier was 32.5%, and when moved to the local tier, 59%. Having the blockchain pre-processing in the system slightly increased the total cost: for the core tier placement the cost increased by 12.8%, for the access tier by 15.5%, and for the local tier by 22%. Number of Operations Executed Without the blockchain: The number of operations executed at each layer of the three scenarios is given in Figure 12 for the case where no blockchain was used. In the cloud placement scenario, the sum of all the operations necessary to perform N tasks was 2882. This included data processing and routing, resource management, control operations in the network, etc. With the access tier placement, the number of operations executed was 1731, while at the local tier it was 1114. Placing the algorithm further from the end-device significantly increased the number of instructions necessary for the execution of the same tasks, due to the communication overhead and the larger number of devices involved in data and control message forwarding. In the access tier placement, the number of control operations executed at the core layer decreased to 276, from 2149 in the cloud tier placement. This was because the core layer only performed communication and forwarding operations between the cloud and the edge server. The number of operations executed at the access layer included algorithm execution, data forwarding, and network communication operations. When the algorithm was placed at the local tier, the number of operations executed at the local layer was 614. This included data processing, resource management, and control operations. On the other hand, the number of operations at the core and access tiers was much smaller and accounted for system control and orchestration. With the blockchain: Figure 13 illustrates the number of operations executed in the three placement scenarios when the blockchain was used. If the algorithm was placed at the core tier, the total number of operations executed within the local network was 3175. Compared to the scenario without the blockchain, this was a 10.1% increase in the number of instructions. With the access tier placement, the total number of instructions was 1914, a 10.5% increase compared to the scenario without the blockchain. Finally, the total number of instructions executed when the algorithm was run at the local tier was 1286, a 15.4% increase compared to the implementation without the blockchain. Summary of the Results In this section, we summarize the impact of using the blockchain in a remote healthcare monitoring use case on the performance, efficiency, and resource utilization of the three IoT edge cloud deployment scenarios. The main observations were as follows: • From the latency evaluation (Figures 4 and 5), we can see that the use of blockchain pre-processing did not have a significant effect on the latencies; the optimal regions for each deployment scenario shifted by 5-6% towards favoring more local setups.
• From the power consumption evaluation (Figures 6 and 7), we can see that blockchain pre-processing increased the power consumption by 10.6% when the algorithm was deployed at the core tier, 13.3% when deployed at the access tier, and 20.4% when deployed at the local tier. • When comparing the network usage (Figures 8 and 9), we can see that the overall network usage increased by roughly 15-31% when blockchain pre-processing was used. • With respect to the total operational cost (the sum of the communication and computation costs; Figures 10 and 11), the blockchain pre-processing increased the total cost by 12.8% when the processing was done at the core tier, by 15.5% when the processing was done at the access tier, and by 22% when the processing was done locally. • When comparing the number of computational operations executed at different levels of the architecture (Figures 12 and 13), blockchain pre-processing increased the total number of instructions by 10.1% when the processing was done at the core tier, by 10.5% when the processing was done at the access tier, and by 15.4% when the processing was done locally. Discussion and Future Directions In this paper, we dealt with two highly important enabling technologies for digital healthcare: edge computing and the blockchain. The integration of the blockchain with edge computing is considered vital to providing secure, trusted, and delay-critical healthcare services for remote monitoring scenarios, e.g., elderly home care. The proposed conceptual Health-BlockEdge framework brings computational and processing resources near to the end-users and devices to perform the necessary real-time data analysis and decision-making functions and to improve system-level resource efficiency. The consortium blockchain technology used in this paper improves the overall security of remote health monitoring systems by providing key features such as confidentiality, transparency, immutability, traceability, data privacy, and availability [41,56]. The inclusion of the blockchain also enables various highly important characteristics for the selected healthcare use case, e.g., trusted data sharing, secure monitoring or tracking of different processes and their phases, and keeping the electronic medical records of users. The proposed blockchain-edge approach (the Health-BlockEdge concept) improves data privacy protection by limiting the propagation of sensitive data to the local and edge networks instead of sending all data to the cloud. Our proposed framework does this in two ways. First, since we can provide analysis capability at the edge, not all private data need to pass through the public (cloud) servers. Second, data that need to be processed on public servers can be anonymized at the edge. In addition, the blockchain can also fulfill the anonymity requirements of end-users by preventing the leakage of their real identities to the access/edge and core tiers. For example, settings similar to those developed in [57] can be adopted, where the authors proposed blockchain-based anonymous authentication mechanisms for edge computing-based smart grid systems. We evaluated the performance and efficiency of our proposed framework in the three IoT edge cloud deployment scenarios and compared the results with the case where the blockchain was not in use. According to our results, the end-to-end latency decreased slightly as the blockchain was introduced for pre-processing at each tier of the network design.
With respect to power consumption, the blockchain pre-processing increased the power consumption by roughly 10-20%, depending on the tier at which the data analysis functions were deployed. The blockchain pre-processing increased the system overhead with respect to the combined computational and communication cost by roughly 13-22%, and the number of computational instructions at the system level by roughly 10-15%. In conclusion, when considering the potential of blockchain usage in improving the security, privacy, and trust in healthcare monitoring scenarios, the increase in the system cost can be considered tolerable. While evaluating the performance of the proposed system, we also observed some potential limitations of this work. For example, in our simulation setup we were not able to model the actual consortium blockchain blocks, because the iFogSim simulator does not provide any blockchain features. Therefore, we only considered the blockchain processing power and the number of blockchain instructions (MI) handled by each blockchain module for the performance evaluation of our proposed framework. By simulating or implementing the actual blockchain model in this three-tiered architecture, the analysis of the overall network performance could be improved. The proposed framework also represents the infrastructural basis for enabling future healthcare services such as remote health monitoring and contactless patient treatment. These innovations aim at reducing the cost of healthcare services and improving their availability. The framework proposed in this paper can be extended in interesting directions in future work, e.g., artificial intelligence (AI)-based optimization of the combined use of the blockchain and edge computing in the healthcare domain for improved performance and efficiency. Another direction for future work is to develop solutions that fully utilize the features of the blockchain in bringing trust between the different stakeholders of complex distributed healthcare communication and data management systems. Conclusions The rapid evolution of technology allows the healthcare sector to adopt intelligent, context-aware, secure, and ubiquitous healthcare services. It has become highly important to propose value-creating, yet cost-efficient, digital solutions for healthcare systems. These solutions should provide effective means of delivering healthcare services in both hospital and home care scenarios. The blockchain is a promising technology for enabling a trustful distributed computing environment in the context of future healthcare. However, despite its many desirable features, there are still concerns related to, e.g., the performance and efficiency of blockchain technologies. In this paper, we addressed these challenges by integrating the blockchain with edge computing to cope with some of the key requirements of smart remote healthcare systems, such as long operating times, low cost, resilience to network problems, security, and trust in highly dynamic network conditions. Through simulations of our proposed Health-BlockEdge concept, we evaluated the performance of our approach in terms of latency, power consumption, network utilization, and computational load, compared to a scenario where no blockchain was used.
According to the results, our concept demonstrated the feasibility of the combined use of blockchain and edge computing to provide decentralized trust, reliable real-time access, and control of the network and computational capacity in the digital healthcare environment, without compromising the system performance and resource efficiency.
11,043.2
2021-04-01T00:00:00.000
[ "Medicine", "Computer Science", "Engineering" ]
Synchrotron 3D characterization of arrested fatigue cracks initiated from small tilted notches in steel High resolution synchrotron X-ray tomography has been used to obtain 3D images of arrested cracks initiated at small artificial defects located on the surface of cylindrical steel specimens subjected to mode I fatigue loading. These defects consist of small semi-circular slits tilted at 0°, 30°, or 60° with respect to the plane normal to the loading axis; all of them had the same defect size, √area = 188 μm, where √area denotes the square root of the area of the domain defined by projecting the defect onto a plane normal to the loading axis. Arrested cracks initiated from the notch were observed for all tilt angles at the surface of samples cycled at the fatigue limit (the stress amplitude at which the specimen does not fail after 1×10^7 cycles). High resolution synchrotron X-ray tomography has been used to obtain 3D images of these small defects and non-propagating cracks (NPC). Despite the fact that steel is a highly attenuating material for X-rays, high resolution 3D images of the cracks and of the initiating defects were obtained (0.65 μm voxel size). The values of surface crack length measured by tomography are the same as those obtained by optical microscope measurements. The √area values present the same tendency as the surface length of the NPCs, i.e., larger non-propagating crack areas were observed in the softer steel. In the extreme case of the 60° tilted defect, the crack fronts appear much more discontinuous, with several cracks propagating in mode I until arrest. INTRODUCTION In a previous work we reported an experimental test campaign that was carried out in order to clarify the effect of defect orientation on the fatigue limit of three types of steels: JIS-S15C, JIS-S45C and JIS-SNCM439 [1]. The average Vickers hardness HV of each material, measured with a load of 9.8 N, was 117, 186 and 293, respectively. A semi-circular slit or hole was introduced into a cylindrical specimen with a diameter of 7 mm by electro-discharge machining or drilling. The slits were tilted at 0°, 30° or 60° with respect to the plane normal to the loading axis, but all of them had the same defect size, √area = 188 µm, where √area denotes the square root of the area of the projection of the defect on a plane normal to the loading axis. Fully reversed tension-compression fatigue tests were carried out at a test frequency of 150 Hz. The fatigue limit (FL) was determined as the stress amplitude at which the specimen does not fail after 1×10^7 cycles. For every material and notch tilt angle a non-propagating crack was found at the fatigue limit. After performing the fatigue tests and carrying out fatigue crack propagation studies, the following conclusions were obtained: - In JIS-S15C and JIS-S45C, the fatigue limits were nearly independent of the tilting angle, and were found to be in good agreement with the values predicted by the √area parameter model [1,2]. On the other hand, in JIS-SNCM439, the fatigue limit was also in agreement with the prediction at the tilting angle of 0°, but it increased with an increase in the tilting angle. - As shown in Fig. 1, a typical growth behavior of small cracks was observed irrespective of the tilt angle; when the stress amplitude was slightly above the fatigue limit, the growth first decelerated to a certain crack length, and then started accelerating with fatigue cycles until final failure.
Figure 1: Propagating and non-propagating cracks found above and below the fatigue limit for JIS-S45C steel and a 30° notch: (a) crack length vs. number of cycles; (b) crack growth rate vs. crack length. - The length of the non-propagating cracks varied greatly according to the steel type, i.e., longer non-propagating cracks were observed in the softer steel. The crack length measured at the surface was also found to be independent of the tilting angle in the case of JIS-S15C and JIS-S45C, whereas in the case of JIS-SNCM439 the length increased with an increase in the tilting angle. - The observed crack paths were in good agreement with the direction normal to the maximum principal stress near the defect (2D analysis). To complement the information reported above and understand the phenomenon in more detail, it was noted that further investigation was needed to clarify the three-dimensional shapes of the non-propagating cracks. As a step forward in this direction, this paper shows a 3D experimental characterization of the non-propagating fatigue cracks using high resolution X-ray tomography. EXPERIMENTAL Specimens In order to perform the X-ray analysis of non-propagating fatigue cracks, a smaller specimen with a square cross-section ranging from 0.8 to 1 mm² was cut from the larger sample, such that it contained the notch and the non-propagating fatigue crack. As the cross-sectional area of the small samples cannot be larger than 1 mm² due to experimental restrictions, and some of the samples contained cracks measuring up to 700 μm in surface length, extreme care was taken when cutting the sample. The procedure consisted of extracting a larger sample containing the notch plus crack and then applying successive steps of polishing and microscope observation until the final geometry was reached. Synchrotron X-ray microtomography The tomography experiments were performed on the ID19 beamline at the European Synchrotron Radiation Facility (ESRF) located in Grenoble, France. As shown in Fig. 2, a monochromatic X-ray "pink beam" with a photon energy of 60 keV was used. A PCO Edge CCD camera with a 2160 × 2560 pixel chip was used. In order to load the samples to produce crack opening, a dedicated in situ tensile rig was mounted onto the rotation stage of the beamline. 2000 radiographs were taken while the sample rotated over 180° about its vertical axis, with an exposure time of 0.07 s (scan acquisition time of 3.68 min). Reconstruction of the tomographic data was performed with a standard filtered back-projection algorithm. A 0.65 μm voxel size was obtained, allowing good detection of the notch and cracked areas. The Fiji and ParaView open-source software packages were used for post-processing the images, extracting separately the notch and crack geometries for every steel and defect tilting angle. 3D crack analysis A 3D image of the small initiating defect and of the non-propagating fatigue crack was obtained for every defect geometry and steel type. Fig. 3 shows the results for the annealed JIS-S45C steel. The figures on the left show 3D images of non-propagating cracks found at the fatigue limit, and the figures on the right correspond to the crack projections (minimum intensity values) onto the Z, Y and X planes, Z being the direction parallel to the loading axis.
Tab. 1 shows a comparison between the length of the non-propagating crack measured at the specimen surface by optical microscope and the surface length obtained by analyzing the 3D images. These measurements are very similar, with a maximum difference of 4% for the case of the S45C steel with a 30° notch. Regarding the different materials, the NPC size varied greatly according to the steel type, i.e., larger non-propagating cracks were observed in the softer steel. This observation is consistent with the 2D crack analysis performed at the surface in the previous work. The √area values, corresponding to the square root of the total defect area (notch plus crack) projected onto a plane normal to the loading axis, are also shown in Tab. 1. These values present the same tendency as the surface length of the NPCs. These results give confidence in the ability of tomography to reliably measure the length of these small cracks. From these images it can clearly be seen that, for every notch tilt angle, longer cracks are always observed at the surface, where larger SIF values are found. In the case of the 60° semi-elliptical slit, the crack observed at the notch root seems to be smaller, but the projected area of the defect plus crack is similar to those obtained for 0° and 30°. Furthermore, in the case of 60° tilted defects, irrespective of the steel type, the cracks do not merge into a single crack front propagating in mode I, leading to a much more discontinuous front. On the Z projection the NPC presents discontinuities, showing deviations from the typical semi-elliptical single-front shape. FEM mesh reconstruction As mentioned before, a calculation of the SIF values for the 3D cracks will be helpful for a better understanding of the experimental results. In previous work, a first 2D approach was carried out in which the SIFs of two-dimensional cracks were calculated using finite element analyses and the stress extrapolation method [3]. K_I values were obtained for the different tilt angles, and the results showed that the K_I values of cracks grown from tilted defects do not significantly differ from those of a non-tilted crack with an equivalent projected crack length. The maximum difference was about 6% in the crack length range where non-propagating cracks were observed. Of course, this is a basic analysis in comparison with the actual 3D notch geometry. In order to extend these results to the 3D field, a FEM analysis including the 3D defect geometry is necessary. As a first step, a FEM mesh of the actual defect was obtained by processing the 3D X-ray images of the defects with the AVISO software, as shown in Fig. 4. This mesh will be a starting point for computing K values near the crack tips for every tilt angle; those values will be used to analyze the 3D shapes of the cracks reported here. CONCLUSIONS High resolution (0.65 μm voxel size) synchrotron tomography was used to obtain 3D images of arrested cracks initiated from artificial defects in three different steels. The good contrast between crack and bulk facilitates accurate measurements. The surface crack lengths measured by this technique gave the same results as optical microscope measurements. The √area values present the same tendency as the surface length of the NPCs, i.e., larger non-propagating cracks were observed in the softer steel. In the case of the 60° tilted defect, the crack fronts appear much more discontinuous, with several cracks propagating in mode I until arrest.
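For rough orientation, the √area parameter model referred to in the introduction is commonly written for surface defects as σ_w = 1.43·(HV + 120)/(√area)^(1/6), with σ_w in MPa, HV in kgf/mm² and √area in μm, and the maximum mode I stress intensity factor of a small surface crack is often estimated as K_Imax ≈ 0.65·σ_0·√(π·√area). Both expressions are the standard Murakami relations and are assumed here rather than quoted from this paper; applied to the three steels and the 188 μm defects, a short calculation gives the following order-of-magnitude values.

```python
# Rough estimates from the standard Murakami (sqrt-area) relations for surface
# defects; the formulas are assumed (not taken from this paper), while the HV
# values and sqrt_area come from the text.
import math

sqrt_area_um = 188.0                                             # defect size parameter (um)
steels = {"JIS-S15C": 117, "JIS-S45C": 186, "JIS-SNCM439": 293}  # Vickers hardness

for name, hv in steels.items():
    sigma_w = 1.43 * (hv + 120.0) / sqrt_area_um ** (1.0 / 6.0)  # fatigue limit estimate, MPa
    # Maximum mode I SIF of a small surface crack loaded at sigma_w, in MPa*sqrt(m):
    k_max = 0.65 * sigma_w * math.sqrt(math.pi * sqrt_area_um * 1e-6)
    print(f"{name}: sigma_w ~ {sigma_w:.0f} MPa, K_Imax ~ {k_max:.1f} MPa*sqrt(m)")
```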
Figure 1: (a) Fatigue specimen; (b) tomography specimen extracted from the fatigue specimen; (c) notch and non-propagating cracks located in the small specimen; (d) types of defects and tilting angles. Figure 2: Schematic view of the tomographic acquisition process, volume reconstruction and 3D crack analysis. Figure 3: Left: 3D images of non-propagating cracks found at the fatigue limit. Right: crack projections onto the Z, Y and X axes (minimum intensity). Material: annealed JIS-S45C steel. Figure 4: FEM mesh of the actual notch geometry. Table 1: Non-propagating crack length measured by optical microscope (O.M.) and by the analysis of 3D images (3D), together with the corresponding √area values.
2,510.2
2015-06-19T00:00:00.000
[ "Physics", "Materials Science" ]
SMRT sequencing revealed the diversity and characteristics of defective interfering RNAs in influenza A (H7N9) virus infection ABSTRACT Influenza defective interfering (DI) particles are replication-incompetent viruses carrying large internal deletions in the genome. The loss of essential genetic information causes abortive viral replication, which can be rescued by co-infection with a helper virus that possesses an intact genome. Despite reports of DI particles present in seasonal influenza A H1N1 infections, their existence in human infections by avian influenza A viruses, such as H7N9, has not been studied. Here we report the ubiquitous presence of DI-RNAs in nasopharyngeal aspirates of H7N9-infected patients. Single Molecule Real Time (SMRT) sequencing was applied for the first time, and long-read sequencing analysis showed that a variety of H7N9 DI-RNA species were present in the patient samples and in human bronchial epithelial cells. In several abundantly expressed DI-RNA species, long overlapping sequences were identified around the breakpoint region and on the other side of the deleted region. Influenza DI-RNA is known as a defective viral RNA with a single large internal deletion. Owing to the long-read property of SMRT sequencing, double and triple internal deletions were identified in half of the DI-RNA species. In addition, we examined the expression of DI-RNAs in mice infected with a sublethal dose of H7N9 virus at different time points. Interestingly, DI-RNAs were abundantly expressed as early as day 2 post-infection. Taken together, we reveal the diversity and characteristics of DI-RNAs found in H7N9-infected patients, cells and animals. Further investigation of this overwhelming generation of DI-RNA may provide important insights into the understanding of H7N9 viral replication and pathogenesis. Introduction Influenza defective interfering (DI) particles are defective virions that harbour large internal deletions in at least one segment of the viral genome [1]. This loss of essential genetic information, mostly found in the three polymerase genes, renders these virions non-replicative, which can be reversed by co-infection with an intact helper virus [1,2]. DI particles are described as "interfering" because of their hindrance to the viral replication and packaging of the co-infecting helper virus [3]. Influenza DI-RNAs have been observed in embryonated chicken eggs and cell cultures after serial passage of the virus at a high multiplicity of infection [4,5]. Generation of influenza DI-RNAs has been described for different strains of influenza, including H1N1 [4,6,7], H3N8 [8], avian H5N2 [9], avian H7N7 [10] and influenza B viruses [11]. Though the recently emerged H7N9 virus has caused severe human infections, little is known about the existence of H7N9 DI-RNAs. High-throughput sequencing revolutionizes the field of DI particle research because of its ability to identify and quantitate different DI-RNA species. Recently, several studies have attempted to use Illumina sequencing as a new approach for identifying different influenza DI species [11,12]. Unfortunately, short-read Illumina sequencing may provide only partial information on individual DI species. Recently, the advancement of a long-read sequencing platform, Single Molecule Real Time (SMRT) sequencing, may overcome this barrier and allow further characterization of these DI species. SMRT sequencing is a third-generation sequencing technology that generates sequencing reads of extended length (>10 kb).
Without the need for RNA fragmentation before sequencing and for subsequent bioinformatic read assembly, SMRT sequencing allows precise discrimination of different alternatively spliced transcripts or isoforms [13]. Since DI-RNAs containing large internal deletions are analogous to alternatively spliced transcripts, SMRT sequencing can be an alternative approach to next generation sequencing (NGS) for determining comprehensive information on DI-RNA species. In this study, we observed the presence of DI-RNAs in high abundance in nasopharyngeal aspirates (NPAs) from patients who were infected by H7N9 and adopted SMRT sequencing to identify the entire sequence of these DI-RNA species. From the identified DI-RNA sequences, the highly diverse DI-RNA species generated in patients with H7N9 infection could be observed. Intra-species overlapping sequences at both ends of the large internal deletion around the putative breakpoint were noted. We further extended our study of DI-RNA species to a cell line and a mouse model infected with the H7N9 AH1 strain using SMRT sequencing. We also compared the DI-RNA species identified from the infected mouse model with both Illumina and SMRT sequencing, and found that both platforms were able to identify the common large internal deletion breakpoints, but SMRT sequencing could also detect DI-RNA species with multiple breakpoints owing to its long sequencing read length. While the exact mechanism of DI-RNA generation by influenza has not been revealed, SMRT sequencing appears to be a promising tool for characterizing DI-RNA entities in future mechanistic studies. Human clinical samples Testing of all patients' specimens has been approved by the Institutional Review Board of the University of Hong Kong. Virus infection NHBE cells were seeded in six-well plates and infected with H1N1 or H7N9 at a multiplicity of infection (MOI) of 0.5 for 1 h at 37°C with 5% CO2. Cells were then washed twice with blank RPMI 1640 and cultured in RPMI 1640 medium supplemented with TPCK-trypsin (Sigma). Total cellular RNA was extracted at 24 h post-infection. Animals and virus inoculation All animal experiments followed the standard operating procedures approved by the University of Hong Kong committee on the use of live animals in teaching and research (CULATR: ). Six- to eight-week-old female BALB/c mice were obtained from the Laboratory Animal Unit of the University of Hong Kong. Virus challenge experiments were performed in biosafety level 2 (WSN virus) and level 3 (H7N9 virus) animal facilities. The mouse 50% lethal dose (LD50) was determined with serial dilutions of H7N9 virus by intranasal inoculation as previously described [17]. The LD50 of the H7N9 virus was determined to be 10^4.8 PFU [18]. Groups of 6-8-week-old female BALB/c mice were intranasally inoculated with 10^3 PFU of H7N9 virus. The mouse lung tissues and homogenates were collected on days 2 and 4 after virus inoculation for the quantitation of DI-RNA. RNA extraction, RT-PCR and Sanger sequencing Viral RNA was extracted from the NPAs of H7N9-infected patients and from H7N9-infected NHBE cells using a QIAamp Viral RNA Mini kit (Qiagen) following the manufacturer's instructions. Total RNA was extracted from virus-infected NHBE cells and mouse lung tissues using Trizol reagent (Invitrogen) following the manufacturer's instructions. Random hexamer primers were used to synthesize cDNA from viral or total RNA using a Transcriptor First Strand cDNA Synthesis kit (Roche Molecular Systems, Inc.).
Primers designed for DNA amplification for PB1 DI-RNA detection in mice and for SMRT sequencing are shown in Supplementary Table S1. The viral genome and total RNA were amplified using PrimeSTAR® GXL DNA Polymerase (TaKaRa). PCR amplification was performed under the following conditions: 95°C for 3 min, followed by 30 cycles of 98°C for 10 s, 60°C for 15 s and 68°C for 3 min. Following electrophoresis in a 1% agarose gel, the relative intensities of DI-RNA and full-length RNA were determined with ImageJ v1.52. All fragments within 100-1000 bp were considered DI-RNAs, and fragments within 1500-2500 bp were classified as full-length RNA. To perform Sanger sequencing on the common DI-RNA isolated from the PB1 segment of an H7N9-infected patient NPA sample, its PCR product was purified using a PureLink™ Quick Gel Extraction Kit (Invitrogen). The gel-purified product was TA-cloned into the pGEM-T vector using the pGEM-T and pGEM-T Easy Vector Systems (Promega), which was then plasmid-extracted with a PureLink™ Quick Plasmid Miniprep Kit (Invitrogen) and subjected to Sanger sequencing. Sequences of eight individual clones were aligned to the full-length PB1 sequence. Illumina sequencing One of the H7N9-infected mice was selected for identifying DI-RNAs using Illumina sequencing. The Illumina sequencing and library construction were performed by Novogene. In brief, the NEBNext® Ultra RNA Library Prep Kit was used for library construction. After adapter ligation, 10 cycles of PCR amplification were performed for sequencing target enrichment. Sequencing was performed with 150 bp paired-end reads on an Illumina NovaSeq 6000. FASTQ sequences of each sample were aligned against the H7N9 (A/Anhui/1/2013) sequence using TopHat2 [19] with allowed intron lengths of 5-100,000 nucleotides. Sequences with internal deletions were extracted and aligned against the A/Anhui/1/2013 strain (EPI439503-5; EPI439508), requiring at least seven nucleotides on one side of the internally deleted sequences. The breakpoint positions of these sequences were determined from the gap opening and closing positions of the alignment using an in-house Python script based on pairwise2 [20]. SMRT sequencing The remaining PCR products (NPAs of patients #3, #5 and #6, and H1N1- and H7N9-infected NHBE or mouse samples) were cleaned up using a GeneJET Gel Extraction and DNA Cleanup Micro Kit (Thermo Fisher). The second round of DNA amplification was performed using a KAPA HiFi HotStart PCR kit (Roche) with Barcoded Universal Primers (Pacific Biosciences). The barcoded PCR products were pooled for SMRTbell library construction using the SMRTbell™ Template Prep Kit 1.9-SPv3. One Sequel SMRT cell was used to sequence the 7 amplicons. The raw reads were demultiplexed and circular consensus sequence (CCS) reads were generated using SMRT Link v5.0.1. The SMRTbell adapter sequences were removed and CCS reads within 150-2400 bp were selected for further analysis. Histograms of the read counts and lengths of the DI-RNA sequences were generated using R [21]. The sequencing data of the H7N9-infected NPAs (#3, #5 and #6) and NHBE cells in Supplementary Table S3 were analysed as follows. Sequences were aligned against the reference genome of the H7N9 A/Anhui/1/2013 strain (EPI439503-5; EPI439508) using the NCBI blastn alignment tools to determine the breakpoint positions. The identity of a single DI-RNA species was established by an identical sequence of 40-100 nucleotides flanking the breakpoint of the DI-RNA.
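The in-house breakpoint-calling script itself is not reproduced here; the following is a minimal sketch of how a single deletion breakpoint can be located from a pairwise2 alignment, in the spirit of the approach described above. The reference and read are short invented sequences, not real H7N9 segments, and the scoring values are illustrative.

```python
# Minimal sketch: locate a DI-RNA deletion breakpoint from a pairwise alignment
# using Biopython's pairwise2. Sequences and scoring values are illustrative.
from Bio import pairwise2

reference = "ATGGCAAGACT" + "GGGTTTCCCAAA" + "TGCATCGATTGA"  # toy "full-length" segment
di_read   = "ATGGCAAGACT" + "TGCATCGATTGA"                   # same read with a 12-nt deletion

# A high gap-opening and low gap-extension penalty favours one long deletion gap.
aln = pairwise2.align.globalms(reference, di_read, 2, -3, -10, -0.1,
                               one_alignment_only=True)[0]
ref_aln, read_aln = aln[0], aln[1]

ref_pos = 0                      # 1-based position in the reference
gap_start = gap_end = None
for ref_char, read_char in zip(ref_aln, read_aln):
    if ref_char != "-":
        ref_pos += 1
    if read_char == "-" and ref_char != "-":
        if gap_start is None:
            gap_start = ref_pos  # first deleted reference position
        gap_end = ref_pos        # last deleted reference position seen so far

print(f"deletion spans reference positions {gap_start}-{gap_end} "
      f"({gap_end - gap_start + 1} nt removed)")   # -> 12-23 (12 nt removed)
```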
The sequencing data of the remaining samples (the H7N9-infected mouse in Supplementary Table S3 and the samples in Supplementary Table S2) were analysed as follows. Sequences were first aligned against the reference genome of the H7N9 A/Anhui/1/2013 strain (EPI439503-5; EPI439508), and the breakpoint positions of the internally deleted sequences were determined using the same in-house Python script used for the Illumina sequencing data. Read counts of identical DI-RNA species were calculated to determine the abundance of each DI-RNA species. Results DI-RNAs are most frequently identified in the polymerase genes PB2, PB1, and PA, but not in the nucleoprotein gene NP [22]. Therefore, we mainly focused on characterizing DI-RNAs of the three polymerase genes, with the inclusion of the NP gene as a negative control. Six NPAs from 10-76-year-old Chinese patients with H7N9 infection were collected during the third and fifth waves of the H7N9 influenza epidemic. Among these six patients, two were scored as critically ill and two died after H7N9 infection. The details of potential poultry exposure and severity of illness among these six Chinese patients are listed in Table 1. In these NPAs, multiple DI-RNA species ranging from 200 to 800 bp could be detected by RT-PCR in the PB2, PB1 and PA segments (Figure 1A). No correlation was observed between the production of DI-RNAs and the severity of illness among these six patients. To demonstrate that the RT-PCR technique itself was unlikely to generate defective genomic cDNAs, NPAs from four patients with H3N2 infection were collected in Hong Kong during 2017 and used as controls. A very strong signal for full-length PB1 and PB2 was observed in these four NPA samples, but very low or no signal for DI-RNAs (Figure 1B). Compared to the NPAs of H3N2-infected patients, the expression of DI-RNAs was relatively higher in the NPAs of H7N9-infected patients. To confirm that these DI-RNAs contained internal deletions, one of the consistently detected PB1 DI-RNAs from the NPA of H7N9-infected patient #6 (indicated with an asterisk in Figure 1A) was purified and Sanger-sequenced. To our surprise, seven DI-RNA species from eight sequenced plasmid clones were detected, exemplified by one DI-RNA containing a single large internal deletion and another containing two consecutive internal deletions (Figure 1C). Because of the abundant DI-RNAs found in the NPAs of H7N9-infected patients, one of the reference To gain a more comprehensive understanding of the variety of DI-RNA species, high-throughput sequencing was applied to detect and quantify the abundance of different DI species. Currently, the sequencing platform commonly used for identifying internal deletions in the influenza genome is Illumina sequencing [11,24]. With the rise of third-generation sequencing, read lengths have improved tremendously, which may increase the amount of information obtained from sequencing. Therefore, we selected one of the H7N9-infected mouse samples for a comparison of the two sequencing platforms, paired-end 150 bp Illumina sequencing and SMRT sequencing, in this study. The raw reads of the SMRT and Illumina sequencing of the H7N9-infected mouse are listed in Supplementary Table S2. Due to the amplification required for SMRT sequencing, a higher depth of coverage was achieved, and more aligned DI-RNA sequences and unique DI-RNA species were obtained compared to Illumina sequencing (Table 2).
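The species-level tally described above essentially reduces each aligned read to its segment and breakpoint coordinates and counts identical calls; a minimal sketch, with invented example records and relative abundance expressed per segment, is shown below.

```python
# Illustrative tally of DI-RNA species abundance from breakpoint calls; the
# example records below are invented, not measured values from this study.
from collections import Counter

# Each aligned read reduced to (segment, deletion start, deletion end).
breakpoint_calls = [
    ("PB1", 246, 2109), ("PB1", 246, 2109), ("PB1", 246, 2109),
    ("PB1", 199, 2089), ("PA", 213, 1966), ("PA", 213, 1966),
]

species_counts = Counter(breakpoint_calls)
segment_totals = Counter(seg for seg, _, _ in breakpoint_calls)

for (seg, start, end), n in species_counts.most_common():
    rel = 100.0 * n / segment_totals[seg]
    print(f"{seg} DI {start}-{end}: {n} reads ({rel:.0f}% of {seg} DI reads)")
```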
Moreover, the abundant DI-RNA species identified by Illumina sequencing could also be found by SMRT sequencing, such as PA DI-492, PB1 DI-675 and PB2 DI-376 (highlighted in blue in Supplementary Table S3). Most importantly, DI-RNAs with multiple internal deletions could be identified using long-read SMRT sequencing, which was not achievable with Illumina sequencing. Among all the DI-RNAs obtained from Illumina sequencing, no DI-RNA with multiple internal deletions was identified, including PA DI-492 and PB1 DI-675, which were defined as double-internal-deletion DI-RNAs by SMRT sequencing. SMRT sequencing was applied in the following experiments because it identified rare DI-RNA species and provided more comprehensive information. The raw reads of the SMRT sequencing for all H7N9-infected individuals are listed in Supplementary Table S2. The three NPAs from H7N9-infected patients with the highest sample quality and the virus-infected NHBE cells were selected and subjected to SMRT sequencing. Consistent with the DI-RNA pattern observed in gel electrophoresis, a high abundance of DI-RNAs in PA, PB1 and PB2 was revealed by SMRT sequencing, with relatively lower abundance for the NP segment (Figure 3). In-depth sequence analysis of breakpoint identification suggested that SMRT long-read sequencing could precisely display the diversity and relative abundance of the DI-RNA species found in the three patient-derived NPAs (Figure 3 and Supplementary Table 2). As representatives, the three most abundant DI-RNA species of the PA, PB1, PB2 and NP genes from each NPA sample were selected. The internal deletions of these abundantly expressed DI-RNAs are graphically illustrated in Figure 4. The sequence information, including breakpoint positions and read counts, is detailed in Supplementary Table 2. Using this long-read sequencing approach, we were able to identify various DI-RNA species with single, double or even multiple internal deletions in the NPAs of H7N9-infected patients (Figure 4 and Supplementary Table 2). Most of these double or triple internal deletions consist of deletions of less than 20 nt preceding the large internal deletion. Five out of seven DI-RNA species found in the NPA of H7N9-infected patient #6 using Sanger sequencing could also be identified using SMRT sequencing. Among the seven DI-RNA species detected by Sanger sequencing, the PB1 2109 and 2127 DI-RNA species were the only ones that obtained more than 10 reads in SMRT sequencing (Supplementary Table 2); only one or two reads were obtained for the other DI-RNA species detected by Sanger sequencing. After aligning the DI-RNA sequences to the reference sequences, similar observations invariably resulted from SMRT sequencing of H7N9-infected NHBE cells, whereas, by contrast, DI-RNA was minimally detected in H1N1-infected cells (Figure 5 and Supplementary Table 2). The three representative DI-RNA species of the PA, PB1, PB2, and NP genes are shown in Figure 6. In line with the published sequences of H1N1 DI-RNAs, no conserved motifs at the breakpoint sites could be observed among the different H7N9 DI-RNA species. However, identical breakpoints at 2109 and 2089 nt in the PB1 segment could be found in all three patients' NPAs and in the H7N9-infected NHBE cells (Supplementary Table 2; highlighted in green).
In addition to the identical breakpoints found in the PB1 segment of the three clinical specimens, sequences before the breakpoint sites were similar or identical to the deleted sequences on the other side of the breakpoint sites (previously described as overlapping sequences [7]). Overlapping sequences (ranging from 2 to 23 nt) were observed around the breakpoint region of several DI-RNAs (Supplementary Table 2). As representatives, the longer overlapping sequences (ranging from 5 to 23 bp) in the three patients' NPAs and the identical PB1 DI-RNA species found in the clinical specimens are graphically illustrated in Figure 7. Longer overlapping regions (around 5-23 bp) were identified at the breakpoints of some of the most abundant DI-RNA species of PB2 and PA (Figure 7), which has not been previously reported in the literature. (Figure caption) Abundance of DI-RNAs in NPAs of three H7N9-infected patients determined by SMRT sequencing. RNAs were extracted from the NPAs of three H7N9-infected patients and subjected to long-read SMRT sequencing. The most abundantly expressed DI-RNA species, as well as the full-length RNA, in the PA, PB1, PB2, and NP segments are presented as relative abundance (%). Black bars represent the full-length influenza RNAs and DI-RNA species with more than 3% relative abundance in the PA, PB1, PB2, and NP segments; grey bars represent the remaining DI-RNA species with less than 3% relative abundance. Dotted lines represent internal deletions. Sequences of DI-RNAs are listed in Supplementary Table S3. The DI-RNA species with longer overlapping sequences around the breakpoint region are indicated (a-b, d-f) and detailed in Figure 7. Discussion This study demonstrated the robustness of SMRT sequencing in identifying DI-RNA species. SMRT sequencing could successfully demonstrate the diversity of DI-RNA species and the relative abundance of each species. With the long read length of SMRT sequencing, complete information on each DI-RNA could be obtained, since no fragmentation of the DI genome is required. For example, long overlapping sequences at the breakpoint region and, most importantly, DI-RNAs with multiple internal deletions were identified. In addition, influenza DI-RNAs have been reported in cultured cells with influenza B virus infection [11] and in H1N1-infected cultured cells [4], mice [25] and patients' NPAs [7,12]; however, the existence of DI-RNA in human infections with avian influenza A H7N9 virus has not been revealed. This study provides evidence of the existence of DI-RNAs in clinical specimens, cultured cells and mice during influenza A (H7N9) virus infection. However, whether the production of DI-RNA can be generalized to all influenza A (H7N9) viruses warrants further investigation, since robust DI-RNA production was observed in the H7N9-infected clinical specimens, but the H7N9 (AH1) strain used in this study is a low-pathogenic avian influenza (LPAI) virus. It will be especially important to investigate and compare DI-RNA production by both LPAI and highly pathogenic (HPAI) H7N9 influenza A viruses. The introduction of Sanger sequencing revolutionized the field of DI particles by enabling identification of the breakpoint location of a single DI genome. Since then, Sanger sequencing has become the traditional and standard sequencing platform for detecting and identifying DI-RNA species. Recently, NGS has been introduced, and several studies have attempted to use this new approach for identifying different DI species [7,11,12].
Apart from accessing the genomic information of a single DI genome, multiple DI species and their abundance can be determined by a single run of Illumina sequencing. Recently, a long-read sequencing platform, SMRT sequencing, has shown high-quality and promising sequencing results that may also be feasible for identifying DI-RNA species in clinical specimens. A comparison of Illumina and SMRT sequencing was performed before SMRT sequencing was used for the further investigation of DI-RNA species in clinical specimens. Our results demonstrated that both SMRT and Illumina sequencing can identify and display the diversity of DI-RNA species in the H7N9 (AH1)-infected mouse. In particular, the DI-RNA species detected at high abundance by SMRT sequencing were also abundantly found by Illumina sequencing. However, higher read counts and more comprehensive information on DI-RNAs were obtained from SMRT sequencing. As a result, SMRT sequencing was used for the further investigation of DI-RNA species in clinical specimens and infected NHBE cells. A detailed comparison between SMRT and Illumina sequencing is given in Table 3. In brief, with the generation of single-stranded circular DNA templates, the longer sequencing read length allows clear identification of DI breakpoint positions without fragmentation of the DI genome. Moreover, SMRT sequencing can also determine the relative abundance of DI species, because each small chamber on the SMRT chip is associated with a single DNA molecule. Our sequencing data demonstrate the robustness of SMRT sequencing as an alternative sequencing platform. With the long read length obtained by SMRT sequencing, comprehensive information on DI-RNA species could be obtained from both H7N9-infected clinical specimens and primary cultured cells. From the high-throughput sequencing data of the three clinical NPA samples, it was observed that none of these samples had identical DI formation patterns in any of the polymerase genes, apart from a few conserved DI patterns that were identified in multiple samples. Given that the current understanding of the formation of influenza DI-RNA is limited, the exact mechanism leading to the diversity of DI-RNA species production is still unknown. It is possible that either the polymerase complex initiates the generation of DI-RNA in a random manner, or molecular differences in the sequences of the internal genes of these patients' viruses give rise to this observation. With the feasibility of identifying the exact DI species present during influenza infection in a cultured cell system, the molecular mechanism of DI-RNA generation could be elucidated in further studies through bioinformatic analysis of SMRT-sequenced in vitro infection experiments. In addition to the identification of multiple DI-RNA species by SMRT sequencing, similar or identical sequences were found around the breakpoint region and on the other side of the deleted region, as previously described for overlapping sequences [7].
According to previously reported studies, these overlapping sequences are often around 2-6 nt long in H1N1-infected individuals [7,26]. In this study, apart from short similar or identical overlapping sequences, longer overlapping sequences of up to 23 nt were also observed around the breakpoints of several abundantly expressed DI-RNA species. These overlapping sequences may indicate that translocation of the viral polymerase occurs during influenza A virus replication. Although the molecular mechanism of DI-RNA generation is still unknown, several reports have shown increased accumulation of DI-RNA with mutations in the PA [12,27] and NS segments [28,29]. Recently, a report suggested that the polymerase PA D529N mutation led to reduced DI-RNA production during H1N1 infection [12]. However, the PA D529N mutation is absent in our tested samples and in other commonly used H7N9 reference strains. A comparative study of DI-RNA overlapping sequences determined by SMRT sequencing together with the amino acid substitutions in different segments may provide insights for further investigation of the mechanism of influenza DI-RNA generation. Robust expression of DI-RNA was observed in the H7N9-infected clinical specimens, cell cultures and mice. A recent study suggested that reduced DI-RNA production in H1N1 viruses led to low induction of antiviral responses and increased viral pathogenesis [12]. Unfortunately, no correlation between the severity of illness and DI-RNA expression was observed in the NPAs of the six Chinese patients with H7N9 infection. Due to the insufficient sample number, further investigation of any correlation between the presence of DI-RNA and the severity of illness is required. Because robust expression of DI-RNA was observed in H7N9-infected patients, we also identified the presence of DI-RNA in NHBE cells and mice with H7N9 infection. Unfortunately, the production of DI-RNA was not observed in the supernatant virions, which might be due to the low-MOI infection. In previous reports [1,30], H1N1 (WSN) DI-RNAs were discovered in undiluted passages of H1N1 (WSN) viruses or with high-MOI infection. To demonstrate the robustness of DI-RNA generation by the H7N9 (AH1) virus, a low-MOI infection with H1N1 (WSN) or H7N9 (AH1) was performed in NHBE cells. An insufficient amount of DI-RNA was observed in H1N1-infected NHBE cells due to the low-MOI infection. DI-RNA production was significantly higher in NHBE cells with H7N9 (AH1) infection compared to H1N1 (WSN) infection. Moreover, H1N1 (WSN) DI-RNAs were previously reported in mouse lungs [30] and in DI viruses [5] generated from MDCK cells. However, the H1N1 (WSN) DI-RNA species reported previously could not be identified in this study. Therefore, it is possible that low-MOI infection gives rise to other DI-RNA species, and further investigation comparing DI-RNA species in low-MOI and high-MOI infections is required. In conclusion, our data demonstrate the existence of DI-RNAs in clinical specimens, cultured cells and a mouse model during influenza A (H7N9) virus infection. We also show that SMRT sequencing is a promising alternative sequencing technique which provides comprehensive genetic information and the relative abundance of multiple DI-RNA species. Using third-generation sequencing, we identified multiple abundant DI-RNA species in different clinical specimens and their overlapping sequences at the breakpoint regions.
Increased DI-RNA synthesis was observed in H7N9-infected NHBE cells compared to the H1N1 (WSN) strain previously reported to generate DI-RNAs. Together, these findings reveal abundant synthesis of DI-RNAs during infection with the H7N9 influenza A virus. Accession number(s) The raw SMRT sequencing reads were submitted to GenBank under accession number SRP126530.
5,860.4
2019-01-01T00:00:00.000
[ "Biology", "Medicine" ]
Comparison of Modern Highly Interactive Flicker-Free Steady State Motion Visual Evoked Potentials for Practical Brain–Computer Interfaces Motion-based visual evoked potentials (mVEP) are an emerging trend in the field of steady-state visual evoked potential (SSVEP)-based brain–computer interfaces (BCI). In this paper, we introduce different movement-based stimulus patterns (steady-state motion visual evoked potentials, SSMVEP), without employing the typical flickering. The tested movement patterns for the visual stimuli included a pendulum-like movement, a flipping illusion, a checkerboard pulsation, checkerboard inverse arc pulsations, and reverse arc rotations, all with a spelling task consisting of 18 trials. In an online experiment with nine participants, the movement-based BCI systems were evaluated with an online four-target BCI speller, in which each letter may be selected in three steps (three trials). For classification, the minimum energy combination and a filter bank approach were used. The following frequencies were utilized: 7.06 Hz, 7.50 Hz, 8.00 Hz, and 8.57 Hz, reaching an average accuracy between 97.22% and 100% and an average information transfer rate (ITR) between 15.42 bits/min and 33.92 bits/min. All participants successfully used the SSMVEP-based speller with all types of stimulation pattern. The most successful SSMVEP stimulus was SSMVEP1 (pendulum-like movement), with average results reaching 100% accuracy and 33.92 bits/min for the ITR. Introduction Human vision is the most important sense for us to recognize and understand the world around us; the perception of motion is a basic sensation that develops in the early stage of neural system development. A brain-computer interface (BCI) is a system in which the user can, e.g., control or adjust an electronic device without the use of peripheral nerves or muscles, typically utilizing an electroencephalography (EEG) recording. Within the variety of EEG-based BCI paradigms, the steady-state visual evoked potential (SSVEP) is one of the fundamental ones [1]; thanks to its straightforward implementation it is a common choice for speller-oriented applications [2]. The typical SSVEP-based BCI uses a constantly flickering light source, e.g., LEDs or objects displayed on a computer screen, usually turning on/off with a stable frequency (typically 6-90 Hz) [3]. One of the fastest online SSVEP-based BCIs was presented by Chen et al. [4] in 2015. It reached an average accuracy and information transfer rate (ITR) of 99% and 267 bits/minute (bpm), with a peak ITR of 315 bpm. Over the recent decade [5], the ability to achieve high accuracy together with a high literacy rate (the proportion of users for whom the BCI achieves high accuracy) has increased significantly [6]. In two studies with 61 participants, stimulation frequencies ranging from 7.0-14.8 Hz in 0.2 Hz steps were used (utilizing the CCA method for frequency classification). For the online experiment, the trial duration varied between 2 and 4 s with an additional 0.5 s for gaze shifting. On average, all eighteen participants reached an accuracy of 94.0% and an ITR of 91.2 bpm, with a mean trial length of 2.8 s. In 2019, Chai et al. tested a radial zoom stimulation in a motion presentation pattern [20]. The authors compared this radial zoom stimulus to the typical on/off flickering and to the modulated Newton's ring motion (e.g., [14]) integrated into an 8-target graphical user interface (GUI). The stimuli were divided into squared and circled forms.
The flip frequencies of the 8 targets were between 8 and 15 Hz with ∆f = 1 Hz. The recorded data were processed offline with CCA utilized for target classification. In total, eight participants reached average accuracies of 89.7% and 93.4% for the SSMVEP-square-zoom and the SSMVEP-circle-zoom versions, respectively. For the squared and circled Newton's rings stimulations, the achieved accuracies were 76.3% and 77.5%, respectively. The highest offline ITR (42.5 bits/min) was reached for the circular radial motion stimuli with a classification time window of 3 s. In 2020, Zhang et al. presented an SSMVEP BCI approach utilizing a human gaiting sequence as a stimulus. In a four-class BCI setup, the system achieved an average accuracy of 88.9% across ten subjects [21]. The authors showed that the gait stimulus generated an additional sensorimotor response. In our preliminary study in 2019, Stawicki et al. tested an SSMVEP based on vertical motion of a stimulus (pendulum behavior) [22]. Seven participants, in a copy-spelling task with a 4-target system, achieved average accuracies of 99.0%, 100%, and 98.7% utilizing the following frequency ranges: 3.0-3.75 Hz, 6.00-6.67 Hz, and 9.23-12.0 Hz, respectively. In another study, Volosyak et al. 2020 compared three major VEP stimulation approaches: the common frequency-based SSVEP (also named fVEP), the m-sequence code-based stimulus (cVEP), and the SSMVEP [6]. Besides the performance, the study also examined personal preferences and demographic factors across 86 subjects, with a copy-spelling task. All 86 participants finished the cVEP with an average accuracy and ITR of 97.8% and 40.2 bpm, respectively. Eighty-three (83) subjects successfully accomplished the fVEP speller task with a mean ITR and accuracy of 31.9 bpm and 95.3%, respectively. The SSMVEP was successfully finished by 80 subjects with a mean ITR and accuracy of 26.4 bpm and 91.1%. Here, the SSMVEP stimulus was designed as a continuous vertically decreasing/increasing motion, creating the illusion of a flipping coin. An overview of recent studies implementing SSMVEP stimulation is presented in Table 1. The SSMVEP can also be utilized for vision testing; in 2020, Zheng et al. compared six motion visual stimuli (oscillating expansion-contraction of concentric rings, reversal checkerboard, reverse vertical square-wave gratings, reversal horizontal and vertical sinusoidal gratings, and brief-onset vertical sinusoidal gratings) for objective visual acuity assessment [23]. The authors found differences in the harmonic component of the motion-based SSVEP response. Some of these reported findings appeared particularly interesting and called for further investigation. Our CCA maximum coefficient spectra differ from those reported in the literature (e.g., Han et al. 2018 [18]): we found that some of the motion-based stimulations do in fact elicit a harmonic response, as was the case in all of our previous studies. The presented study was designed and performed to explore and assess different SSMVEP1-5 motion behaviors (linear, radial, zooming) and their harmonic responses in a practical usage scenario, through comparison of the individual classification accuracies and the overall system speed (information transfer rate). In order to make a general assessment of the SSMVEP stimuli, including the new types SSMVEP4 and SSMVEP5 introduced in this paper, they were all directly compared with the typical SSVEP flickering stimuli.
If the motion-based stimuli prove to be faster or more comfortable for users, they could influence visual stimulus designs in future research or applications. This paper is structured as follows: the methodology in Section 2, followed by the results in Section 3 and finally the discussion of our findings in Section 4. Materials and Methods This section describes the hardware and software solutions of the reported study, all of which are necessary to replicate this experiment. Ethics Statement All participants (healthy young students) gave written informed consent. Information needed for the analysis was stored anonymously. This research was approved by the ethical committee of the medical faculty of the University Duisburg-Essen. Subjects The total number of participants was nine (3 males, 6 females, 0 diverse) with an average age of 24.7 years, ranging from 20-33 years, and a standard deviation (SD) of 4.30; all were students of the Rhine-Waal University of Applied Sciences. The participants had normal or corrected-to-normal vision, 6 had previous experience with SSVEP-based BCI systems, and 8 were right-handed. The experiment took place in a regular laboratory room, while the light intensity was kept at an acceptable level (ambient indirect daylight). The experiment took approximately 60 min. All participants received a financial reward for their participation. Questionnaire The questionnaire for each tested stimulus contained eight questions; two of them were answered after a short familiarization run (spelling of the word "BCI") and six were answered directly after spelling the word "INVITE" (which was the main task for further evaluations). In the questions, the participants had to subjectively assess their impression of the tested system on a Likert scale of 1-7, where 1 means full agreement with one term and 7 total disagreement (agreement with the opposite term). The terms to evaluate after the training spelling ("BCI") were focused on the visual stimulation only: relaxed-exhausting and comfortable-annoying. The question terms that followed the main spelling tasks addressed the user friendliness of the tested interface and the spelling system itself. The terms used for this were: efficient-inefficient, clear-confusing, exciting-boring, inventive-conventional, enjoyable-annoying, and fast-slow. The motivation for these questions was the User Experience Questionnaire described in [24], similar to our recent study Volosyak et al. 2020 [6]. Visual Stimulation The self-written, custom-made BCI program (based on OpenGL and C++) was developed using Microsoft Visual Studio 2015 Community (Microsoft, Redmond, WA, USA) and presented the visual stimulation on a standard LCD monitor (24-inch, Acer Predator XB252Q) with a screen resolution of 1920 × 1080 pixels and a vertical refresh rate V_RR of 240 Hz. There were two types of stimulation: a full-color circle (SSVEP, SSMVEP1, SSMVEP2) and a checkerboard circle (SSMVEP3-5), see Figure 1. Based on these stimulation objects, we investigated five movement patterns with the motion designs described in the following subsections. The movement speed of the tested stimulation patterns was generated utilizing the following base frequencies: 7.06, 7.50, 8.00, and 8.57 Hz, and the trigonometric function cos(2π f_K i / V_RR), where f_K is the base frequency, i is the frame index, and V_RR is the refresh rate of the monitor. The length/angle of the movement step/rotation was calculated depending on the number of frames in each cycle of the base frequency of the corresponding movement behavior.
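As a rough illustration of this frame-wise stimulus generation (the original software was written in C++/OpenGL; the Python sketch below only mirrors the described modulation, and the pixel amplitude is an arbitrary assumption), a motion-based target derives its displacement from cos(2π f_K i / V_RR), while a plain SSVEP target is switched on and off with the sign of sin(2π f_K i / V_RR):

```python
import math

V_RR = 240                            # monitor refresh rate (Hz)
FREQS = [7.06, 7.50, 8.00, 8.57]      # base stimulation frequencies of the study

def motion_offset(f_k: float, i: int, amplitude_px: float = 40.0) -> float:
    """Vertical offset (pixels) of a pendulum-like SSMVEP target at frame i.
    The 40 px amplitude is an illustrative value, not taken from the paper."""
    return amplitude_px * math.cos(2.0 * math.pi * f_k * i / V_RR)

def flicker_state(f_k: float, i: int) -> bool:
    """On/off state of a plain SSVEP target at frame i (square wave derived
    from the sign of the sine, i.e. a 50% duty cycle)."""
    return math.sin(2.0 * math.pi * f_k * i / V_RR) >= 0.0

if __name__ == "__main__":
    f = FREQS[3]
    # frames per half cycle, e.g. 14 frames for 8.57 Hz at 240 Hz
    print(round(V_RR / (2 * f)))
    for i in range(8):                # first few frames of one target
        print(i, round(motion_offset(f, i), 1), flicker_state(f, i))
```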
The even number of frames (in a frequency cycle) ensured the same number of steps/angles in both directions (forward and reverse, clockwise and counterclockwise) of the motion behavior (linear, radial, zooming). The frequencies were chosen to meet the following conditions: a smooth movement effect for a better user experience (e.g., for the highest frequency of 8.57 Hz the number of steps in one direction is 14 with V_RR = 240 Hz, while for 3 Hz it would be 34 frames), and a fairly strong induced harmonic response of the VEP stimulation (a relatively strong harmonic response is expected for certain stimulus designs), which is commonly known to be the case for lower stimulus frequencies [3]. SSVEP In the SSVEP stimuli, a white full circle behind the characters appeared and disappeared at a rate determined by the utilized frequency. The stimuli were generated with a square-wave function based on sin(2π f_K i / V_RR), where f_K is the frequency, i is the frame index, and V_RR is the refresh rate of the monitor (see Figure 2a). SSMVEP1 In the SSMVEP1 design, the position of the stimulation circle was based on the cosine function and the condition that the maximum and minimum cosine-based positions lie above and below the original (center/starting) position, see Figure 2b. SSMVEP2 In the SSMVEP2 design, the vertical scale (vertical size) of the stimulus changes stepwise, according to the base function, see Figure 2c. SSMVEP3 The design of the SSMVEP3 stimuli consisted of 6 layers of rotated black-white arcs (cutouts of a full circle). The contraction behavior in this design was applied to every layer in order to ensure an equal contraction ratio of the whole stimulus, see Figure 2d. SSMVEP4 The SSMVEP4 stimulus consisted of 6 stacked layers of checkerboard arcs (cutouts of a full circle); the ratio of the displayed layers was kept constant during the opposite arc oscillations, see Figure 2e. SSMVEP5 The SSMVEP5 was generated using opposite rotation angles of the arc-layer-based black-white circles, which rotated stepwise in opposite directions, see Figure 2f. Data Acquisition The EEG was recorded with the g.USBamp (g.tec, Graz, Austria) USB biosignal amplifier (sampling rate 600 Hz), connected to an Intel Core i7-8700K @3.7 GHz CPU-based desktop PC running the Microsoft Windows 10 Education (1809) (64-bit, Microsoft, Redmond, WA, USA) operating system. The PC (Dell Precision 3630 Tower) was equipped with 16 GB RAM and an NVIDIA GeForce GTX 1080 graphics card. Since the g.USBamp has 16 available signal inputs (besides ground and reference), passive EEG electrodes (g.LADYbirdPASSIVE, g.tec, Graz, Austria) were utilized, mounted on the positions available in the g.GAMMAcap 2 (according to the international 10-10 system, extended with intermediate positions): AFZ (ground), CZ (reference), and the positions of the 16 signal electrodes were PZ, P3, P4, P5, among others. Abrasive electrode gel was used to bring the impedances below 5 kΩ. A digital notch filter around 50 Hz was applied before the data were further processed. Data Analysis An overwhelming majority of the reported studies utilize the CCA method for data analysis (e.g., [14,15,[18][19][20][21]).
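Since CCA-based detection dominates the cited literature, a minimal sketch of CCA frequency classification is shown below (Python with NumPy/scikit-learn). This is not the classifier used in the present study, which relies on MEC with filter banks; the sampling rate is taken from this paper's setup, and everything else is illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 600           # amplifier sampling rate (Hz)
N_HARMONICS = 2    # illustrative choice

def reference_signals(freq: float, n_samples: int) -> np.ndarray:
    """Sine/cosine reference matrix (n_samples x 2*N_HARMONICS) for one frequency."""
    t = np.arange(n_samples) / FS
    cols = []
    for h in range(1, N_HARMONICS + 1):
        cols.append(np.sin(2 * np.pi * h * freq * t))
        cols.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(cols)

def cca_classify(eeg: np.ndarray, freqs) -> int:
    """Index of the stimulation frequency with the highest canonical correlation.
    eeg: (n_samples x n_channels) window of EEG data."""
    scores = []
    for f in freqs:
        Y = reference_signals(f, eeg.shape[0])
        Xc, Yc = CCA(n_components=1).fit_transform(eeg, Y)
        scores.append(np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1])
    return int(np.argmax(scores))
```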
Based on our previous experience with the minimum energy combination (MEC) method [7,8,25,26], we decided to evaluate it with filter banks (FB) in a practical scenario (spelling task), similar to our previous studies implementing the SSMVEP1 or SSMVEP2 stimuli, where the FB-CCA [22] or CCA [6] methods were utilized. Minimum Energy Combination (MEC) The MEC uses principal component analysis (PCA) to cancel out components that do not contribute to the stimulus response. The response to a specific stimulus frequency f, as well as its N_h harmonics, can be modelled as the voltage between the i-th electrode and a reference at time t, composed of phase-shifted sine and cosine terms plus a channel-specific environmental nuisance and noise signal E_i. Generally, we consider M samples of EEG data, recorded for each of N signal electrodes at a sampling frequency of F_s Hz. Let N_h denote the number of harmonics considered for classification; for each stimulus frequency f in {f_1, ..., f_K}, a reference matrix Y ∈ R^(M×2N_h) is constructed whose columns contain sin(2π k f t) and cos(2π k f t) for k = 1, ..., N_h. For the reference Y corresponding to a fixed stimulation frequency f and the EEG signal matrix X ∈ R^(M×N) holding the data for classification, the model can be written as X = Y A + E, where the phases and amplitudes corresponding to Y are stored in the matrix A ∈ R^(2N_h×N) and the noise signals are stored in E ∈ R^(M×N). The goal of the MEC is to amplify the SSVEP amplitudes and filter out the noise signal. For this, the noise matrix is estimated by orthogonal projection, Ẽ = X − Y (YᵀY)⁻¹ Yᵀ X. Next, a weight vector ŵ is sought that minimizes the energy of the combined noise, ŵ = argmin_w ||Ẽ w||². This optimization problem is solved by calculating the eigenvalues λ_1 ≤ λ_2 ≤ ... ≤ λ_N and the corresponding eigenvectors v_1, ..., v_N of ẼᵀẼ, which are then used to define a set of weight vectors w_l = v_l / sqrt(λ_l), yielding a set of virtual channels (components) s_l = X w_l. To detect the SSVEP response for the specific frequency f, the power of that frequency and its N_h harmonics is estimated as P̂_f = Σ_l Σ_k ||Y_kᵀ s_l||², where Y_k ∈ R^(M×2) is the sub-matrix of the reference containing the sine and cosine data of the k-th harmonic [26]. These SSVEP power estimations are computed for all K considered frequencies, yielding the normalized probabilities p̂_k = P̂_{f_k} / Σ_j P̂_{f_j}. Filter Banks (FBs) As a last step, a filter bank method utilizing an 8th-order Butterworth filter was applied (see the example in [27]). For each sub-band m, lower and upper cut-off frequencies were defined, and the sub-band power estimates were combined as a weighted sum (Eq. (6)) with the weights a_m set to a_1 = 0.4, a_2 = 0.2, a_3 = 0.15, a_4 = 0.13, and a_5 = 0.11, as proposed in [27]. A value ∆C was defined as the distance between the highest and the second highest p̂_k, for k = 1, ..., K. The BCI output corresponding to the winning class C was only produced if ∆C > β_T. This distance-based classification criterion was successfully tested with the CCA [6] and FB-CCA [22] methods. Moreover, in this experiment, the number of considered harmonics was set to N_h = 5, the number of signal channels was N = 16 (see Data Acquisition) and the number of stimulus classes was K = 4. Classification Window The data were classified online every 0.25 s, with each new data block, for time windows greater than or equal to the minimal time window (1 s). A sliding-window technique was introduced in order to collect up to 64 blocks of data (maximum time window of 16 s), with a classification attempt calculated for every new block on the collected data. After reaching the maximum of 64 blocks, the oldest EEG data block was shifted out and a new block was appended at the end of the 16 s time window.
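A condensed sketch of the MEC power estimation described above, applied to one sliding-window buffer of EEG data (0.25 s blocks, up to 16 s), is given below (Python/NumPy). The filter-bank weighting with a_1...a_5, the threshold β_T and the block bookkeeping are omitted, and the number of retained virtual channels is an illustrative choice rather than the energy-ratio criterion usually applied:

```python
import numpy as np

FS = 600     # sampling rate (Hz)
N_H = 5      # harmonics considered for classification

def harmonic_refs(freq: float, n_samples: int) -> np.ndarray:
    """Reference matrix Y with sine/cosine pairs for freq and its N_H harmonics."""
    t = np.arange(n_samples) / FS
    cols = []
    for h in range(1, N_H + 1):
        cols += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(cols)

def mec_power(X: np.ndarray, freq: float, n_keep: int = 4) -> float:
    """MEC SSVEP power estimate for one candidate frequency.
    X: (n_samples x n_channels) EEG window; n_keep is illustrative."""
    Y = harmonic_refs(freq, X.shape[0])
    # estimate the noise by projecting out the stimulus-related part
    E = X - Y @ np.linalg.lstsq(Y, X, rcond=None)[0]
    # eigenvectors of E^T E with the smallest eigenvalues minimise noise energy
    eigval, eigvec = np.linalg.eigh(E.T @ E)
    W = eigvec[:, :n_keep] / np.sqrt(eigval[:n_keep])
    S = X @ W                          # virtual channels
    power = 0.0
    for k in range(N_H):
        Yk = Y[:, 2 * k:2 * k + 2]     # sine/cosine of the k-th harmonic
        power += np.sum((Yk.T @ S) ** 2)
    return power / (N_H * S.shape[1])

def normalised_powers(X: np.ndarray, freqs) -> np.ndarray:
    """Probabilities p_k; the winner is emitted only if its margin over the
    runner-up exceeds the threshold beta_T (not shown here)."""
    p = np.array([mec_power(X, f) for f in freqs])
    return p / p.sum()
```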
This block length of 0.25 s and the maximum time window of 16 s were chosen based on preliminary tests, the amplifier data transfer rate, and the computational load of the FB-based MEC classification technique. This approach was based on our previous studies with the sliding window technique [7,8], under the assumption, that the visual response will build up (till some level) as long as the user continues to gaze at the stimulus. Procedure Participants were seated in a comfortable chair and instructed to stare (fix their gaze) at the center of the target (circle or circled checkerboard pattern) containing the desired letter and not to follow the movements of the SSMVEP stimuli. Once the details of the experiment were explained, the consent form was signed, the participants were prepared for the EEG recording experiment. The starting value of the personal-dependent threshold (β T = 0.3) was empirically determined with a number of pre-tests in order to achieve the best accuracy and typing speed. Afterwards, the threshold (β T ) was manually adjusted (if needed) during a short introduction (few selections) and confirmed by a repeated selection of each target two times. In a total of 9 cases (sessions) the β T was lowered to 0.29 for both spelling tasks. The order, in which the different SSMVEP patterns were tested (sessions), was randomized, as well as the frequency arrangement of the stimuli (shuffled before each spelling task). First, the participants were asked to type a short word "BCI" (9 trials) with a fixed selection time parameters (the minimum and maximum time windows for a selection were set to 4 s; 16 blocks of data) and an additional red arrow was pointing at the proper target. Since the letters were visible the whole time the user could learn the interface behavior. After this short training round, the participants were asked to answer the two questions about the visual stimulation. Afterwards, the main spelling task T S was started. Here, the user was asked to write the word "INVITE" (18 trials) without the red arrow guiding the user as she/he needed to select the proper letter or group of letters by looking at the stimuli behind the letters. For this main task, the classification time window varied between 1 and 16 s (see Section 2.6). After finishing the main task, the participants were asked to answer the six questions regarding the tested system. The main task was used for the evaluation of the system performance. In total, 2 spelling tasks (session) were carried out by every participant. After every target classification (performed selection), a short break was introduced, to allow users to shift their gaze to the next target (gaze shift or pause). This pause time was set to 2 s in both tasks. The time between the different SSMVEP patterns varied between the participants, while some participants were eager to test the next SSMVEP stimuli design (next session) after just 30 s, others waited up to 8 min between the sessions. In the main task (spelling "INVITE"), the frequencies f 1 , f 2 , and f 3 were selected approximately the same number of times, if no errors were made. For more details regarding the spelling behavior of the GUI see [6]. 
Information Transfer Rate (ITR) The overall performance of the tested scenarios was calculated using the popular information transfer rate (ITR) formula introduced in [1]: B_T = log2(N) + P log2(P) + (1 − P) log2((1 − P)/(N − 1)), where B_T is the ITR in bits per trial, P is the accuracy of the experiment, and N is the number of targets (here, N = 4). The ITR in bits per minute is then B_M = B_T · 60 · C_N / T, where C_N is the total number of target selections in the experiment and T is the total time of the experiment in seconds. An online ITR calculator can be found at https://bci-lab.hochschule-rhein-waal.de/en/itr.html. The maximum possible ITR that can be reached with this setup is 40.00 bits/min (minimum classification window of 1 s plus 2 s for the gaze shift). Statistical Analysis To test for statistical significance, we utilized a one-factor ANOVA with repeated measures. In some cases, the non-parametric equivalent of the ANOVA (Friedman's test) was used. Additionally, a paired t-test was applied to determine which pairwise groups differed significantly. The significance level was p < 0.05 for all tests. Results In this paper we present the concept of new designs for flicker-free steady-state motion visual evoked potentials, evaluate them with well-established methods, and compare them to previously reported findings. The performance results of the online experiment, the accuracies and the information transfer rates (ITRs, Equation (8)), are presented in Table 2. In this study we extend the previous findings, SSMVEP1 vs. SSVEP in [22] and SSMVEP2 vs. SSVEP vs. cVEP in [6], in a comprehensive comparison across them and the novel stimulus designs (SSMVEP4 and SSMVEP5). All participants were able to successfully finish the spelling tasks; hence the literacy/efficiency rate was 9/9 for all tested stimulation scenarios. The most successful SSMVEP stimulus was SSMVEP1 (pendulum-like movement), with average results reaching 100% accuracy and 33.92 bits/min for the ITR. A detailed box plot presenting the total times of the "INVITE" task is shown in Figure 3. Statistical Tests For the statistical verification, a one-factor repeated-measures ANOVA was used. Since the accuracy results violate the normality assumption of the ANOVA, Friedman's test (the non-parametric equivalent of the ANOVA) was used to compare the accuracies, resulting in p = 0.177. Since no accuracy differences were found, we cannot reject the null hypothesis H_0. To check the ITR and total time results for statistical significance, a detailed pairwise comparison (t-test) was applied and the resulting p-values are reported in Table 3. Questionnaire Results We tested the statistical significance of the questionnaire results separately for each question with Friedman's test. The results showed that the users' responses to the "efficient", "exciting", "enjoyable" and "fast" questions (options) differed significantly between the tested stimuli, with p-values of p < 0.001, p < 0.001, p < 0.05, and p < 0.01, respectively. To further analyze the results, we used pairwise comparisons with the Wilcoxon signed-ranks test (WSR), see Supplementary Figure S3. The user ratings regarding the visual stimulation mostly did not differ significantly, yet the individual ratings regarding the system did (see Figure S3). Based on the user ratings, none of the systems was found to be more exhausting than the others.
Most of the participants rated the SSMVEP systems as the clear ones, especially the SSMVEP1, which received same ratings as the SSVEP scenario, with an average scores of 1.1 and 1.2 for SSVEP and SSMVEP1, respectively). Most of the SSMVEP systems were also rated as the more inventive ones, SSVEP score 1.8 and SSMVEP average scores of 1.7, 1.6, 2.1, 2.2, 2.2 for SSMVEP1-5, respectively. Almost all tested systems were subjectively rated as fast ones besides SSMVEP4 (checkerboards radial contraction-expansion motion), which received an almost moderate score of 3.4 between fast and slow. SSVEP score was 1.1, SSMVEP1-3 reached 1.6, 1.6, 2.0, respectively, and SSMVEP5 was scored 2.9. These ratings showed significant differences between SSVEP, SSMVEP1, SSMVEP2, and SSMVEP3 against SSMVEP4 with p < 0.05 (WSR). Moreover the differences in ratings between SSMVEP3 and SSMVEP5 were significant (WSR p < 0.05). In the case of exciting vs. boring ratings, the participants rated on average (scores in brackets), the SSMVEP1 (1.9) and SSMVEP2 (1.8), together with the SSVEP (2.1), as more exciting than the SSMVEP3 (2.9), SSMVEP4 (3.7), and SSMVEP5 (3.2). A pairwise signed-ranks test, showed significant differences in favor of SSVEP, SSMVEP1, and SSMVEP2 against SSMVEP 4 and SSMVEP5 (WSR p < 0.05). While ranking the enjoyable vs. annoying question, the users rated the SSMVEP4 system as the least enjoyable (3.8) amongst all tested versions with scores of 2.2 for SSVEP, 2.2 for SSMVEP1, 2.2 for SSMVEP2, 3.1 for SSMVEP3, and 3.0 for SSMVEP5. These ratings were significant for SSMVEP4 against the SSVEP and SSMVEP4 vs. SSMVEP2 (WSR p < 0.05). All other SSMVEP systems, including the SSVEP were found to be fairly enjoyable by the users. In the case of subjective response to efficiency, the highest scores, besides the SSVEP (1.3), were received by the SSMVEP1 (1.4) and SSMVEP2 (1.2) systems. Here, a statistical significance was found between the SSMVEP1 vs. SSMVEP5 (WSR p < 0.05), and SSVEP vs. SSMVEP5 (WSR p < 0.05). The other SSMVEP systems were rated as fairly efficient, with the average scores of SSMVEP4 2.6, and SSMVEP5 2.3. Discussion We tested five different SSMVEP stimulations with different designs and behaviors, two designs were moving in the vertical axis (one with an up and down movement, while the other one changed its vertical size), one design represented radial oscillation movements, and two represented radial movements of an arc's segment (one rotational movement and one oscillation movement). All were compared to the standard SSVEP on-off appearance. In order to test a practical implementation of this novel stimulus, all designs were utilized in a four target 3-step spelling application and the users were asked to type a specific word with this BCI system. We compared the five different SSMVEP stimuli to the traditionally SSVEP flickering and evaluated the accuracy, performance and questionnaire-based user friendliness for each of them. Some of the tested designs were demonstrated in our previous publications (SSMVEP1 in [22], SSMVEP2 in [6]) or literature SSMVEP3 in, e.g., [11,17,28], the SSMVEP4 and SSMVEP5 are novel designs. Our results show that one of the presented SSMVEP scenarios (SSMVEP1) is as fast and accurate as the standard SSVEP. This scenario consisted of a translational motion (up & down). These findings confirm our preliminary study results presented in [22]. The second best performance results were achieved with the SSMVEP2 design. 
The accuracies did not differ significantly between any of the tested systems, all users were able to achieve close to perfect accuracies (> 97%). The statistical results of the achieved accuracies show that all tested SSMVEP stimuli were as accurate as the standard SSVEP stimulation. This proves that all of the tested SSMVEP stimuli can easily induce a strong steady-state response [29], as discussed in [18][19][20]. We also examined a broader frequency response of the tested SSMVEP designs, where we compared the responses with an additional recording utilizing the CCA maximum correlation coefficient spectrum (see Figure S4). This spectrum shows, that the tested stimuli (SSMVEP1, SSMVEP2, SSMVEP3, and SSMVEP4) can also induce a high harmonic response, which is contrary to the findings of Han et al. [18], where a similar stimulation design to SSMVEP4 was examined. Our findings on the subjective discomfort, that the SSMVEP approach is less exhausting and more comfortable-according to the questionnaire-are in-line with the literature [20]. We did not find any significant differences between the user questionnaire responses of the SSVEP against SSMVEP1 or SSMVEP2 (details in Supplementary Figure S3 and Table S1). This means, the motion behavior with similar performance to SSVEP (SSMVEP1) is not subjectively better than the SSVEP. On the other hand, even though the SSMVEP2 was not faster than SSVEP, it received a subjectively comforting user rating on average. The importance of this can be extended, the SSMVEP1 was as good as the SSVEP, thus it could replace the flickering approach in applications that have sufficient space on the screen for the movement representation, or even better, by utilizing the SSMVEP2 instead of the SSVEP, where the user comfort and preferences are traded off against the performance of the system. Interestingly, while the translational movement based stimulations (SSMVEP1, SSMVEP2, SSMVEP3, and SSMVEP4) the harmonic component is fairly visible, in the rotational based stimuli (SSMVEP5) only the fundamental component is visible in the CCA maximum coefficient spectrum (see Figure S4). This observation confirms the findings of [19], where the rotation based SSMVEP did not induce any visualizable harmonic response in the presented spectrum. When comparing the checkerboard stimuli presented in this study with the ones tested in recent literature [18,19], we can see that the designs tested in this study consisted of smaller number of stimuli and slightly reduced number of single elements in the checkerboard pattern. In the GUI design, the letter labels partially covered the stimulation surface; this could have had an impact especially on the circulated checkerboard designs (SSMVEP3, SSMVEP4, and SSMVEP5). Only frequencies between 6 and 8 Hz were examined, hence, other ranges could be explored, particularly frequencies from 4 to 6 Hz. The tested frequency range (7.06-8.57 Hz) was chosen based on our preliminary study [22] as a trade-off between speed and accuracy. The lower performance and thus the long spelling time of the SSMVEP4 stimuli might influence the subjective user comfort response against this design. Further research should compare the FB-CCA and FB-MEC methods and investigate the influence of the FB composition on the SSMVEP classification, e.g., how many FBs are sufficient and which weight combination is optimal. 
Our results show that for future applications, the SSMVEP1 and SSMVEP2 should be focused on, as they induced the highest performance and best user rating results. A combination of the SSMVEP1 & SSMVEP2 could also be further investigated, with the possibility of providing a great user experience/comfort as well as a satisfactory performance. Most of the compared SSMVEP designs (SSMVEP1, SSMVEP2, SSMVEP3, and SSMVEP4) were implemented based on translational movement, however, the SSMVEP5 on the other hand, was based on rotational movement. This rotation-based stimulus requires further investigation with different designs and additional examination of the harmonic components. Future research in this area should also consider implementing the duty cycle, in order to increase the user's comfort, by changing the movement ratio in a cycle, similar to SSVEP duty cycle, where a stimuli is not 50% on and 50% off but, e.g., 20% on and 80% off in a cycle. The findings of this study suggest that SSMVEP can be applied when SSVEP is found to be annoying, confusing or destructive [20] like in a virtual reality environment, while keeping the overall performance at a similar level to the SSVEP. For this application, our results suggest that the optimal movement would incorporate translational components in their movement behaviors. For example, in VR applications, a motion-based stimuli could blend into the VR environment and still be easily detectable by the BCI system. Conclusions We tested the motion evoked potentials to find a more comfortable alternative to flickering visual stimulation. While we found one motion-based scenario (SSMVEP1) that was equivalent to the SSVEP regarding performance, none of the tested stimuli was found to be subjectively more comfortable than the traditional flickering stimuli. Our results confirmed previous findings, that oscillatory motion induces a steady-state response and have extended the research field of the flicker-free steady-state motion-based visual evoked potentials for further stimulus designs and evaluation. Our results suggest that future experiments implementing different translational and/or "flipping" motion-based behaviors, have a high chance of finding a more comfortable alternative. Based on the achieved accuracy and ITR performance results, supported by the positive user comfort ratings, we recommend replacing the uncomfortable on-off flickering with the motion-based stimuli approach.
7,397.6
2020-09-28T00:00:00.000
[ "Computer Science" ]
Novel Functionalized Polythiophene-Coated Fe3O4 Nanoparticles for Magnetic Solid-Phase Extraction of Phthalates Poly(phenyl-(4-(6-thiophen-3-yl-hexyloxy)-benzylidene)-amine) (P3TArH) was successfully synthesized and coated on the surface of Fe3O4 magnetic nanoparticles (MNPs). The nanocomposites were characterized by Fourier transform infrared (FTIR) spectroscopy, X-ray diffractometry (XRD), Brunauer-Emmett-Teller (BET) surface area analysis, transmission electron microscopy (TEM) and vibrating sample magnetometry (VSM). P3TArH-coated MNPs (MNP@P3TArH) showed higher capabilities for the extraction of commonly used phthalates and were optimized for the magnetic solid-phase extraction (MSPE) of environmental samples. Separation and determination of the extracted phthalates, namely dimethyl phthalate (DMP), diethyl phthalate (DEP), dipropyl phthalate (DPP), dibutyl phthalate (DBP), butyl benzyl phthalate (BBP), dicyclohexyl phthalate (DCP), di-ethylhexyl phthalate (DEHP) and di-n-octyl phthalate (DNOP), were conducted by gas chromatography with flame ionization detection (GC-FID). The best working conditions were as follows: sample at pH 7, 30 min extraction time, ethyl acetate as the elution solvent, 500-µL elution solvent volume, 10 min desorption time, 10-mg adsorbent dosage, 20-mL sample loading volume and a 15 g·L−1 concentration of NaCl. Under the optimized conditions, the analytical performance showed a linear range of 0.1–50 µg·L−1 and a limit of detection of 0.08–0.468 µg·L−1 for all of the analytes studied. The intra-day (n = 7) and inter-day (n = 3) relative standard deviations (RSD%) were in the ranges of 3.7–4.9 and 3.0–5.0, respectively. The stability and reusability studies suggested that the MNP@P3TArH could be used for up to five cycles. The proposed method was applied to the analysis of real water samples, namely commercial bottled mineral water and bottled fresh milk, whereby recoveries in the range of 68%–101% and RSD% lower than 7.7 were attained. Introduction Phthalates, or phthalate esters, are non-halogenated esters of phthalic acid that have been used as plasticizers since they were first recognized in 1880 as a replacement for camphor in nitrocellulose [1]. Nowadays, phthalates can be found in many different matrices in our environment and are widely utilized as plasticizers in the PVC industries, in products ranging from floors, hoses and cables (building materials) to toys and medical appliances [2]. Phthalates are also used as components of inks, adhesive materials, lacquers, sealing and packing materials, surface-treatment materials, solvents and fixing agents in fragrances, as well as additives in cosmetics [3][4][5]. They have become emerging pollutants and are harmful to humans, especially children, since they are not chemically bound in plastics. Standard, Reagents and Chemicals Analytical grade ferric chloride, ferrous chloride, ammonia solution (25 wt %), thiophene, 4-hydroxybenzaldehyde, acetonitrile, potassium permanganate, 4-aminophenol, 3-bromothiophene, 1,6-dibromohexane, N-bromosuccinimide, acetic acid, sodium hydrogen bicarbonate, potassium iodide, potassium carbonate, tetrahydrofuran, methanol, hydrochloric acid, acetone and ethyl acetate were purchased from Merck (Darmstadt, Germany). Acetone was procured from Fisher Scientific (Loughborough, UK). Thiophene carboxaldehyde, polyvinyl alcohol and n-butyllithium (2.0 M in cyclohexane) were obtained from Sigma Aldrich (Milwaukee, WI, USA).
Magnesium sulfate anhydrous, denatured ethanol and hexane were received from J. Kollins (Parkwood, Australia), while dimethyl sulfoxide-d6 (DMSO-d6) and phthalate esters were purchased from Acros Organics (Geel, Belgium). Ultrapure water was prepared by a model Aqua Max-Ultra ultrapure water purification system (Zef Scientific Inc., San Diego, CA, USA). Stock solutions of 1000 mg·L−1 of the standards were prepared by dissolving appropriate amounts of the compounds in methanol; they remain stable for three months if stored in a refrigerator at 4 °C. Working standard solutions were prepared daily by diluting the stock standard solution to the required concentrations. Instruments The Fourier transform infrared (FTIR) spectra were recorded on a Perkin-Elmer FTIR spectrometer between 4000 and 400 cm−1, with a resolution of 2 cm−1. Structural elucidation was performed using 1H NMR (JEOL, 400 MHz). The pore diameter and BET surface area were determined from low-temperature nitrogen adsorption isotherms at 77.40 K using a Quantachrome Autosorb Automated Gas Sorption System (Quantachrome Instruments, Boynton Beach, FL, USA). X-ray powder diffraction (XRD) analysis was conducted with a Panalytical model Empyrean diffractometer (Panalytical, Almelo, Netherlands) at 40 kV and 35 mA using Cu Kα radiation (λ = 1.54059 Å). Morphological analyses of the synthesized products were conducted by transmission electron microscopy (TEM) using an FEI Tecnai G2 spectra microscope (FEI, Hillsboro, OR, USA). The magnetic properties were tested using a vibrating sample magnetometer (VSM) Model 9600 (Quantum Design Inc., San Diego, CA, USA). Magnetization measurements were carried out in an external field of up to 15 kOe at room temperature. Separation and detection of the target analytes were performed by a Shimadzu 2010 gas chromatograph (Shimadzu, Kyoto, Japan) equipped with a split/splitless injector and a flame ionization detector (FID). A DB-5 Agilent fused-silica capillary column (Agilent, Santa Clara, CA, USA) (30 m × 0.32 mm i.d. × 0.25 µm film thickness) was applied for separation of the analytes. Helium (99.999% purity) was used as the carrier gas at a constant flow rate of 4 mL·min−1. Chromatographic conditions were as follows: the temperatures of the injector and detector were set at 260 and 280 °C, respectively, and the injection port was operated in splitless mode. The oven temperature was held at 150 °C for 1 min, increased to 280 °C at 8 °C·min−1, and held for 3 min. Polymerization of 3TArH and Thiophene Monomers on the Surface of MNPs The preparation of MNP@PTh and MNP@P3TArH NPs involves two steps. Briefly, Fe3O4 was prepared by the co-precipitation method [57]. FeCl3·6H2O (8.48 g, 30 mmol) and FeCl2·4H2O (2.25 g, 11.3 mmol) were dissolved in 400 mL deionized water under a nitrogen atmosphere with vigorous stirring (1000 rpm) at 80 °C. Then, 20 mL of 25% (w/w) ammonia solution was added to the solution. The color of the bulk solution immediately changed from orange to black. After stirring the mixture for 5 min, the Fe3O4 NP precipitates were obtained via magnetic decantation and washed three times with deionized water. Finally, the Fe3O4 NPs were dried in a vacuum oven at 70 °C for 12 h. The surface of the Fe3O4 NPs was modified by coating with the newly designed modified thiophene monomers via oxidative polymerization, with the generation of ferric cations on the Fe3O4 NPs' surface [54]. Fe3O4 NPs (1 mmol, 0.235 g) were dispersed in polyvinyl alcohol (PVA) aqueous solution (0.001 M).
Later, 3TArH (3.64 g, 10 mmol) was added into the mixed solution with vigorous stirring. Subsequently, 30 mL of HCl (0.5 M) solution were introduced into the mixture. Then, the products obtained were dried in a vacuum oven at 70 °C for 12 h. Experiments were repeated using freshly distilled thiophene monomer (10 mmol, 0.84 g). Solid Phase Extraction Optimization and Reusability Studies Factors affecting the extraction efficiency of the proposed method, such as type of adsorbent, pH, extraction time, sample volume, elution solvent, elution solvent volume, desorption time, adsorbent dosage and effect of NaCl, were studied. All of the experiments were performed in triplicate, and the means of the results were used in plotting the optimization curves. The reusability of the adsorbent was determined under the optimized conditions for up to five cycles. The adsorbent was recycled after being washed with methanol and water and dried in vacuum at 70 °C for 12 h. Scheme 1. Synthesis pathway for (phenyl-(4-(6-thiophen-3-yl-hexyloxy)-benzylidene)-amine) (3TArH). Analytical Performances and Real Sample Analysis In order to evaluate the figures of merit of the proposed technique, linearity, the limit of detection (LOD), the limit of quantitation (LOQ) and repeatability were investigated under optimized conditions. The linearity was analyzed through the standard curves ranging from 0.1-50 µg·L−1, prepared in triplicate by diluting appropriate amounts of the phthalate stock solution (1000 mg·L−1) with methanol.
The calibration curves were prepared using 10 spiking levels of the analytes. For each level, three replicate experiments were performed. To evaluate the reliability of the proposed method for the extraction of the plasticizers from real samples, two real samples were selected, spiked and subjected to the MSPE-GC-FID analysis. The two real samples were commercial bottled mineral water and bottled fresh milk. Figure 2 shows several additional peaks in the spectra of the nanocomposites compared to the MNP spectrum, which might be due to the surface functionalization. The strong absorption peaks around ~3400 cm−1 for MNP and all nanocomposites indicated the presence of OH vibration, while the peak at 530-632 cm−1 corresponds to Fe-O stretching modes [58]. The C-H aromatic stretching peak was observed for all nanocomposites, at 3000 cm−1 for MNP@PTh and 2980 cm−1 for MNP@P3TArH. C-H sp3 stretching (hexyl aliphatic side chain) occurred at 2934 cm−1 for MNP@P3TArH. Schiff base (C=N) peaks were observed at 1674 and 1685 cm−1 for MNP@P3TArH [59]. C=C aromatic symmetric and asymmetric absorption bands in the range of 1573-1461 cm−1 were observed for both nanocomposites. Two absorption bands at 1250 and 1072 cm−1 indicated the presence of C-O in MNP@P3TArH. Hence, the FTIR study clearly revealed that the MNPs prepared had been successfully functionalized. The XRD patterns of the nanocomposites retained the characteristic reflections of Fe3O4 (e.g., the (440) plane) [61]; this showed that the surface functionalization does not change the crystalline phase of the MNPs [62].
The BET surface area was measured using the multipoint BET method, within the relative pressure (P/P0) range of 0.05-1. As described in Figure S5 (Supplementary Material), the MNPs and all nanocomposites display an H3-type hysteresis loop, based on the Brunauer-Deming-Deming-Teller (BDDT) classification, demonstrating the existence of mesopores with pore diameters between 2 and 50 nm [63]. The pore size and BET surface area of the MNPs and nanocomposites are tabulated in Table 1. The reduction in the pore size of the nanocomposites is due to the addition of polymers on the surface. Meanwhile, the increase in the surface area could be because of the dispersity of the particles, which results from the enhancement of the spaces between them [64,65]. Morphological analysis of the synthesized products was performed using TEM techniques. As shown in Figure 4, TEM images of all materials demonstrated a spherical shape. From the images, we could clearly observe the good dispersion of the functionalized nanoparticles (MNP@P3TArH) in the TEM image. For instance, before polymerization, the magnetic nanoparticles were highly agglomerated with each other. After polymerization of MNP with 3TArH, they showed lower agglomeration, and the nanocomposite became well dispersed. The dispersity of the nanocomposite influenced its surface area, as evidenced by the BET result of MNP@P3TArH, which is higher compared to MNP@PTh and MNP, as tabulated in Table 1.
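For readers unfamiliar with the multipoint BET evaluation mentioned above, a small worked sketch is given below (Python/NumPy). It applies the standard linearized BET equation over the usual linear range of relative pressures; the isotherm points are hypothetical and are not the data of this study:

```python
import numpy as np

N_A = 6.022e23        # Avogadro constant (1/mol)
SIGMA_N2 = 0.162e-18  # cross-section of an adsorbed N2 molecule (m^2)
V_STP = 22414.0       # molar volume of an ideal gas at STP (cm^3/mol)

def bet_surface_area(p_rel, v_ads):
    """Multipoint BET surface area (m^2/g) from an N2 adsorption isotherm.
    p_rel: relative pressures P/P0 within the linear BET range (~0.05-0.30)
    v_ads: adsorbed volume at STP in cm^3 per gram of sample."""
    p_rel, v_ads = np.asarray(p_rel, float), np.asarray(v_ads, float)
    y = 1.0 / (v_ads * (1.0 / p_rel - 1.0))   # linearized BET transform
    slope, intercept = np.polyfit(p_rel, y, 1)
    v_m = 1.0 / (slope + intercept)           # monolayer capacity (cm^3/g STP)
    return v_m * N_A * SIGMA_N2 / V_STP

# hypothetical isotherm points (illustration only)
p = [0.06, 0.10, 0.15, 0.20, 0.25, 0.30]
v = [18.5, 20.1, 21.8, 23.3, 24.9, 26.6]
print(round(bet_surface_area(p, v), 1), "m^2/g")
```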
The magnetic properties of the samples were recorded at room temperature with an external field of ±15 kOe. Important magnetic variables, such as the saturation magnetization (M_S), were evaluated. The saturation magnetization (M_S) of the MNPs was 69.2 emu·g−1. After surface functionalization, the magnetization of MNP@PTh and MNP@P3TArH was reduced to 65.3 and 61.5 emu·g−1, respectively. The magnetization decrease signified the presence of a dead magnetic layer on the surface of the nanocomposites [58]. Although the magnetization declined, the values are still within the acceptable range, which suggests that the nanocomposites can be applied as MSPE sorbents [66]. Type of Adsorbent Hypothetically, the adsorption of phthalates is based on hydrophobicity and π-π dispersion [67]. To verify that the structural architecture influences the adsorption of phthalates, three different types of sorbents, namely naked magnetic nanoparticles (MNP), MNP@PTh and MNP@P3TArH, were tested. As seen in Figure 5, MNP resulted in an insignificant peak area for all of the analytes studied. After the introduction of polythiophene derivatives on the surface of MNP, the peak areas of the phthalates increased. The presence of aliphatic and aromatic groups in MNP@P3TArH promotes π-π dispersion and hydrophobic interactions with the phthalates. As evidenced, butyl benzyl phthalate (BBP) is more prone to the adsorbent with more aromatic sites, as in MNP@P3TArH, compared to the other adsorbents. Besides, the high surface area of MNP@P3TArH also contributes to the increased extraction performance. Since MNP@P3TArH demonstrated the highest peak areas for all analytes studied, it was selected for further MSPE optimization. Sample pH To study the influence of the surface charge of the adsorbent/adsorbate on the extraction process, experiments were performed under different pH conditions, ranging from pH 2-9. As shown in Figure 6a, the peak areas for the phthalates increase as the pH rises from 2 to 7, but decline from 8 to 9. At low pH, the C=N and alkoxy groups in P3TArH are protonated, making the adsorbent surface positively charged. At pH < 7, phthalates hydrolyze to phthalic acid, whose nucleophilic carbonyl group reacts with hydrogen ions in the aqueous solution, producing positive charges. Because both the adsorbate and the adsorbent acquire positive charges, electrostatic repulsion occurs and retards the adsorption performance [68]. Under basic conditions, the adsorbent surface becomes negatively charged, while the adsorbate hydrolyzes to phthalate anions, reducing the extraction efficiency [69].
Thus, in neutral pH, the extraction increased due to the absence of electrostatic repulsion that disturbed the extraction capability. As the optimum performance was demonstrated at pH 7, this pH was selected for all of the experiments. Extraction Time It has been understood that prolonged extraction time might increase the recovery of analytes. Thus, the influence of extraction time on the recoveries of the analyte has been investigated. As demonstrated in Figure 6b, the peak area increased rapidly for the first 20 min, since more adsorption sites were available and phthalates could easily interact with these sites. After 30 min, the peak area was almost persistent; therefore, 30 min was sufficient to extract the maximum of the target analytes. In order to ensure that the extraction time was satisfactory, further experiments were carried out until 90 min, and they were found to be constant. Desorption Studies The elution solvent is one of the crucial parameters to be considered. In order to determine the best elution solvent, the solvent must be able to elute all of the analytes that were retained from the adsorbent in a small volume [70]. Six eluting solvents with dissimilar polarities, namely hexane, toluene, diethyl ether, acetonitrile, methanol and ethyl acetate, were studied. As evidenced in Figure 7a, polar solvents (acetonitrile, methanol and ethyl acetate) were the best solvents, with high peak areas compared to non-polar solvents (hexane, toluene and diethyl ether), since phthalates contain a polar carbonyl group [71]. Among the polar solvents, ethyl acetate showed high solvent strength, since it gave the maximum peak area for the phthalates studied and was thus selected to be the eluent. The volume of ethyl acetate was tested from 0.1 mL-2.5 mL. As observed in Figure 7b, the peak area increased from 0.1 mL and remained constant after 0.5 mL. This showed that 0.5 mL may accommodate the maximum phthalates extracted from the sorbent. Further, desorption time was optimized to investigate the best time taken for the analytes to desorb from the sorbent ranging from 0-12 min. As revealed in Figure 7c, analytes were desorbed rapidly in the first 4 min and started to become linear after 10 min. This indicated that 10 min of time are sufficient to desorb back all of the analytes from the adsorbent. As for the case of BBP, desorption was found to be slower than other phthalates. This could be due to the presence of an additional aromatic ring in BBP, which makes it less polar to the eluent (ethyl acetate). After 6 min of desorption, most of the phthalates had reached near to equilibrium, whereas BBP was desorbed steeply after 6 min until it reached equilibrium at 10 min. Mass of Adsorbent Investigation of the adsorbent amount was executed in the range of 1-25 mg. As exposed in Figure 8a, the extraction peak area increased up to 10 mg, but decreased later with a further increase of the adsorbent. Increasing the adsorbent amount provides more active sites for the adsorption of target analytes. However, a high amount of adsorbent at a specific volume has weakened elution efficiency [30]. It is shown that this adsorbent only required a small amount of adsorbent to remove phthalates efficiently, which added the advantage of economic value. Therefore, for further experiments, the adsorbent amount of 10 mg was applied. Sample Loading Volume The effect of sample volume was investigated by the extraction of the phthalates ranging from 5-100 mL and shown in Figure 8b. 
Each sample was spiked with 10 mg·L−1 of the analytes and extracted with 10 mg of adsorbent. As can be seen, the peak area increased up to 20 mL and then decreased towards 100 mL. A 20-mL sample volume gave the most efficient extraction. Increasing the sample volume increases the ratio of aqueous phase to adsorbent, which lowers the amount of adsorbent per unit volume of sample solution and makes the extraction less effective [72]. Thus, a 20-mL sample volume was chosen as the optimized sample volume. Effect of NaCl The addition of salt to the sample matrix affects the extraction efficiency. Thus, NaCl concentrations ranging from 0 to 25 g·L−1 were studied. As observed in Figure 8c, the peak areas of the studied analytes increased from 0 to 15 g·L−1, but decreased from 20 to 25 g·L−1. The initial increase can be attributed to the salting-out effect: added salt increases the ionic strength and decreases the solubility of the analytes in the aqueous medium. However, as the salt concentration increases further, the diffusion rate of the analytes may be reduced because the solvation cage of the analytes is disturbed [51]. Since a 15 g·L−1 NaCl concentration gave high peak areas for all analytes studied, it was chosen for subsequent experiments. Reusability Studies To investigate the possibility of reusing and regenerating the sorbent, a reusability test was carried out on Fe3O4@P3TArH, which was recycled after being washed with methanol and water and dried under vacuum at 70 °C for 12 h. From Figure 9, it can be seen that after five repeated experiments the adsorbent was still active; the slight decline in performance may be due to some particles aggregating during the heat treatment applied after each cycle, which decreases the surface area. Analytical Performances and Real Sample Analysis The optimized conditions for the extraction of phthalates using MNP@P3TArH were: sample at pH 7, 30 min extraction time, ethyl acetate as the elution solvent, 500 µL elution solvent volume, 10 min desorption time, 10 mg adsorbent dosage, 20-mL sample loading volume and a 15 g·L−1 NaCl concentration. To validate the proposed method, linearity, limit of detection (LOD), limit of quantitation (LOQ) and repeatability were evaluated under the optimum conditions. The analytical figures of merit are tabulated in Table 2. The calibration curves obtained for the studied phthalates were linear over the range of 0.1-50 µg·L−1 with R2 greater than 0.99. According to the U.S. EPA standard, the screening of phthalates in drinking water must be carried out at concentrations above 0.6 µg·L−1 [4]. The LOD of our method lies within the range of 0.080-0.468 µg·L−1, indicating its suitability for efficient phthalate monitoring. Repeatability studies were conducted for inter-day precision (three consecutive replicates on each of three days) and intra-day precision (seven consecutive replicates on the same day). The results were expressed as relative standard deviations (RSD%). The method demonstrated good precision, since the RSD values were in the range of 3%-5% [73]. A comparison of the analytical performance of the proposed method with other developed methods is shown in Table 3. The extraction of phthalates using MNP@P3TArH clearly provides good sensitivity and repeatability.
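As a minimal sketch (not the authors' code) of how the figures of merit reported in Table 2 are commonly obtained from calibration data, the following Python fragment fits a calibration line, estimates the LOD and LOQ from the residual standard deviation and the slope (one common convention; 3σ/S is also used), and computes an intra-day RSD. The concentrations and peak areas are hypothetical placeholders.

import numpy as np
from scipy import stats

conc = np.array([0.1, 0.5, 1, 5, 10, 25, 50])          # spiked level, ug/L (assumed)
area = np.array([12, 58, 119, 600, 1205, 2980, 6010])   # GC-FID peak areas (hypothetical)

fit = stats.linregress(conc, area)
residual_sd = np.std(area - (fit.intercept + fit.slope * conc), ddof=2)

lod = 3.3 * residual_sd / fit.slope    # limit of detection, one common convention
loq = 10.0 * residual_sd / fit.slope   # limit of quantitation

replicates = np.array([601, 588, 612, 595, 607, 590, 603])  # intra-day areas (hypothetical)
rsd = 100 * replicates.std(ddof=1) / replicates.mean()

print(f"R^2 = {fit.rvalue**2:.4f}, LOD = {lod:.3f} ug/L, LOQ = {loq:.3f} ug/L, RSD = {rsd:.1f}%")

The recoveries for the spiked real samples discussed below would analogously be computed as 100 x (found - native) / added.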
To endorse the reliability of the method using MNP@P3TArH, it was applied to determine phthalates in water from a mineral water bottle stored at room temperature and in commercial fresh milk. Figure 10 shows the chromatograms of commercial fresh milk, unspiked and spiked with phthalates. None of the targeted phthalates were found in the water samples under the optimized conditions described. To evaluate the matrix effect, all of the samples were spiked with 50 µg·L−1 of the phthalates studied. Recoveries and RSD (%) for all of the samples were determined and are shown in Table 4. From the optimization procedures through to the real sample analyses, DMP, DEP and DPP showed lower recoveries; this may be because lower-molecular-weight phthalates have a greater affinity for the aqueous solution than for the adsorbent [76]. As shown in the chromatogram of the water from the mineral bottle stored at room temperature (Figure S6, Supplementary Material), the recoveries obtained for the mineral bottle water were higher than those for the milk sample. This may be caused by a matrix effect that retains the analytes more strongly in the milk sample than in the water sample. RSD (%) values were found to be in the range of 1.3%-5.8%, which indicates a precise method. Conclusions MNP@P3TArH has been successfully synthesized, characterized and utilized as an MSPE sorbent coupled with GC-FID for the determination of selected phthalates. The optimized MSPE conditions were as follows: sample at pH 7, 30 min extraction time, ethyl acetate as the elution solvent, 500 µL elution solvent volume, 10 min desorption time, 10 mg adsorbent dosage, 20-mL sample loading volume and a 15 g·L−1 NaCl concentration. The stability and reusability studies suggested that MNP@P3TArH can be used for up to five cycles without significantly impacting its extraction capacity. The adsorbent covers a wide range of phthalates with a dynamic linear range of 0.1-50 µg·L−1 and limits of detection of 0.080-0.468 µg·L−1. The interactions (π-π and hydrophobic) between the sorbent and the target analytes increased the adsorption capability. The application of MNP@P3TArH as the MSPE sorbent was successfully demonstrated by the analysis of phthalate esters in mineral water and commercial fresh milk.
6,802.6
2016-04-28T00:00:00.000
[ "Chemistry" ]
Thermodynamic Potentials Theory Aspects in External Differential Forms Calculus Representation A thorough review of the basic theses of the calculus of exterior differential forms is presented. The potential of this mathematical discipline, which can describe the physical properties of dielectric materials, magnets and photonic materials under mechanical, thermal and electromagnetic influences more logically and objectively than traditional methods, is demonstrated. The methodological effectiveness of applying the differential forms of thermodynamic potentials to the description of the macroscopic properties of homogeneous mono- and polyvariant systems is demonstrated. Simple, fundamental relations, symmetric with respect to the choice of thermodynamic variables, have been obtained, demonstrating the benefits of the calculus of differential forms. Using the thermodynamics of Pfaffian forms, it is shown that applying the calculus of differential forms to the description of physical reality allows one to operate with physical concepts at a deeper level, based on fundamental physical and mathematical principles. Introduction The calculus of differential forms, created at the beginning of the XX century by E. Cartan, is one of the most fundamental and at the same time simple-to-use, flexible and fruitful mathematical methods in differential geometry and its applications [1][2][3]. The universality of its concepts and its methodological simplicity are factors that confirm the fundamentality of the theory of differential forms. In the opinion of many mathematicians [1][2][3][4], such traditional mathematical tools as vector, differential and integral calculus, which form the foundation of the usual mathematical apparatus of theoretical physics, are to a certain extent incomplete constructive transcriptions of the more fundamental mathematical constructions: exterior differential forms (see Appendix). The development of scientific thought has always striven for unification, simplicity and universality of physical concepts, which could be presented using fundamental operator symbolism that is easy and simple to operate with. Many thermal, mechanical, magnetic and electric properties of matter with mono- or polyvariant structure can be satisfactorily described in the language of thermodynamics, and a large number of macroscopic properties of matter have been accounted for in this way. The thermodynamic approach has proved successful from both fundamental and applied points of view. The methodology of thermodynamic potentials (also called characteristic functions) is widely used against the background of the standard thermodynamic language [5][6][7][8][9][10]. At the same time, many fundamental problems lack a proper explanation because of the restrictions of the traditional mathematical apparatus. In our opinion, using the calculus of exterior differential forms allows one to expand the field of application of the thermodynamic language, to look at the standard relations from a new point of view, and to consider them on a deeper scientific level. The authors believe that the mathematical apparatus applied in this article will take on a more concrete sense and be apprehended more adequately once the thermodynamic axioms and laws are considered both with the direct differential calculus [5][6][7][8][9][10] and, at the same time, with the basic theses of the calculus of exterior differential forms [1][2][3][4].
Such an approach allows a deeper look at the laws of thermodynamics from the viewpoint of abstract vector analysis and its geometrical images, which reveal the nature of physical reality from a more fundamental side and are described in mathematical physics by the concepts of exterior multiplication and exterior differentiation (see Appendix). The motivation for using exterior differential forms is dictated by the effectiveness of this methodology, meaning its fundamentality and simplicity of application. This article confirms these initial principles by demonstrating the simplicity of obtaining already known results and by providing a conceptual scheme for obtaining new ones. Thermodynamic Potentials in the Calculus of External Differential Forms Let us consider a simple, homogeneous system placed in an external constant electric or magnetic field. Its thermodynamic properties are investigated using the theory of Pfaffian forms of the potentials, defined on manifolds of consistent generalized thermodynamic forces and coordinates. We emphasize that the required functional relationship can be represented in another form for other problem conditions [5,9,10]. In the calculus of differential forms, the thermodynamic functions are by themselves 0-forms. The action of the exterior differentiation operator $\tilde{d}$ on a 0-form transforms it into a 1-form. This operator is similar to the usual differentiation operator, but has some special features (see Appendix). Each potential, after formal substitution of the operator $\tilde{d}$ for the ordinary differential operator $d$, produces the corresponding Pfaffian form of that potential [1][2][3][4][5][6][7][8]. In particular, the exterior differential of the internal energy for the purely thermal and mechanical variables is $\tilde{d}U = T\,\tilde{d}S - P\,\tilde{d}V$, supplemented by the electric and magnetic work terms when the corresponding fields are present. Here $\tilde{d}U$ is the counterpart of the usual differential, and the partial derivatives of $U$ are the generalized forces. Acting with the operator $\tilde{d}$ on the relevant 0-forms, we obtain the 1-forms of the thermodynamic potentials (relations (1)-(4)); everywhere, summation over repeated indices is implied. Note that the thermodynamic potentials are full (exact) differentials in the sense of the usual differential calculus; therefore the corresponding differential form, in the sense of the exterior differential calculus, is closed (exact) [1,2]. Hence, using the basic property of the double application of the operator $\tilde{d}$, we apply the exterior differential operator to relations (1)-(5) again, so that the resulting forms vanish. Taking into account the anticommutation rules, $\tilde{d}S\wedge\tilde{d}T = -\tilde{d}T\wedge\tilde{d}S$ and so on, we obtain the basic equation of the theory of thermodynamic potentials in the exterior differential calculus (equation (6)): a vanishing sum of exterior products of the differentials of conjugate force and coordinate pairs, such as $\tilde{d}T\wedge\tilde{d}S$, $\tilde{d}P\wedge\tilde{d}V$ and their electric and magnetic analogues. Note that this equation has a balanced form, symmetric with respect to the differentials. Based on the rules of exterior differentiation, from the basic equation (6) we can easily obtain all the known thermodynamic relations between the characteristic thermodynamic quantities describing the macroscopic properties of the material. These relations are traditionally derived from the Pfaffian forms of the characteristic functions (the corresponding thermodynamic potentials) [5][6][7][8]. For example, let us consider only the thermal and mechanical variables in (6)
(i.e., assuming that the electric and magnetic contributions vanish), which yields the 2-form equation (7): $\tilde{d}T\wedge\tilde{d}S - \tilde{d}P\wedge\tilde{d}V = 0$. The 0-forms for the temperature and pressure are defined on the manifold (basis) of variables (S, V): $T = (\partial U/\partial S)_V$ and $P = -(\partial U/\partial V)_S$ (relations (8)). From the 0-forms (8) we obtain the 1-forms (9) and (10). Substituting (9) and (10) into (7) and taking into account the properties of forms (in particular, anticommutation), we arrive at relation (11), from which the well-known Maxwell relation (12) follows: $(\partial T/\partial V)_S = -(\partial P/\partial S)_V$. Using the technique of Jacobians [5,6], relation (12) can be written either in the form (13) or as the calibration ratio (14): $\partial(T,S)/\partial(P,V) = 1$. Using the pairs of variables (S, P), (T, V) and (T, P) as bases, after transformations similar to those demonstrated above with the technique of the exterior differential calculus, one obtains the corresponding Maxwell relations, which can all be reduced to the calibration (14) by the Jacobian technique. To analyse a homogeneous system placed in an external field, one should consider an appropriate combination of the paired members of (6), i.e. the corresponding 2-form. For example, to examine the thermal and mechanical properties together in the general case (in the presence of electric and magnetic fields), one should consider the 2-form obtained from (6). The simplest forms, and the most accessible to experimental verification, are four-dimensional 2-forms (forms of second degree in R^4). For example, we can explore the 2-form obtained from (6) that describes the mechanical behaviour of a dielectric in an electric field and of a magnetic material in a magnetic field (equation (15)). Similarly, we can consider only the thermal properties of the dielectric and of the magnetic material on the basis of the corresponding 2-forms (equations (16) and (17)). We consider the best-known relations for the dielectric (15) and the magnet (16) in the isotropic case. Solving equation (15) proceeds similarly to the operations (9)-(14) used to solve (7), selecting the bases in turn. Using the calibration (19) and the Jacobian technique [5,6], one can obtain any ratio between the characteristic coefficients for the given thermodynamic variables and field conditions. For example, multiplying (19) by a suitably represented unit, which can formally be regarded as a fraction, one obtains relation (20); after obvious transformations it can be written in the traditional form. For the study of the mechanical properties of a magnetic material (see (16)) in the adiabatic or isothermal case ($\tilde{d}T\wedge\tilde{d}S = 0$), the calibration (21) is obtained, and on its basis relations (22) follow. These relations (Maxwell identities) can also be found by the standard thermodynamic approach from the condition of exactness of the differentials [5][6][7][8]. Obviously, relations (22) define the volume change caused by electric and magnetic fields, respectively; the latter are connected with electroelastic effects. In the absence of external fields ($E = 0$ and $H = 0$), the electric and magnetic effects caused by elastic forces are called piezoelectric and piezomagnetic effects, respectively [5]. Remarks Let us make a remark about the methodology of thermodynamic potentials. The Maxwell relations, obtained in the standard Pfaffian-form calculus as a consequence of the equality of the mixed derivatives of the characteristic functions, usually connect quantities describing the mechanical, thermal and other properties of the system [5,6,8]. The establishment of such relations is the content of the method of thermodynamic potentials. For example, the derivatives of the thermodynamic potentials with respect to the variables T, S, P, V determine the thermal, adiabatic, isochoric, isobaric and caloric parameters of the system, characterizing its thermal and mechanical properties. The relationships between these parameters can be determined on the basis of different potentials.
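As a compact illustration of the derivations referred to above, the following worked example (an illustrative reconstruction using the standard sign conventions assumed in this section, not a formula taken verbatim from the original manuscript) shows how the Maxwell relation (12) and the calibration (14) follow from the vanishing of the second exterior differential of the internal energy:

\[
\tilde{d}U = T\,\tilde{d}S - P\,\tilde{d}V
\quad\Longrightarrow\quad
0 = \tilde{d}(\tilde{d}U) = \tilde{d}T\wedge\tilde{d}S - \tilde{d}P\wedge\tilde{d}V .
\]
\[
\text{Expanding } T=T(S,V),\ P=P(S,V):\qquad
\left(\frac{\partial T}{\partial V}\right)_{\!S}\tilde{d}V\wedge\tilde{d}S
-\left(\frac{\partial P}{\partial S}\right)_{\!V}\tilde{d}S\wedge\tilde{d}V = 0,
\]
\[
\text{hence}\qquad
\left(\frac{\partial T}{\partial V}\right)_{\!S}
= -\left(\frac{\partial P}{\partial S}\right)_{\!V},
\qquad\text{equivalently}\qquad
\frac{\partial(T,S)}{\partial(P,V)} = 1 .
\]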
At the same time, a thermodynamic potential satisfies the condition of exactness of the differential, and is a true characteristic function, only in its own variables. The calibration relations have a certain universal character and in the majority of cases are invariant under a change of variables. A violation of the calibration is an indication of anomalous properties of the matter at the corresponding points of the space of thermodynamic variables (for instance, in water [5,6]). Additionally, we give a brief summary based on the provisions of [1][2][3], in order to reveal more fully the meaning of the operations used in the paper, noting the fundamental provisions of the calculus of differential forms and the obvious comparisons. Concerning the comparisons, note the following. In three-dimensional space R^3 the operation of exterior multiplication may be associated with the vector (cross) product of standard vector calculus. Correspondingly, the exterior differentiation operator $\tilde{d}$, acting on a 1-form (p = 1) in three-dimensional space, is associated with the curl (rotor) of a vector field. The fundamental properties of the operations $\tilde{d}$ and $\wedge$ are the following. The operator $\tilde{d}$ converts a form into another form, raising its degree by one; this is its main property. Thus if $\varphi$ is a form, then $\tilde{d}\varphi$ is a form too, and its degree is one unit higher. The integral-differential rule, which holds for forms of degrees 0 and 1, holds for higher degrees as well. In a one-dimensional space (n = 1) the operator $\tilde{d}$ turns a 0-form (p = 0) into a 1-form, for which the basic formula of standard integral calculus is valid. In the calculus of differential forms, the action of the operator $\tilde{d}$ on forms of higher degree is similar to its action on a 0-form. The provisions of the calculus of differential forms listed in the article have been applied to the thermodynamic potentials and their differentials, which can be regarded as 0-forms and 1-forms, respectively. At the same time, the second exterior differentials of these functions, in accordance with the general properties of the operator $\tilde{d}$, vanish [1,2]. Conclusions This paper gives a visual representation of the fundamental nature of vector calculus, which is essentially based on the calculus of exterior differential forms. As an example of the required formalism, the methodology of Pfaffian forms used in the theory of thermodynamic potentials was chosen. The mathematical simplicity of using forms and the efficiency of obtaining physical results have been shown. The article demonstrates the fundamental relationship between the calculus of exterior differential forms and the principles of abstract vector analysis. This apparatus is a generalization of standard differential and integral calculus. The article shows the potential of using exterior differential forms for analysing the influence of electromagnetic fields on condensed media, composite materials and compound high-molecular systems, in particular photonic materials. The methodological peculiarities of the mathematical apparatus, proving its fundamentality and hence its academic necessity and practical expedience, have been demonstrated. The authors believe [11] that, owing to the fundamental nature of the apparatus of exterior differential forms and the application of concepts based on geometrical principles adequate to the nature of the physical reality being described, the calculus of differential forms as a method of studying this reality will in the near future become, as projected in the literature cited in the article, a necessary fundamental mathematical tool in the arsenal of investigators taking steps towards understanding the laws of nature.
Basic Theses To refine the material used, following [3], let us consider a polylinear (multilinear) form. A special case is a polylinear skew-symmetric form, which can be represented by an expansion over a given basis. Consider also the polylinear form that is simply a product of two skew-symmetric forms of degrees p and q. Here $\sigma a$ is the function of $p+q$ vectors obtained from the function $a$ defined above by the permutation $\sigma$ of its arguments; $\mathrm{sgn}(\sigma)$ is 1 if the permutation is even and $-1$ if it is odd. As a simple example of the exterior product, the product of two linear 1-forms gives a bilinear 2-form. The exterior product of a 1-form and a q-form (q > 1) is a form of degree q + 1. By $T^*$ we denote the dual space (analogous to the direct and reciprocal spaces in solid state physics), that is, the vector space of linear forms on the space $T$ [1][2][3]. For every p, the exterior p-forms combine into a real vector space. Concrete Applications From the methodological point of view, the formalism of exterior differential forms is much simpler than that of vector calculus [1][2][3]. By definition, a differential form of degree p in an n-dimensional Euclidean space is an infinitely differentiable skew-symmetric function of a point of that space and of the differential symbols $\tilde{d}x = (\tilde{d}x^1, \tilde{d}x^2, \ldots, \tilde{d}x^n)$, which behave as vectors [3]. The algebra of differential forms is formalized by the rules of exterior differentiation $\tilde{d}$ and exterior (anticommutative) multiplication $\wedge$. This algebra is simpler and at the same time more effective and more fundamental than vector analysis [1][2][3]. The set of all forms of any degree, with the operation of exterior multiplication between them, is defined as the Grassmann algebra [1][2][3]. For forms $\varphi$ and $\psi$ of degrees p and q the commutation rule $\varphi\wedge\psi = (-1)^{pq}\,\psi\wedge\varphi$ holds. The action of the operator $\tilde{d}$ on forms of higher degree is analogous to its action on 0-forms. Exterior differentiation raises the degree of a form by one (if $\varphi$ is a p-form, then $\tilde{d}\varphi$ is a (p+1)-form). If $\tilde{d}\varphi$ is an exact form [1,2] (i.e. $d\varphi$ is a full differential in the usual differential calculus), then double exterior differentiation leads to vanishing: $\tilde{d}(\tilde{d}\varphi) = 0$. The rules of exterior differentiation are similar to those of usual differentiation, taking into account the anticommutative properties of the operation $\wedge$; the linearity of exterior differential forms also follows ($\lambda_1, \lambda_2$ being numbers). Representations of a 0-form exist for any dimension of space. A form of degree 1 (a 1-form) is a linear combination $\omega = \sum_i a_i(x)\,\tilde{d}x^i$; in particular, for n = 1 we have the linear differential form $a(x)\,\tilde{d}x$. A form of degree 2 (a 2-form) is a sum of terms $a_{ij}(x)\,\tilde{d}x^i\wedge\tilde{d}x^j$. In particular, for the minimal space dimension n = p = 2, the corresponding determinant is equal to the area element defined on the vectors $\tilde{d}x^1, \tilde{d}x^2$. For n = p = 3, with variables $(x^1, x^2, x^3)$, we obtain the volume element corresponding to the vectors $\tilde{d}x^1, \tilde{d}x^2, \tilde{d}x^3$, which is similarly equal to a determinant; the 3-form is written accordingly. Let us specify the formalism of differential forms for the case of a vector field. In this case, recall that the exterior differential $\tilde{d}\omega$ of a linear differential form $\omega$ of degree p is defined by the corresponding relation. We point out the rules which define the action of the exterior differentiation operator for a fixed degree p of the form and dimension n of the space. The action of the operator $\tilde{d}$ on a 0-form (p = 0) defined in the one-dimensional space R^1 gives a 1-form (p = 1). That is, in R^1 the operator $\tilde{d}$ increases the degree of the form, while the dimension of the space on which the form is defined remains invariant. The action of the operator $\tilde{d}$ on a 0-form defined in the n-dimensional space R^n likewise gives a 1-form, namely a linear combination of n differential terms. Differential forms of higher degrees (p > 1) are generated either by exterior multiplication of forms of lower degrees, or by the action of the exterior differentiation operator on a form whose degree is lower by one. For example, consider a 1-form in R^3, as sketched below.
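The concluding example of the original manuscript is not recoverable from the source text; the following standard illustration (an assumption on our part, chosen to be consistent with the curl correspondence mentioned above) shows how exterior differentiation of a 1-form in R^3 reproduces the components of the rotor of a vector field:

\[
\omega = P\,\tilde{d}x + Q\,\tilde{d}y + R\,\tilde{d}z ,
\]
\[
\tilde{d}\omega
= \left(\frac{\partial R}{\partial y}-\frac{\partial Q}{\partial z}\right)\tilde{d}y\wedge \tilde{d}z
+ \left(\frac{\partial P}{\partial z}-\frac{\partial R}{\partial x}\right)\tilde{d}z\wedge \tilde{d}x
+ \left(\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}\right)\tilde{d}x\wedge \tilde{d}y ,
\]

so the coefficients of $\tilde{d}\omega$ are exactly the components of $\nabla\times(P,Q,R)$.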
3,603.6
2017-07-27T00:00:00.000
[ "Physics" ]
Hobo elements and their deletion-derivative sequences in Drosophila melanogaster and its sibling species D simulans, D mauritiana and D sechellia Hobo elements are a family of transposable elements found in Drosophila melanogaster which present a specific deletion-derivative element (Th) in the majority of the current natural populations. The present data resolve the Th element into two elements, Th1 (1510 bp) and Th2 (1490 bp), and specify the regions within which the deletion breakpoints lie. Hobo-homologous sequences were analysed in the sibling species D simulans, D mauritiana and D sechellia. In the full-sized element a PvuII site was shown in D simulans and D mauritiana, as it exists in D melanogaster, but was not observed in D sechellia. INTRODUCTION Hobo elements are a family of transposable elements which can be mobilized within the germline of Drosophila melanogaster. In this species, strains containing hobos may have 3.0 kb complete elements and numerous smaller derivatives of the element (Streck et al, 1986; Yannopoulos et al, 1987; Blackman and Gelbart, 1988; Louis and Yannopoulos, 1988). Molecular analyses have revealed the presence of a specific deletion-derivative element, the Th element, in all current strains of D melanogaster examined throughout the Eurasian continent (Periquet et al, 1989a). D melanogaster is not the only species in which hobos have been found. Homologous sequences have been detected in the sibling species D simulans and D mauritiana, which contain what appear to be complete copies in addition to several internally deleted sequences (Streck et al, 1986). In this paper, the presence and the pre-eminence of specific deleted-derivative sequences in each of the four sibling species D melanogaster, D simulans, D mauritiana and D sechellia are reported. Their structure is analysed and the maintenance of the activity of hobo elements during evolution is discussed. MATERIALS AND METHODS The species and the tested strains of Drosophila originated from our collection of flies sampled on the Eurasian continent for D melanogaster, and from the collection of the BGE and CGM Laboratories of the CNRS at Gif/Yvette for D simulans, D mauritiana (163.1) and D sechellia (228). Standard techniques were used for DNA extraction, gel electrophoresis, blotting, hybridization and ligation (Maniatis et al, 1982). Genomic DNAs were digested either by XhoI, to show the presence of the 2.6-kb fragment characteristic of potentially complete hobo elements, or by the double digest BamHI plus BglII, which do not cut the hobo element, in order to obtain an approximation of the total number of elements. Other enzymes were subsequently used to search for the presence of the corresponding restriction sites in the specific deletion-derivative elements. DNA samples were run on either 0.7% to 1.2% agarose gels or on 4% agarose NuSieve gels, according to the size of the fragments analysed, and blotted onto Hybond-N membranes. Hybridizations were carried out overnight at 65°C in 5 x SSC, 10 x Denhardt's solution, 0.1% SDS with the 32P-labelled 2.6-kb XhoI fragment obtained from the pHcSac plasmid (Stamatis et al, 1989). Filters were washed for 40 min at 65°C in 3 x SSC, then for 2 x 20 min at 65°C in 1 x SSC or 0.5 x SSC. In this way the procedure promotes and maintains DNA hybrids between probe and target when the two have a sequence similarity of 95% or more.
RESULTS Our analysis is based on knowledge of the complete hobo element from D melanogaster, for which a restriction-enzyme map is shown in figure 1. To test for hobo sequences, XhoI digests of genomic DNA from strains of the different species were probed with the XhoI fragment from the complete hobo element contained in the plasmid pHcSac. With this combination, full-sized hobo elements produce a 2.6-kb fragment with homology to the probe. A defective element with an internal deletion spanning the region between the two XhoI sites will produce a fragment smaller than 2.6-kb. On the other hand, elements having an insertion sequence between these two sites, or having lost one (or both) XhoI sites, will generally give a fragment larger than 2.6-kb. With this approach we are able to assess the intactness of the XhoI fragment, but of course not the left and right ends of the elements. Finally, operating under the assumption that the restriction sites in hobo sequences from other species are not dramatically different from those found in the cloned hobo 08 element of D melanogaster (Streck et al, 1986), this approach allows the investigation of the hobo sequences present in the sibling species. Hobo sequences in D melanogaster and sibling species Southern blot analyses of genomic DNA, digested by BamHI plus BglII, from D melanogaster, D simulans, D mauritiana and D sechellia confirm the presence of hobo sequences in these sibling species (Streck et al, 1986; Daniels et al, 1990). As these enzymes do not cut the hobo melanogaster sequence, they allow a rough estimation of the total number of hobo sequences in the tested strains. These values range from 25 to over 30 for D melanogaster and D simulans H strains, from 15 to 20 for D mauritiana, and are about 25 for D sechellia (data not shown). Figure 2 shows the results for genomic DNA samples of the different species, digested by XhoI and subjected to Southern blot analysis with the 2.6-kb hobo probe. In D melanogaster, hobo-containing strains (lanes 1 to 3) show the presence of the 2.6-kb band corresponding to the putative full-size element, as well as several deletion-derivative elements. Two of the latter are frequently found in the different D melanogaster strains. The Oh element (from the Oregon R strain, see also Streck et al, 1986) gives a 1.5-kb fragment and corresponds to a 1.9-kb element with an internal deletion of about 1.1-kb. The Th element (Periquet et al, 1989a) gives a 1.1-kb fragment and corresponds to a 1.5-kb element having an internal deletion of about 1.5-kb. In D simulans, hobo-containing strains (lanes 4 to 6) generally show the presence of a fragment comigrating with the 2.6-kb fragment of D melanogaster (lanes 4 and 6), although some strains may be devoid of this fragment (lane 5). Moreover, a characteristic deleted-derivative element is generally present in the recent strains studied (lanes 4 and 5). This fragment has been found in 10 strains collected from 1970 to 1990 in the Americas, Europe and Japan, but not in the African strain tested (lane 6). This h del sim element gives a fragment of 0.68-kb and corresponds to a 1.1-kb element having an internal deletion of about 1.9-kb. Finally, in D mauritiana and D sechellia (lanes 7 and 8), whose stocks are limited by the endemism of these species, fragments comigrating at the 2.6-kb level are also present, as well as characteristic fragments of deleted-derivative elements comigrating at the 0.73-kb level.
These fragments correspond to 1.1-kb elements with an internal deletion of about 1.9-kb. For the moment it is not possible to determine whether these specific deleted elements (called h del maur and h del sech) are identical or not. Analysis of the specific deleted-derivative elements Figure 3 shows the results for genomic DNA samples of various D melanogaster strains collected on several continents. These samples were digested by different restriction enzymes in order to test for pattern similarity. If the Th element were present in the different strains, the patterns would be identical. As expected, the patterns were the same but, since the DNAs were run on a 4% agarose gel in order to detect small fragments, in all cases a double band was present at the Th level. The Th element was therefore resolved into two elements (figure 3). By using different sets of enzymes and Southern blot analyses, a restriction map of these two elements was obtained, and their size was estimated by a logarithmic regression analysis of the band migration distances on the autoradiograms (a computational sketch of this estimation is given below). The results are summarized in figure 1, which also gives the data obtained for the Oh element. These elements are deleted in the central part of the sequence and have the following approximate sizes: Oh (1870 bp), Th1 (1510 bp), Th2 (1490 bp). In D simulans, D mauritiana and D sechellia similar analyses were performed to characterise the deleted-derivative elements. The results are summarized in figure 1. All these elements are also internally deleted and have an approximate size of 1130 bp (h del maur and h del sech) or 1080 bp (h del sim). The breakpoints of the internal deletion are different for D melanogaster, D simulans and D mauritiana. However, at the level of this restriction map, no difference has been detected between the deleted elements of D mauritiana and D sechellia. Analysis of the putative full-sized element All the preceding experiments revealed good conservation of the restriction sites of hobo elements from the sibling species of D melanogaster as compared to the sequence of the cloned hobo 08 element. However, Bazin and Williams (personal communication) have recently found a previously undescribed PvuII site, at position 2227, in a hobo element inserted at the vestigial locus of D melanogaster. This site is extremely frequent in all current populations of D melanogaster as well as in the functional hobo element of the pHFL1 plasmid (Periquet, unpublished data). DNA samples of the different species, digested alternatively by XhoI and by XhoI plus PvuII, were analysed by Southern blot. When the PvuII site is present, the 2.62-kb fragment is cut into two fragments of 1.94-kb and 0.68-kb respectively. The results (figure 4) show that the PvuII site is also present in the putative full-sized elements of D simulans (but not in the US strain El Rio) and D mauritiana. For D sechellia the results are less clear. The pattern difference between the last two lanes shows that a PvuII site is present in some hobo sequences, but the absence of bands at the 1.94- and 0.68-kb levels suggests the absence of this site in the putative full-sized element. These data pose the question of the fine structure of the hobo element in D sechellia. The presence of full-sized elements in the D melanogaster sibling species raises the problem of the functioning of these elements.
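The logarithmic regression used above to estimate fragment and element sizes from autoradiogram band positions can be sketched as follows. This is an illustrative reconstruction (not the authors' code), assuming the usual linear relationship between log(fragment size) and migration distance; the marker sizes and distances are hypothetical placeholders.

import numpy as np

# Size-standard bands: known fragment sizes (bp) and measured migration distances (mm).
marker_bp = np.array([3000, 2600, 1900, 1500, 1100, 680])
marker_mm = np.array([21.0, 23.5, 28.8, 33.0, 38.5, 47.0])

# Fit log10(size) as a linear function of migration distance.
slope, intercept = np.polyfit(marker_mm, np.log10(marker_bp), 1)

def estimate_size(distance_mm):
    """Return the estimated fragment size (bp) for a measured band distance."""
    return 10 ** (intercept + slope * distance_mm)

# Example: estimate the sizes of two closely migrating bands (hypothetical distances).
for d in (33.2, 33.4):
    print(f"band at {d} mm ~ {estimate_size(d):.0f} bp")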
In D simulans, the existence of different patterns of BamHI plus BglII restriction fragments among strains suggests differences in the number and location of hobo elements, which might be due to their mobility. In D mauritiana, an earlier experiment performed in order to obtain inter-specific hybrids between D simulans and D mauritiana proved meaningful for the present purpose. The cross involved a D simulans strain devoid of the 2.6-kb XhoI fragment and the present D mauritiana strain (fig 5). After 13 generations of free mass-mating, the hybrid flies were of the D simulans type, as is classically obtained in this type of inter-specific cross. DNA samples of these G13 flies, digested by XhoI, were analysed by Southern blot and showed the pattern of bands characteristic of the D simulans elements, plus the presence of a new 2.6-kb band (fig 5). This result strongly supports the hypothesis of the presence of functional transposable hobo elements in D mauritiana, which were able to be mobilized in the hybrid genome of the first generations of flies. DISCUSSION The severe conditions of stringency and the normal exposure used in our experiments confirmed the presence of extremely well conserved hobo sequences among D melanogaster and its sibling species (Streck et al, 1986; Daniels et al, 1990). The existence of specific deleted-derivative elements appears to be a feature of the hobo family, with the presence of a majority class in almost all current populations of the cosmopolitan species D melanogaster and D simulans. These internally deleted elements are different for each species, but in both cases they have lost the majority of ORF1 and are probably non-functional, which makes one wonder why they are present in such large quantities in these species. This may be due to the recurrent formation of specific deletions from the complete hobo element. The presence of the two Th1 and Th2 elements, which differ by about 20 bp, in numerous populations of D melanogaster shows that the mechanism of such recurrent deletions might be very precise and would implicate preferential breakage sites. On the other hand, the specific deleted-derivative elements might play a role in the regulation of the activity of the complete hobo element, as has been shown for the KP element in the P-M system (Black et al, 1987; Jackson et al, 1988). Consequently, their presence in many populations would imply a rapid spread of these non-recurrent Th elements, aided by a selective advantage. Although Oh, Th1 and Th2 are not present in the other three species, other deleted-derivative elements are found in these species. In any case, the massive presence of such deleted-derivative elements is also an argument in favor of the maintenance of active hobo elements in D simulans, and of the putative role of such derivative elements, well adapted to the genome of this species. These activities are also suggested for the hobo elements of D mauritiana and are corroborated by the high degree of similarity implied by our conditions of stringency. Contrary to the results presented by Daniels et al (1990), our D sechellia strain shows the presence of one fragment migrating at the 2.6-kb level, but the pattern of the other bands resembles Daniels'. The difference might be due either to a stochastic loss of this element in a derived sub-line of the 228 strain, or to an excision due to its mobility.
Moreover, the fact that hobo elements in D sechellia appear to present differences at sites which are common to the other three species could be related either to an ancient divergence of this species from the other ones, or to evolution by genetic drift in this island species. In any case, sequencing of the elements of the sibling species will be necessary to determine their fine structure and the relatedness between species. At the phylogenetic level, hobo sequences appear to be limited to the melanogaster and montium subgroups (Daniels et al, 1990). The present data corroborate the strength of the relatedness between the members of the melanogaster complex, as opposed to the weakness of, and the lack of information on, the relationships among the hobo-hybridizing sequences found in the montium subgroup. These authors suggest two hypothetical scenarios to account for the current distribution of hobo sequences in these subgroups. The first proposes a single introduction of hobo elements into the common ancestral lineage. The second proposes two introductions, one into the common ancestral lineage and another one specific to the melanogaster complex. When considering only the melanogaster complex and the existence of several D melanogaster strains devoid of almost all hobo elements (essentially among the oldest strains collected from natural populations), the presence of active hobo elements in all the current populations of this species poses the problem of the origin of hobo elements (Periquet et al, 1989b; Pascual and Periquet, 1991). As proposed, the active hobo element of D melanogaster may have originated either from internal recombination-reactivation of deleted hobo elements in D melanogaster itself, or by horizontal transfer from a foreign species. The present data show that the best candidate for such a transfer is D simulans, which is also a cosmopolitan species and is non-vicariant with D melanogaster. Clearly, sequence analyses of hobo elements from these two species will help in understanding the evolutionary history of hobo elements.
3,581.2
1990-12-15T00:00:00.000
[ "Biology" ]
A Comparison of a Smart City's Trends in Urban Planning before and after 2016 through Keyword Network Analysis : The aim of this study was to explore the keywords related to smart city concepts, and to understand their flow. This research used a keyword network analysis by collecting keywords from papers published on the web from Scopus, which is an international scholarly paper database. The data were collected from before and after 2016, and since the amount of data has been growing rapidly after global agreements such as the United Nations' Sustainable Development Goals (SDGs) in 2015, we attempted to focus on adjacent years of publication. In order to understand the flow of research, we conducted a centrality analysis, which is widely used in quantitative research relating to social network analysis, and performed cluster analysis to identify relationships with related research. The results of the analysis are represented in the form of network maps, and the role of each keyword was clarified based on these network maps. In addition, the overall flow is explained through discarded and emerging keywords, and the relationships with related fields are explained through cluster analysis. The findings could serve as a basis for policymakers, urban managers, and researchers seeking a comprehensive understanding of the smart city concept in urban planning areas. Introduction A smart city may be considered an advanced concept related to the concepts of the information city, digital city, intelligent city, and sustainable city [1,2], and has been widely cited and studied, along with the sustainable city concept, since 2013 [3,4]. According to Google Trends regarding the "smart city" (accessed on 13 March 2019) (Figure 1), searches for the smart city have been on the rise since 2004 and peaked in 2015, but have remained high, suggesting many studies and discussions are occurring on the smart city concept [4,5]. However, the concept of a smart city is controversial, and no exact agreement has been reached on its definition. Despite efforts to conceptualize the smart city in many research fields and studies, most definitions of the smart city have been ambiguous or duplicated [3,6]. For example, many studies have used smart city, smart sustainable city, sustainable smart city, and so on interchangeably. These terms need to be clearer because they have the potential to be confused with specific and related terms, often interactively used by policymakers, planners, and researchers, when considering the common aspects of urban sustainability [3]. It is not yet possible to establish a comprehensive approach that addresses the various dimensions of sustainability at the urban level [7]. However, in general, the smart city is understood as an ideal model for urban planning and development, adaptation to environmental issues such as climate change and global warming, and efficient utilization and management of energy. In addition, Information and Communication Technologies (hereafter ICT) will extensively and effectively help cities achieve a comparative edge [4,8,9], and be used as the tools and means to develop Intelligent Transport Systems (hereafter ITS) with mobility information and the Internet of Things (hereafter IoT) [10][11][12], as well as to achieve urban policy making based on governance and open data [13][14][15].
Accordingly, in improving the urban quality of future urban areas, the term smart city is considered as an umbrella concept that includes various sub-concepts such as sustainable smart environments, smart technology, smart energy, smart transportation, smart mobility, and smart government [16][17][18][19][20][21]. The concept of the smart city has emerged over the past decade through ideas of ways to improve the functioning, efficiency, and competitiveness of cities, and solve their environmental challenges. Early on, it was speculated that ICT would play a key role in the smart city [22][23][24]. With the development of ICT, the functions of urban management were improved in various fields, such as transportation, energy, health care, and water [2], and the use of ICT facilitated the development and delivery of information and knowledge generated in daily life, promoting citizens' participation in e-governance and e-services [25]. Additionally, ICT is a technical platform for the process of collecting and processing massive amounts of data, called big data, enhancing digital devices, Internet services, the IoT, and the Internet of people's societies [25], and these techniques and technologies have been recognized as tools of urban planning to create innovative, intelligent spaces and improve urban sustainability [26].
In this way, the information gathered by these processes is accelerated to achieve intelligence and efficiency in managing urban resources and settings [27][28][29]. Collectively, ICT-based predictive analytics can demonstrate the best implements for gaining insight into data for future decisions [30,31], and enhance the outcomes for other stakeholders in the smart city area [2]. A new and important flow in ICT is the identification and utilization of meaningful data collected from information systems [32], and the analysis of the data in various applications in smart cities [33]. ICT is being spotlighted in urban planning as one of the key components of the urban infrastructure that enables access to a smart city [34], and the smart city concept relies on the IoT technology's visions of pervasive computing and related big data applications [20,35]. Many process technologies have been introduced to understand and analyze a lot of the information connected to the IoT, among them data mining which is one of the most valuable technologies [36]. Technological and technical advancements in ubiquitous computing, wireless sensor networks, cloud computing, and machine learning have adopted by big data analytics as supporting tools [37][38][39]. Moreover, smart devices share their own information and access with other devices, and generate information with internal applications by themselves [40]. Through these computing and ICT processes as core enabling technologies, the information is used for understanding, analyzing, evaluating, and monitoring, and this contributes towards the goals of sustainable development in the sustainable city [35]. Predictions and outcomes analyzed from information gathered through ICT may be efficiently reflected in various aspects within urban areas, and help the decision-making process in urban service policies, such as those around the environment, education, and well-being. In light of this, ICT is deeply involved in the need for smart, data-centric technologies for dynamic and evolving urban plan systems aiming towards sustainability for the management and development of urban functions [35][36][37][38][39][40][41]. The concept of the smart city with advanced technologies is not certainly complete to achieve optimal sustainability in urban development and planning yet [20,42,43]. However, ICT analytics, including big data, are considered to be fundamental ingredients for urban analyses [20,42,43]. In addition, ICT is being considered to achieve long-term goals of sustainable development, as a way to mitigate increasing social-economic concerns and complex environmental challenges in modern cities in their various forms of sustainability, infrastructure, data analysis, and services [20,23]. The applicability of smart systems in contemporary cities requires the comprehensive understanding of the possibilities of how unpredictable and unprecedented urban issues, such as population growth, environmental pressure, and human welfare and safety, can be efficiently handled [41,44]. In this regard, the Smarter Cites Challenge program of IBM achieved smart city projects in 100 cities around the world, with essential themes for urban management such as urban planning, transportation, environment, civic engagement and civil management, and public safety [45]. Its program has helped global cities to significantly improve quality of life through data analytics [46]. 
Huawei, a technologies company in China, published a report in 2017 comparing the 20 cities in the United Kingdom in detail, with themes such as digital innovation, social management, urban mobility, energy education, and sustainability, to address challenges facing cities and communities moving towards strategic smart cities [47]. Cities are absolutely required in the process of urban planning; utilizing their infrastructure and technologies, and cooperation from citizens, will be needed in order to approach the optimal smart city, because the ultimate goals of urban planning based on sustainable development are to improve the quality of life of citizens [48]. Citizens are deeply involved in urban initiatives and governance, and contribute to disseminating smart devices and Internet sharing, and generating information [49]. In implementing smart city projects, citizens should be considered as important decision makers, with their priorities for the strategies and goals to be understood as relating to the needs and challenges in their own city; the government should support their initiatives and governance [50]. In fact, a study supported by the EU reviewed 300 initiatives in smart cities and the community highlighted that governance, which consist of citizens, government agencies, private companies, and investors, should be important in the processes of resolving problems and making policy decisions [51]. The positive effect of governance frameworks based on citizens, companies, and governments should not be understated in smart urban planning, and smart governance frameworks that are established must be credible to community members, stakeholders, and experts. While various indicators of urban functions and development have been used, few studies have indicators for assessing the smart city [52]. At a time when the definition of perfection is not yet agreed upon, the assessments of smart cities have been conducted differently in the ICT-centered approach and the people-centered approach [53]. Moreover, the form, size, and funds for each city are considered priorities as fundamental dimensions; small cites are not guaranteed to have an effective understanding of innovative strategies, and smart strategies should be harmonized with government policies [9]. In this regard, the reflection of smart strategies should require a considerate approach depending on the degree of urban development, the latest technologies, and the composition frameworks of governance with the community [54]. In light of the above, smart cities have the potential to provide better urban services to urbanities than urban planning in the past. Existing cities would be applicated by the smart systems as they become accustomed to new technologies that trigger a new paradigm shift in urban planning. Many cities around the world are considering approaches to achieve sustainability in their respective urban environments. Smart concepts are not inherent to the building of cities, but should be considered as the root of a big urban concept. Therefore, cooperation in various research fields is required to address challenges arising in real time based on ICT, utilize data, and reflect this in urban planning. However, few studies have been performed in collaboration or in joint study with other research fields on the actual application of smart functions. 
Most of the literature is on the advanced technology of urban planning, focusing only on specified fields such as transportation, building, and energy, where ICT was applied to existing cities to emphasize a new city brand [55]. Others have noted that an effective smart policy is required to develop suitable infrastructure, along with governance by a watchdog and collaborator through public-private partnerships [5,9,56]. However, a smart city cannot be led by just any organization or government, nor can it be achieved in one study area. Understanding the smart city based on the literature requires identifying the concepts of the evolving process of urban planning. In particular, we attempt to highlight a new, feasible urban planning system based on the smart concept with other studies areas, and identify the flow of the application of smart systems in other research areas. However, most of the studies on smart trends so far are qualitative research projects. Quantitative and comprehensive research is required to adapt to rapidly evolving cities. We aimed to comprehend how the smart concept can be applied to urban planning. This paper provides the flow of the smart city and connection of other study areas, in terms of urban policies. The Concept of the Smart City Research has been conducted on diverse aspects of the smart city in various research areas. There are many definitions of a smart city; none have been widely recognized, although they can be summarized to concepts. Through some review papers, we came to our understanding of the concept of the smart city of urban planning. Trindade et al. [4] used qualitative methods to identify information about research, models, frameworks, and tools, considering 'smart city' and 'sustainability' as keywords in published web papers. Their paper emphasized that the smart city affects the concept of sustainable urban development. D'Auria et al. [57] examined the concepts and relevance of the 'smart city' and 'sustainable city', utilizing a systematic review through H-index on the Web of Science. The smart city has the goals of urban planning reflected in a new philosophy and approach, but the two concepts cannot be considered in contrast, although it was stressed that the principle should be aligned with sustainable development. Yigitcanlar et al. [58] reviewed 35 academic works about smart cities and insisted that cities could not be smart without sustainability. They highlighted that smart cities need to have appropriate technology, complex city management, and consensus on concepts of sustainability for future sustainable cities. Meijer et al. [59] presented technologies, human resources, and governance as the concepts of the smart city through qualitative analysis of three phases (search, paper selection, and review). They defined that the smart city is human capital, as attracting human capital among various individuals and governance, and these human resources, are used to operate and maintain the smart city through the use of ICT. Yigitcanlar et al. [60] did a systematic review based on literature aimed at conceptual development of smart cities. They addressed the idea that smart cities are more than a technology concept, with goals such as productivity, sustainability, accessibility, well-being, lifestyle, and governance linked to communities, technologies, and policies. Albino et al. [50] attempted to clarify the meaning of the concept of the smart city through a literature review of papers published after 2008. 
The results stressed that because the concept of a smart city is not universal and is highly complex, the visions and conditions of a given city should be reviewed in order to approach the idea of a smart city. Arroub et al. [61] identified people, infrastructure, and operations as core to the concept of the smart city, which depends on the geographical, environmental, economic, and social constraints of each city. As sub-concepts, they highlighted education, health care, and social programs in terms of human services; energy, water, and transportation in terms of infrastructure; and governance, public safety, and managing urban resources in terms of operations. Giffinger et al. [19] evaluated 94 small and medium-sized cities in Europe through smart city indices. They also pointed out the ambiguity in understanding the concept, stressing that it should be intelligently integrated into the areas of industry, education, civic engagement, and technology infrastructure, and applied to citizens. Accordingly, a smart city was considered to reflect two major trends: one was integrated networks as a collection of smart devices, sensors, and real-time big data with ICT related to human life, and the other was a new paradigm in urban planning policies related to governance and the economy [62]. Nam and Pardo [63] argued that the key components of a smart city are technologies, people resources such as creativity, diversity, and education, and institutions such as policy and governance. To sum up, the smart city may be considered to be deeply involved in various planning areas within a city, based on ICT technologies, such as infrastructure, education, environment, public welfare, safety, and participation, with the goal of sustainability, and maintained by human resources such as governance and frameworks. ICT in the Smart City A smart city should provide a network integrating technologies, systems, services, and capabilities for sufficiently multi-sectorial and flexible future development that is open-access [50]. Governments and public institutes at all levels should aim to improve strategies and programs by reflecting the concept of smartness in their existing policies [63,64]. This means that ICT is the foundation for promoting new forms of technology, and the facilitator for more broadly and innovatively balanced development [65]. Therefore, it is important to have a better understanding of the areas in which ICT development provides the greatest advantages for society and the environment, because it has great potential for urban system management and urban sustainability [66]. ICT can be applied from the regional level to the national and world level, for example in forecasting environmental pollution and weather, energy transportation, and transport management [67][68][69]. Indeed, ICT has adopted a broader approach to the most important aspects of people's everyday lives, stimulating smartness in the components of the city [19], and also allows the smart ecosystem to expand its smart space from a personal context to a large community or the city as a whole [70]. In fact, data generated by ICT are used in data analytics across various urban fields: data are produced by smart devices such as smartphones, smart sensors, social network services, wearable smart devices, the Internet, and the IoT, collected through data mining techniques, and analyzed and utilized with approaches such as machine learning and deep learning.
The processes and results of ICT can be reflected throughout the entire city, and used to predict urban challenges. Sanseverino et al. [71] comprehensively reviewed the smart urban concept, and compared the smart city concept of Europe to cities in China. The authors stressed the integration of cities through ICT infrastructures with smart initiatives and a smart governance system in urban intelligent solutions for energy, agriculture, transportation, buildings, and urban services, and advised moving away from a government-led top-down approach and investing in ICT infrastructures with a long-term view. IoT applications such as smart grids, environmental monitoring, and intelligent lighting were emphasized as good examples of reducing the environmental impact of pollution and energy consumption [72]. Governments around the world are adopting and utilizing big data in ICT as part of moving towards smart cities, to improve living conditions for citizens. Big data is a technology with enormous potential to improve smart city services, which could be reflected in national and urban policies [69]. In fact, data mining techniques can collect real-time data generated by smart devices, including smartphones, wearable smart devices, smart applications, and the Internet. The IoT is also being developed for deep learning technologies with machine learning that can analyze and utilize data collected from cloud computing. Governance with the Smart City Since ICT cannot transform cities without human capital, ICT should not be separated from human capital in the smart city [24]. The importance of governance is increasing, in order to manage initiatives or projects to make a smart city. According to Meijer et al. [59], smart city governance is required because new forms of human collaboration through utilizing ICT are beginning to create better results and more open governance processes. These authors emphasized that smart city governance should not be treated as a technical issue, but studied from social, political, and institutional points of view. Odendaal [73] argued that smart governance promotes data exchange, service integration, collaboration, and communication. Besides, frameworks of human capital and governance are emphasized, which play important roles individually as well as in the community, groups, and components of the entire city [74]. Smart governance is considered a core component of smart city initiatives, because it promotes interaction between people, policies, information, and technologies [75]. Smart governance enables creativity and innovative implementation for the smart city, and all initiatives require collaboration, disclosure, and participation based on smart governance models, which are essential components of the smart city [76]. Thus, smart governance is a new channel of communication between government and citizens, such as e-governance, and requires cooperation between government departments and local communities [19,74]. In fact, creations and data from initiatives and governance should not end with analysis and prediction. Government servants will continue to communicate with citizens, so that they can co-produce and create new services together [75]. The participation and cooperation of private technology providers is also considered an important element of smart governance, because different stakeholders are involved in the development of technologies for the smart city.
Accordingly, models of smart governance with government, business companies, and citizens are proposed to promote the transparency of society. Citizens are able to suggest opinions or express complaints about government policies through various communication channels, such as the Internet, apps on smartphones, and telecom services [77,78]. Business companies are willing to acquire new knowledge and information in line with the government's policies, and can contribute to government policies through analysis of real-time data and technology development [79]. Based on ICT tools, governments are able to perform tasks with other departments online more quickly, with immediate access and sharing of data available to officials [80]. After all, the overall governance framework should be built for a sustainable smart city. Data Collection 2016 is considered important as a time when new national policies have implemented to achieve goals under international agreements. The United Nations' Sustainable Development Goals (SDGs), agreed in 2015, specified inclusively sustainable urban society and residences (target 11). In particular, Mauritius published a report in which a smart city scheme reflected on the SDGs in Feb 2016. A new climate change strategy was adopted in the Paris agreement in December 2015, and the Sendai framework was adopted by the United Nations Office for Disaster Risk Reduction in 2015 to reduce and mitigate damage of disasters. Therefore, we noted the global debates around 2016 in the data collection process. We investigated "Scopus", a representative international thesis search engine that provides bibliographic information, to grasp the research flow of a smart city. Using "smart city" search terms, keywords from a total of 5526 articles were extracted from 1970 to 13 March 2019, focusing on the social sciences and environmental science subject areas. These two fields were described as areas where the social, economic, and environmental aspects of urban planning have been studied extensively [81]. These subject fields needed to be addressed in terms of urban planning because these three aspects influence the living conditions of urbanites [82]. In this process, a total of 4281 articles were used in the study, excluding articles that did not include keywords information. As illustrated in Figure 1, the data used in the analysis increased rapidly from 2015. The collected keywords were refined in Excel to prepare for overlapping meanings of words due to the problems of singular and plural forms, upper-case and lower-case letters, abbreviations, and full words written together. Although prominent authors expressed that they wanted their articles to be included as keywords, the refining process was inevitable because the purpose of this study is to grasp the overall flow and related research fields, and the form of keywords varies depending on each journal style. For instance, "Information and Communication Technologies (ICT)", "Internet of Things" and "cities" were changed to "ICT", "IoT", and "City", and the phrase "smart city" was removed due to it being related to other words. Figure 2 shows the amount of data for each year in this study. The collected keywords were refined in Excel to prepare for overlapping meanings of words due to the problems of singular and plural forms, upper-case and lower-case letters, abbreviations, and full words written together. 
Methods Keyword network analysis, one of the social network analysis methods, was used to explore trends and the relationships between research topics in many study fields, including information science [83], medical research [84], computer science [85], and science research [86]. Recently, studies have gone beyond the analysis of relationships between individuals and organizations used to identify individuals and other networks. Related information or opinions on social media, such as Twitter posts about certain people, things, and even presidential speeches, have also been used for analysis [87][88][89]. In addition, this analysis process contributes to greatly reducing the effort and time required for a traditional literature review, and can be applied to all fields of science [90,91]. A process should be performed for the extra analysis of identifying meaningful keywords, because keyword network analysis relies on the searched keywords. There is a variety of verification analysis methods, but this study conducted co-occurrence keyword network analysis. It focuses on the co-occurrence links between keywords in the literature, and is useful in understanding components and knowledge structures in the scientific and technical fields [82,85,90]. The co-occurrence keyword network represents the number of times a pair of words occurs simultaneously in multiple articles, which constitutes the weight of the link that connects the pair [83,84]. A network map of this analysis consists of nodes and links, in which each keyword is a node, a pair of co-occurring words is a link, and the number of times a pair occurs simultaneously in multiple articles gives the weight of the link that connects the pair. In this study, keywords with a co-occurrence frequency above 10 were used as data (Figure 3).
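To make the co-occurrence construction concrete, here is a minimal sketch of how a keyword co-occurrence network of the kind described above could be built in Python; the threshold matches the text, while the data structures, helper names, and toy records are illustrative assumptions (the study itself used dedicated network analysis software).

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(articles: list[set[str]], min_weight: int = 10) -> dict[tuple[str, str], int]:
    """Count how often each pair of keywords appears together in an article.

    `articles` is a list of keyword sets, one per article. Each pair kept in
    the result is an undirected edge whose weight is its co-occurrence
    frequency; pairs below `min_weight` are dropped, mirroring the threshold
    of 10 used in the study.
    """
    pair_counts: Counter = Counter()
    for keywords in articles:
        for a, b in combinations(sorted(keywords), 2):
            pair_counts[(a, b)] += 1
    return {pair: w for pair, w in pair_counts.items() if w >= min_weight}

# Toy usage with made-up records:
articles = [{"IoT", "big data", "sustainability"},
            {"IoT", "big data"},
            {"urban planning", "sustainability"}]
print(cooccurrence_edges(articles, min_weight=2))  # only the ('IoT', 'big data') pair reaches weight 2
```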
This study is designed to identify the trends and flows related to research on the smart city concept through keyword analysis. In this study, keyword network analysis includes the process of constructing the network using the relationships of the collected keywords within research articles, and analyzing its structure. Our analysis consists of two measures, degree centrality and betweenness degree, which are useful for identifying the role of words in the overall network map. Centrality analysis measures the importance of a node (word) and has the potential to interpret its structure and express its key properties. Betweenness analysis measures the number of times the shortest path between nodes passes through a word, i.e., how often the word acts as a bridge between nodes; a node with a high betweenness value may have a significant impact on the overall network [90]. These analyses express trends in the latest studies with the centrality degree, and relevance to other studies with the betweenness degree [92][93][94][95], and are useful in understanding the properties of words and the flow of the entire network in the co-occurrence networks, using NetMiner 4 software (social network analysis software, CYRAM, Seongnam, Korea). Keywords before 2016 We analyzed the keywords of papers, conference proceedings, and books from 1979 to 2015. Significant keywords were identified based on a frequency above 10, representing nodes, and links were identified via degree centrality and betweenness degree analysis. A frequency of more than 10 meant that one keyword appeared in more than ten papers, and the number of keywords used in the analysis was 381 (21 types). The total number of papers is 1294. Over 36 years, the frequency of the words in decreasing order was as follows: smart growth (59), sustainability (32), ITS (26), mobile application (25), urban sprawl (22), and Internet of Things (22). The degree centrality of the words in decreasing order was as follows: smart growth (0.25), sustainability (0.15), big data (0.15), sustainable development (0.15), urban planning (0.15), and GPS (0.15). The betweenness degree of the words was as follows: climate change (0.526316), big data (0.542105), governance (0.521053), urban development (0.521053), and mobility (0.510526) (Table 1).
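For readers who want to reproduce this kind of measurement without NetMiner, the sketch below computes degree centrality and betweenness centrality on a weighted co-occurrence network with the networkx library; it is an assumed, simplified stand-in for the software actually used, and the edge list is invented for illustration.

```python
import networkx as nx

# Hypothetical weighted co-occurrence edges (keyword_a, keyword_b, frequency),
# e.g. produced by the counting sketch shown earlier.
edges = [
    ("smart growth", "sustainability", 14),
    ("smart growth", "urban sprawl", 12),
    ("big data", "mobile application", 11),
    ("big data", "governance", 10),
    ("sustainability", "sustainable development", 13),
]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Degree centrality: how many distinct keywords a word co-occurs with,
# normalized by the number of other nodes (comparable to the 0-1 values reported).
degree = nx.degree_centrality(G)

# Betweenness centrality: how often a word lies on shortest paths between
# other words, i.e. how strongly it bridges otherwise separate topics.
betweenness = nx.betweenness_centrality(G, normalized=True)

for word in sorted(G.nodes, key=degree.get, reverse=True):
    print(f"{word:25s} degree={degree[word]:.3f} betweenness={betweenness[word]:.3f}")
```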
Overall, the network map was linked by a main flow consisting of the words "city", "climate change", "governance", and "big data", which had a high betweenness degree, while the words "urban development" and "mobile application", with a high degree centrality, were located on both sides of the main flow, where these two keywords were linked to other related words. Focusing on these two words, "urban development" was related to words concerning overall urban planning and development, and "mobile application" was related to words concerning the tools and technologies of mobile use. In other words, over 36 years, many studies have been conducted involving urban development, city, climate change, governance, big data, and mobile applications; in particular, urban development studies were considered to be primarily conducted with the words urban planning, sustainable development, urban sprawl, urban form, and sustainability, centered on the words "smart growth". Further, sustainable development studies had been conducted based on smart grids with renewable energy and big data with open data research, and mobile applications had been utilized in research on ITS, the IoT with cloud computing, and GPS with GIS and mobility (Figure 4). In the network map, the properties of the words are represented as nodes: the larger the size of a node, the higher the degree centrality, and the darker the color of a node, the higher the betweenness degree. The words that largely make up the flow of the research trend in the analysis were urban development, city, climate change, governance, big data, and mobile application, which had a high betweenness degree.
These words were keywords in areas related to the smart city concept up to 2015. In particular, urban development and mobile applications play significant roles in understanding the flow of related research, as important keywords that connect other keywords in the overall structure. 'Smart growth' was the highest in frequency, but is not highly relevant to related studies in the network, while climate change, on the other hand, was one of the lowest-frequency words, but its relevance to related studies is very high in the network. In addition, the words smart growth, urban form, and urban sprawl, which have a high degree centrality, were studied actively as keywords, but mainly within the large frame belonging to urban development. The study of mobile applications was conducted in relation to big data, ITS, the IoT, and GPS, and was particularly relevant to big data. In other words, studies on the concept of the smart city up to 2015 were most active in urban development, and some studies were conducted with mobile applications and big data as keywords. Overall, given the study of relevant areas, it can be expected that studies of conceptual approaches have been conducted more often, rather than studies of active applications or utilization. Keywords after 2016 We also analyzed the keywords of papers, conference proceedings, and books from 2016 to 2019.
Significant keywords were identified based on a frequency above 25, represented as nodes and links via degree centrality and betweenness degree analysis, and the number of keywords used in the analysis was 1164 (22 types). The total number of papers is 4232. Over more than three years, the frequency of the words in decreasing order was as follows: IoT (248), big data (133), sustainability (81), smart grids (71), ICT (51), and cloud computing (48). The degree centrality of the words in decreasing order was as follows: data analytics (0.238095), sustainability (0.190476), and ICT, cloud computing, data mining, machine learning, urban planning, and innovation (0.142857). The betweenness degree of the words was as follows: urban planning (0.666667), data analytics (0.500794), sustainable development (0.495238), ICT (0.466667), and sustainability (0.456524). From this result, it could be inferred that words such as data mining, innovation, and IoT, with a high degree centrality, have been used widely in research, and that words such as urban planning, sustainable development, and ICT, with a high betweenness degree, have been used in many research projects together with other keywords. In particular, for data analytics and sustainability, the high centrality and betweenness degrees played a key role in the flow and direction of the research (Table 2). Overall, the network map was linked by the main flow of the words sustainable city, ICT, sustainable development, urban planning, and data analytics, with a high betweenness degree. The words urban development and mobile application, with a high degree centrality, were located on both sides of the main flow, and these two keywords were linked to other related words. In particular, the urban planning word played a significant role as a keyword; it had a low degree centrality value, but a high betweenness degree value. Sustainability was related to innovation, city, and renewable energy, and urban planning was related to data analytics and geographic information systems. Data analytics was deeply related to big data, cloud computing, data mining, and machine learning (Figure 5). The words that made up the flow of the research trend in the analysis were sustainability, ICT, sustainable development, urban planning, and data analytics, which had a high betweenness degree. These words were keywords in areas related to the smart city concept after 2016. Among them, the research relating to sustainability and data analytics has been the most active, and the relevant fields of research of these two words have been identified. In particular, the urban planning word, which had the highest betweenness value in the network map, was considered important in understanding the flow of the smart city concept, as it was located at the center of the network map while playing an important role in connecting the words data analytics, sustainable development, and GIS. On the other hand, the IoT word had the highest frequency and a high degree centrality, but did not have a high betweenness degree. This means that there have been studies using IoT as a keyword, but few related to other deep studies. Overall, compared with previous studies up to 2015, the studies after 2016 have been more actively focused on collecting and analyzing data to apply the smart city concept to urban planning.
Keywords before 2016 To further analyze the relevance of the words, we conducted a cluster analysis. This analysis was useful for understanding the related research within a large framework. The cohesion index describes the concentration within a group: at a value above 1, the concentration density inside the group is greater than outside the group. This analysis was based on the cohesion of each word, organized into groups with high cohesion. In other words, the words in a group were considered keywords for related studies, and can be used to interpret the flow of research with group cohesion. As a result of the modularity cluster analysis, the related research projects were confirmed. Cluster 1 included words related to "related fields" such as "climate change", "governance", and "big data". Cluster 2 included words related to "smart technologies" such as "mobile application", "GPS" and "Internet of Things". Cluster 3 included words related to the "smart concept of urban planning and development" such as "urban development", "sustainability", and "sustainable development" (Figure 6).
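As a rough counterpart to the modularity cluster analysis above, the following sketch groups keywords into communities with networkx's greedy modularity algorithm; it is an illustrative substitute for the clustering actually performed in the study, and the edge weights are invented.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical weighted co-occurrence network (keyword_a, keyword_b, frequency).
edges = [
    ("climate change", "governance", 12), ("governance", "big data", 15),
    ("mobile application", "GPS", 11), ("GPS", "Internet of Things", 10),
    ("urban development", "sustainability", 18),
    ("sustainability", "sustainable development", 20),
]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Partition the network into communities by greedily maximizing modularity;
# each community is a rough analogue of the clusters reported in the text.
communities = greedy_modularity_communities(G, weight="weight")
for i, community in enumerate(communities, start=1):
    print(f"Cluster {i}: {sorted(community)}")
```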
Up to 2015, studies can be considered as introducing and applying specific areas of smart concepts. Overall, the words included in Cluster 1 were words from the higher concepts of the studies. Cluster 2 was considered a means of utilizing the "big data" of Cluster 1, and Cluster 3 was also considered a sub-concept based on the "city" of Cluster 1. Cluster 1 consisted of words with a high betweenness degree, but its clustering properties were the lowest. This suggested that many studies have been conducted with these words as the main keywords, mainly at the higher concept level, while fewer sub-studies were involved. The words in Cluster 2 consist mainly of "mobile applications", suggesting that many studies related to the big data of Cluster 1, or consisting of its sub-concepts, had been carried out. The value of the cohesion index was the highest for Cluster 3, related to the "city" of Cluster 1. This suggested that many studies that were deeply related to words in the same cluster were actively carried out. As a result, many studies based on words and their combinations in Cluster 3 were conducted, mainly focusing on the conceptual application of the smart city concept and its introduction to existing cities. In addition, interpreted in terms of the cohesion index, Cluster 1, with the lowest index value, was mainly used as a set of important keywords for studies at a higher conceptual level, but was studied together with other keywords. On the other hand, Cluster 3, with the highest cohesion value, suggested that words from the same cluster were studied together as keywords. Combining the trends of studies up to 2015, the urban studies and technology fields had been the main fields investing in the study of the smart city, and the concept of the smart city was still seen as being in a period before being actively introduced and applied to urban areas (Table 3). Keywords after 2016 After 2016, the related research was confirmed. Cluster 1 included words related to the "sustainable smart city", such as "sustainability", "innovation", and "open data". Cluster 2 included words related to "data analytics", such as "machine learning", "big data", and "cloud computing". Cluster 3 included words related to "smart urban planning", such as "sustainable development", "ICT", and "energy efficiency". Overall, the words included in Cluster 1 were concepts related to a smart city based on sustainability, which was linked to ICT in Cluster 3. Cluster 1 studies based on sustainability implied that they were related to ICT. Cluster 2, which included words from the process of utilizing and applying data, was associated with urban planning, and implied that many studies based on collecting and analyzing data were conducted in relation to urban planning.
Cluster 3 contained urban element words around urban planning related to sustainability and data analytics, and suggested that many studies have been conducted in fields related to the urban planning elements of a smart city (Figure 7). In terms of the cohesion index (Table 4), Cluster 3, with the lowest index value, was mainly used as a set of important keywords for the related studies at a higher conceptual level; ICT was related to sustainability, and urban planning was related to data analytics. On the other hand, the cohesion of Clusters 1 and 2 was high within the entire network, which implied that most of the studies involved in data analytics have been performed in conjunction with the words in their own cluster. Since 2016, many studies related to the words of Cluster 2 have been conducted, with these words serving as keywords for studies consisting of technologies and the process of data analysis. Combining research trends after 2016, smart city research has become more specific and detailed than in the past. In particular, the results for data analysis suggest that various data collection methods and analysis techniques were studied as important elements. This confirmed that traditional data analysis was used on an open data basis, as a large concept, and that related research was used as a basis for mobile applications. Information gathered from existing GPS and smart devices can now be expected to be collected more automatically. In doing so, data analysis is considered to be the most important element in smart cities, and the most advanced area. In addition, open data has also become more usable than in the past, and can be expected to be utilized more as data. Comparing the Keywords The words appearing in the studies until 2015 were as follows: climate change, ITS, mobile application, mobility, GPS, smart growth, urban sprawl, urban development, and urban form. Among them, smart growth was researched as the keyword with the highest frequency and degree centrality, and mobile application, mobility, and climate change were the keywords with the highest betweenness degree related to other research. In particular, up to 2015, smart growth was addressed as a keyword in a particularly large number of research projects.
On the other hand, the words that appeared after 2016 were as follows: energy efficiency, innovation and E-government, data mining, machine learning, security, deep learning, data analytics, ICT, and optimization. Among them, data mining and machine learning were researched as keywords with a high centrality degree, and data analytics, ICT, and innovation were keywords with a high betweenness degree related to other research. In particular, after 2016, data analytics and ICT were frequently used as keywords (Figure 8). Words addressed as keywords up to 2015 were used in research as big concepts in fields related to the concept of the smart city, such as climate change, used in mobile applications through geographical information and mobility, and as concepts of overall urban development fields such as smart growth, urban sprawl, and urban form. On the other hand, the words that emerged after 2016 have been reflected in many research projects as elements of the sustainable smart city, such as innovation and E-government, in words for data analytics such as data mining and machine learning, as well as in words for elements of urban planning for the smart city, such as ICT and energy efficiency. Comparison of Keywords on Smart City In the whole flow of the emergence and disappearance of words, up to 2015, studies had been carried out focused on the concept and introduction of a smart city, related to adaptation and mitigation of climate change, and applied to transportation systems based on mobile applications and geographic information, such as that derived from GPS. Since 2016, studies have evolved towards research based on sustainability, with the Internet, information, and technologies as key focuses. The compositions of the clusters were extended with words such as E-government and ICT in order to apply the smart city concept, and various processes such as machine learning and data mining were added to the data analysis process. In other words, previous research focused on conceptual and specialized fields, while recent research has focused on actively applying smart urban planning based on big data from broader fields, with the development of information technologies (Figure 9).
We can also identify keywords that should be highlighted in the current smart city concept, based on the keywords appearing continuously. Firstly, a smart city also aims for sustainable development as a part of urban planning with sustainability. Secondly, open data, IoT, cloud computing, GIS, and smart grids, as a basis for collecting big data, can contribute to energy fields and governance. Thirdly, if all the words mentioned secondly are considered as the basis for big data collection, they would all contribute to the smart city. In the end, big data will play the most important role in smart cities; however, since the condition of the environment and infrastructure varies from city to city, these interpretations are debatable. Nevertheless, we emphasize that the constantly used keywords should be reflected in the basic concepts and systems of a smart city. The Flow on Smart City Based on the Articles and Conferences We analyzed the bibliographic information of the collected data and verified the document types and publication years. There were 2327 conference papers (54.36%), followed by 1538 (35.93%) articles, 223 (5.21%) book chapters, 97 (2.27%) reviews, 77 (1.8%) articles in press, and 29 (0.44%) other types of documents, such as editorials, errata, letters, and notes (Figure 10). It was confirmed that, as of 30 March 2019, more than half of the documents studied for keywords relating to the concept of the smart city were published as conference papers. This means that conferences have become more frequent and popular in many areas under the theme of the smart city.
The papers, including articles, articles in press, and reviews, were published in a total of 544 journals. Among them, journals with more than 25 such papers are listed in Table 5. Most of them were published by Elsevier, but the journal Sustainability from MDPI had the highest count. The scope of these journals mainly covers sustainability of the urban environment, data collection and analysis, and technical energy. Given the scope of research pursued by these journals, Figure 9 includes most of the keywords that appear continuously, which can help predict the research flow and related areas in the smart city concept. Proceedings papers came from a total of 298 conferences, and conferences with more than 50 papers are presented in Table 6. Most are international conferences on the concept of the smart city and various other topics, ranging from general international issues to regional associations studying a country's issues. Additionally, most of the conferences have been held since 2016, and most of the venues were in Asia, especially China. Other than those shown in Table 6, some conferences are held annually, such as the Intelligent Transport Systems (ITS) World Congress. In addition, the keywords emerging after 2016 in Figure 8 are deeply related to the themes of the conferences held after 2016, and are expected to have been discussed extensively at these conferences. We note that there have been many conferences in China since 2016. According to the Deloitte China report published in 2018 [96], China's government proposed new smart society initiatives in its national strategy in 2016, and embarked on self-assessment of the smart city to achieve high urbanization growth globally. The people-centered approach was first introduced as an element of the smart society at the 19th CPC National Congress in 2017. Since 2016, 95 percent of small and medium level cities have made an effort to introduce smart concepts, with many research projects underway. China ranks first in the international community, with 500 pilot cities under smart city construction, which form a smart city cluster near the Yangtze and Pearl rivers. Many related studies are also being conducted within these cities on index developments that may be applicable to each city. Further, the Chinese government has stepped up efforts to introduce and apply smartness. This smartness has been highlighted in the government report, "The 13th Five-Year Plan for Economic and Social Development of the People's Republic of China (2016-2020)", and is considered the future of Chinese urban planning through more technologically organized urban areas [71,97]. Therefore, many studies on the smart city concept are being conducted with investment from the Chinese government, and the many conferences held in China can be expected to serve as venues for academic information exchange.
So far, we have analyzed the flow of the smart city through keyword analysis, based on papers and conferences. Although it is not possible to define the flow of research precisely through analysis of keywords alone, this process is useful for understanding and identifying the flow of research, more so than reviewing papers by qualitative methods. In addition, the relationships between words based on keywords are visualized by numerical and network mapping, which is more flexible to understand and interpret, and is useful for grasping the overall context by analyzing the association of words through cluster analysis. However, the time required for analysis depends on the format or form of the data. The Flow of the Smart City Considering the various analyses based on keywords, this paper emphasizes the following four points. First, the sustainable concept should be considered important in a smart city. Sustainability should be reflected not as a concept disparate from the smart city, but as the underlying concept that is most fundamental. Sustainability has been a consistent keyword for a long time, and its role has increased further since 2016. Therefore, the controversy over sustainable smart cities and the smart sustainable city should be considered in each city's urban planning. In addition, renewable energy and energy efficiency should be considered important in maintaining urban sustainability. Second, as of 2016, the overall trend of research on the smart city has changed from urban development to urban planning. In the past, conceptual research on the smart city was undertaken to solve environmental challenges such as climate change or urban development. However, after 2016, the smart concept was emphasized more and in greater detail in urban planning, and in the utilization and practical application of big data analytics. In addition, words reflecting active adoption, such as open data and E-government, have been identified as keywords, and the role of governance is expected to change more significantly as it is linked to the innovation keyword. Third, in the past, the utilization of big data was concentrated in specific areas, such as ITS using GPS and smart devices. However, as of 2016, more technical flows, with the advent of various big data collection and interpretation technologies, have been emphasized than in the past. Finally, the reason the number of studies has increased since 2016 is likely to be the hosting of many conferences. More than half of the data has been published at conferences, at which many researchers exchanged and presented information to the international community. In particular, China has held many international conferences, and big companies such as Huawei are investing in smart city research. Conclusions The smart city has been regarded as an ideal city for solving the challenges that have arisen in various fields, such as the environment, energy, and transportation, within existing cities. However, many scholars and papers have questioned the difference between a smart city and a sustainable city, the latter of which many cities in the world have been pursuing. In order to introduce the concept of a smart city, various questions around the challenges to be reconciled have first been addressed as an agenda in international society.
This is a result of the unclear definition and concept of the smart city, and it is necessary to grasp and understand the flow of research that has been carried out so far, because the smart city is related to various urban elements. Therefore, this study analyzed the comprehensive flow of the smart city, using keywords of papers that have been published so far. Smart cities, which in the past were heavily researched for their conceptual introduction, are increasingly being studied in terms of sustainable urban planning. In particular, with advanced ICT, much research is being carried out on the utilization aspects of big data. However, because the research fields of the smart city concept are wide and diverse, it requires governance based on communication and cooperation among citizens, governments, stakeholders, and private companies. All must think together to promote higher urban services, and should work hard to apply smart concepts that are appropriate for each city. Besides, the fact that more conference papers than articles exist in this area indirectly suggests that more research is still needed. This would enable the interpretation, discussion, and exchange of information on individual studies at conferences, as the studies which have been carried out so far are limited in their comprehensive application to urban areas. This process will contribute to smart city projects and papers of high quality in the future. The limitations of our study were that, while keyword analysis was useful for overall flow understanding and quantitative analysis, it may not represent the whole content because it relies on keywords in the analysis process. What the authors wanted to express in each paper might not be captured by its keywords, and there was a possibility that the smart city concept was used as a trend-sensitive keyword. If the composition of the keywords does not represent the author's paper, the analysis results are likely to be misinterpreted. Second, the keywords that had important meaning in each paper were collected and analyzed, but the results were not compared to review papers written through other qualitative methods. If the relationship between keywords is unclear, or there is no connection point, the interpretation of the analysis results might be incorrect or misinterpreted. We propose two directions for keyword analysis in future research. The first involves a comparative analysis of keywords in papers on how smart concepts play a role in developed and developing countries undertaking new urban planning. By analyzing developed countries, the results would propose processes that are first required for developing countries, and by analyzing developing countries, the results would provide an important guideline for cities that will apply smart concepts in the future. The second is to select specific cities as study areas, collect keywords relating to the smart city from web-based big data, and analyze how smart concepts are being utilized within urban areas. These results would also be a good guideline for cities that consider smart concepts in new urban planning.
15,755.6
2019-06-04T00:00:00.000
[ "Environmental Science", "Engineering", "Computer Science" ]
Phase Response Design of Recursive All-Pass Digital Filters Using a Modified PSO Algorithm This paper develops a new design scheme for the phase response of an all-pass recursive digital filter. A variant of the particle swarm optimization (PSO) algorithm is utilized for solving this kind of filter design problem. It is here called the modified PSO (MPSO) algorithm, in which an additional adjusting factor is introduced in the velocity updating formula of the algorithm in order to improve the searching ability. In the proposed method, all of the designed filter coefficients are first collected into a parameter vector, and this vector is regarded as a particle of the algorithm. The MPSO with a modified velocity formula forces all particles to move toward the optimal or near-optimal solution by minimizing a defined objective function of the optimization problem. To show the effectiveness of the proposed method, two different kinds of linear phase response design examples are illustrated, and the general PSO algorithm is compared as well. The obtained results show that the MPSO is superior to the general PSO for the phase response design of digital recursive all-pass filters. Introduction An all-pass filter means that its magnitude response is exactly equal to some constant value at all frequencies, independent of frequency. The function of the all-pass filter is mainly to offer a phase modification without changing the magnitude of a given filter. It is rather useful in the theory of minimum-phase systems, in transforming frequency-selective low-pass filters into other frequency-selective forms, and in obtaining variable-cutoff frequency-selective filters [1]. Due to these advantages, the all-pass filter has been applied in many signal processing applications, including group-delay equalization, complementary filter banks, multirate filtering, and other fields [2][3][4][5][6][7][8]. A large number of methods for designing all-pass filters have been developed in recent years. In [2], for example, the author proposed a new design method for an all-pass filter with a least squares or an equiripple phase-error response. It is based on formulating a weighted error between the desired and the actual phase responses in a quadratic form. Filter coefficients can be solved by using a Toeplitz-plus-Hankel matrix. In [3], an IIR all-pass filter with equiripple phase response was designed based on the eigenvalue problem, and this design problem can be formulated as the representation of an eigenvalue problem via the Remez exchange algorithm. A Hopfield neural network was combined with the design of IIR all-pass digital filters [5]. In that case, filter coefficients can be evaluated by Hopfield neural networks in a parallel manner in accordance with the error function that is formulated as a Lyapunov energy function. In addition, the authors of [7] developed a digital linear phase notch filter design scheme based on IIR all-pass filters. The designed filter can be realized by a parallel connection of two IIR all-pass filters with approximately linear phase. The design algorithm exhibits fast convergence and easy determination of initial values [7]. Unlike the above-mentioned design schemes, this paper attempts to utilize a modified particle swarm optimization (MPSO) algorithm to solve the digital recursive all-pass filter design problem. This developed algorithm is a variant of the general PSO, but it has a better searching capacity in solving optimization problems.
The detailed description of the MPSO algorithm is given later. The remainder of this paper is organized as follows. Section 2 gives a brief description of the recursive all-pass digital filter. In Section 3, the modified PSO algorithm is introduced in detail and the MPSO-based design steps for the all-pass digital filter are given. Section 4 provides two different kinds of examples to confirm the applicability of the proposed method, together with comparisons against the general PSO. Finally, a conclusion about the proposed method is given in Section 5. Recursive All-Pass Digital Filter Let us consider a recursive all-pass digital filter whose transfer function is expressed by
H(z) = z^{-N} A(z^{-1}) / A(z),   (1)
where N represents the order of the filter, a_0 is always set to 1, a_1, a_2, ..., a_N are real coefficients, and A(z) = \sum_{n=0}^{N} a_n z^{-n}. Let z = e^{j\Omega}, where \Omega denotes the digital frequency; substituting this into (1) yields the frequency response, from which the following magnitude response is easily obtained:
|H(e^{j\Omega})| = |A(e^{-j\Omega})| / |A(e^{j\Omega})| = 1.   (2)
It is seen from (2) that the magnitude response is equal to one at all frequencies; that is, it is independent of the filter coefficients. Furthermore, the phase response is derived as
\theta(\Omega, \mathbf{a}) = -N\Omega + 2\arctan\!\left( \frac{\sum_{n=0}^{N} a_n \sin(n\Omega)}{\sum_{n=0}^{N} a_n \cos(n\Omega)} \right),   (3)
where a = [a_1, a_2, ..., a_N]^T is a parameter vector consisting of all filter coefficients. This vector fully determines the phase response behavior of the digital filter. In this paper, we want to design the parameter vector a such that the phase response achieves a certain design specification. This vector a is called the particle, or individual, of the PSO algorithm, and many such particles form a population; the adjusting mechanisms of the algorithm act on the full population. Moreover, it can be seen from (3) that the highly nonlinear function arctan(⋅) is involved, which is difficult to handle analytically. Thus, (3) usually needs to be recast in another form for phase response design [3][4][5]. However, the method proposed in this paper can use (3) directly for the phase response design of the recursive all-pass filter. Modified Particle Swarm Optimization (MPSO) Algorithm Kennedy and Eberhart initially proposed the PSO algorithm in 1995, and it has since become one of the most popular and efficient optimization algorithms [9]. Like most swarm intelligence algorithms, PSO is a population-based search algorithm. It simulates the social behavior of organisms such as fish schooling and bird flocking. Each fish or bird, viewed as a particle or an individual, represents a candidate solution to the optimization problem. Through the velocity and position updating formulas, each particle moves through the search space toward the global solution. Based on the PSO algorithm, various engineering optimization applications have been developed and explored in recent years, such as power system stabilizer design [10], PID controller design [11,12], FPGA implementations [13,14], Volterra filter modeling [15], QRD-based multirelay system design [16], automatic clustering [17], multifault classification [18], and aeroengine nonlinear programming models [19]. In addition, the authors of [20] developed a novel PSO algorithm in which the inertia weight is modified to enhance its search capability; the method was successfully applied to high-pass FIR digital filter design. Another design method for low-pass FIR digital filters with linear-phase properties was also developed in [21], where a new definition of the velocity vector and swarm updating of the PSO algorithm was proposed.
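For readers who want to evaluate the phase response in (3) numerically, a minimal Python sketch is given below. It is not part of the original paper: the function name, the use of NumPy, and the unwrapping step are our own choices, made under the assumption that the phase is evaluated on a dense, increasing frequency grid.

```python
import numpy as np

def allpass_phase(a, omega):
    """Phase response theta(Omega, a) of an order-N recursive all-pass filter, per (3).

    a     : coefficients [a_1, ..., a_N]; a_0 = 1 is implied
    omega : digital frequencies in radians/sample (dense, increasing grid)
    """
    omega = np.atleast_1d(np.asarray(omega, dtype=float))
    coeffs = np.concatenate(([1.0], np.asarray(a, dtype=float)))  # prepend a_0 = 1
    N = coeffs.size - 1
    n = np.arange(N + 1)
    s = np.sin(np.outer(omega, n)) @ coeffs   # sum_n a_n * sin(n * Omega)
    c = np.cos(np.outer(omega, n)) @ coeffs   # sum_n a_n * cos(n * Omega)
    # arctan2 avoids division by zero; unwrapping removes 2*pi jumps so the phase
    # is continuous over the grid (equation (3) is only defined up to such jumps).
    return -N * omega + np.unwrap(2.0 * np.arctan2(s, c))
```

A quick sanity check: with a_1 = ... = a_N = 0 the filter reduces to a pure delay z^{-N}, and the sketch returns exactly -N*Omega.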
At the beginning, the PSO algorithm requires an objective function to judge the performance of each particle and to guide the search direction of the algorithm. To solve the phase response design problem for the recursive all-pass digital filter, the objective function (OF) is defined by
OF(a) = \int_{\Omega_{min}}^{\Omega_{max}} [\theta_d(\Omega) - \theta(\Omega, \mathbf{a})]^2 \, d\Omega,   (5)
where \theta_d(\Omega) is the desired phase response given by the designer, \theta(\Omega, a) is the actual phase response of the all-pass digital filter as described by (3), and \Omega_{min} and \Omega_{max} are the integral lower and upper bounds, respectively. The algorithm minimizes this objective function OF to achieve the optimal phase response design. In the original PSO algorithm, each particle is updated according to the velocity formula (6) and position formula (7):
v_{ij}^{k+1} = w v_{ij}^{k} + c_1 r_1 (p_{ij}^{k} - x_{ij}^{k}) + c_2 r_2 (g_{j}^{k} - x_{ij}^{k}),   (6)
x_{ij}^{k+1} = x_{ij}^{k} + v_{ij}^{k+1},   (7)
where k denotes the k-th iteration of the algorithm; v_{ij}, x_{ij}, and p_{ij} represent the velocity, position, and individual best position of the i-th particle in the j-th dimension, respectively; g_{j} represents the global best position in the j-th dimension among the population; w is called the inertia weight; c_1 and c_2 are two positive acceleration coefficients that pull each particle toward the individual best and the global best positions, respectively; and r_1 and r_2 are two uniformly distributed random numbers chosen from the interval [0, 1]. The PSO algorithm uses these two updating mechanisms to achieve the optimization. In this study, a modified PSO (MPSO) algorithm is applied to the phase response design of the recursive all-pass digital filter [11,15]. The difference between the original algorithm and the modification lies in the velocity formula. In the MPSO, the population is further divided into subpopulations at the beginning; for example, if the initial population includes 50 particles and is divided into five subpopulations, the first subpopulation is composed of particles number one to ten, the second contains particles number eleven to twenty, and so forth. The best particle of each subpopulation is recorded according to its objective function. Instead of the velocity formula (6), the MPSO algorithm uses the following improved version:
v_{ij}^{k+1} = w v_{ij}^{k} + c_1 r_1 (p_{ij}^{k} - x_{ij}^{k}) + c_2 r_2 (g_{j}^{k} - x_{ij}^{k}) + c_3 r_3 (l_{ij}^{k} - x_{ij}^{k}),   (8)
where l_{ij} is a new variable called the local best and represents the position of the best particle of the subpopulation in which the i-th particle is located, c_3 is an additional positive acceleration coefficient, and r_3 is a random number selected uniformly from the range [0, 1]. The design steps of the MPSO-based phase response design for the recursive all-pass digital filter can be summarized as follows. Data. Filter order N in (1) and (3), desired phase response \theta_d(\Omega) and integral lower and upper bounds \Omega_{min} and \Omega_{max} in (5), number of particles (population size) Ps, number of subpopulations S, number of generations G, inertia weight w, and positive constants c_1, c_2, and c_3 in (8). Goal. Derive a recursive all-pass digital filter with a phase response approaching the desired response \theta_d(\Omega). (2) Divide the population into S subpopulations by particle serial numbers. (3) If the prescribed number of iterations has been reached, the algorithm stops. (4) Evaluate the objective function (5) for each particle and record the related individual best, local best, and global best positions. Simulation Results In this section, we consider two different examples with linear-phase designs to show the applicability of our proposed method [2,5]. Comparisons with the general PSO are also performed.
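To make the update rules above concrete, here is a minimal Python sketch of the MPSO loop. It is an illustration only: the allpass_phase helper from the earlier sketch is assumed to be available, the integral in (5) is replaced by a sum over a frequency grid, and the coefficient initialization range is our own choice rather than anything specified by the paper.

```python
import numpy as np

def mpso_design(theta_d, omega, N, Ps=20, S=4, G=1000,
                w=0.8, c1=0.5, c2=0.5, c3=0.5, seed=0):
    """Minimal MPSO sketch for the all-pass phase design problem.

    theta_d : desired phase samples on the frequency grid omega
    N       : filter order; each particle is a coefficient vector [a_1, ..., a_N]
    Assumes Ps is divisible by S (particles are split into S equal subpopulations).
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-0.5, 0.5, size=(Ps, N))   # positions = candidate coefficient vectors
    v = np.zeros((Ps, N))                      # velocities
    sub = np.arange(Ps) // (Ps // S)           # subpopulation index of each particle

    def objective(a):
        # Discrete stand-in for (5): sum of squared phase errors over the grid
        return float(np.sum((theta_d - allpass_phase(a, omega)) ** 2))

    f = np.array([objective(p) for p in x])
    pbest, f_pbest = x.copy(), f.copy()        # individual best positions / values
    gbest = x[np.argmin(f)].copy()             # global best position

    for _ in range(G):
        # Local best of each subpopulation: the extra guidance term of (8)
        lbest = np.empty_like(x)
        for s in range(S):
            members = np.where(sub == s)[0]
            lbest[members] = x[members[np.argmin(f[members])]]
        r1 = rng.random((Ps, N))
        r2 = rng.random((Ps, N))
        r3 = rng.random((Ps, N))
        v = (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                   + c3 * r3 * (lbest - x))    # modified velocity update (8)
        x = x + v                              # position update (7)
        f = np.array([objective(p) for p in x])
        better = f < f_pbest
        pbest[better] = x[better]
        f_pbest[better] = f[better]
        gbest = pbest[np.argmin(f_pbest)].copy()
    return gbest
```

The default arguments mirror the settings reported for Example 1 below (Ps = 20, S = 4, G = 1000, w = 0.8, c_1 = c_2 = c_3 = 0.5); for a higher-order design such as Example 2 the population size and generation count would be increased accordingly.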
In the PSO and MPSO, the parameters of the algorithms are set to w = 0.8 and c_1 = c_2 = 0.5, and to w = 0.8 and c_1 = c_2 = c_3 = 0.5, respectively, for all of the following simulations. Example 1. In this example, the recursive all-pass filter is designed to approximate a desired Hilbert transformer whose phase response is given by (9), where the lower and upper bounds are set to \Omega_{min} = 0.04\pi and \Omega_{max} = 0.94\pi, respectively; N denotes the filter order and is chosen here as N = 10. The magnitude response of such a recursive all-pass filter is plotted in Figure 1. In addition, the population size and number of generations are set to Ps = 20 and G = 1000 for both the PSO and MPSO algorithms, and the number of subpopulations is set to S = 4 for the MPSO only. To verify the algorithm's robustness and efficiency, 20 independent runs with different initial conditions are executed for both algorithms; the design results are listed in Tables 1 and 2 and shown in Figures 2-4, respectively. Table 1 lists the objective function values of the 20 independent runs and clearly reveals that the results obtained by the MPSO are better than those obtained by the PSO for most of the runs. The mean and variance are \mu = 0.14269337 and \sigma^2 = 0.00035958 for the MPSO, and \mu = 0.16607883 and \sigma^2 = 0.00313722 for the PSO, respectively. To show the design outcomes, Figure 2 displays the convergence trajectories of the objective function for Run 1 of the proposed MPSO and the PSO algorithms. As can be seen from Figure 2, the MPSO algorithm has quicker convergence and a lower objective function value than the PSO algorithm. The phase responses and phase errors are further shown in Figures 3 and 4, respectively; a better result is obtained by the proposed method. In addition, all of the digital filter coefficients derived in Run 1 of the PSO and MPSO algorithms are listed in Table 2 for comparison. Example 2. This example designs a recursive all-pass digital filter with a desired sinusoidal phase response expressed by
\theta_d(\Omega) = 4(\cos\Omega - 1) - 52\Omega, \quad \Omega_{min} \le \Omega \le \Omega_{max},   (10)
where \Omega_{min} = 0 and \Omega_{max} = \pi are given. The corresponding magnitude response is shown in Figure 5. In the simulation, a digital recursive filter with N = 60 is adopted, and the population size and iteration number of the algorithms are set to Ps = 40 and G = 2000, respectively, for solving such a higher-order digital filter. Moreover, as in Example 1, the number of subpopulations is chosen as S = 4, and 20 independent runs with different sets of initial conditions are performed to certify the robustness of the algorithm. Table 3 lists a comparison of the objective function values evaluated by the proposed MPSO and the PSO for Run 1 to Run 20, respectively. Some of the derived digital filter coefficients are listed in Table 4 for comparison. Figures 6-8 then show the related design outcomes for Run 1 of the PSO and the proposed algorithm. Again, it can be concluded from these results that the proposed MPSO is superior to the general PSO in the phase response design of recursive all-pass digital filters. Conclusions This paper has developed a new design method for the phase response of recursive all-pass digital filters. A modified PSO (MPSO) algorithm is suggested to design the filter coefficients such that the obtained phase response approximates a given desired response. The difference between the MPSO and PSO is the modification of the velocity updating formula of the algorithm.
To improve the search capacity, a new local-best term for each subpopulation is introduced into the modified velocity formula. Finally, two different kinds of examples have been illustrated to verify the efficiency of the proposed method as compared with the general PSO algorithm. The simulation results show that the proposed MPSO achieves better design outcomes than the PSO in the phase response design of recursive all-pass digital filters.
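As a usage illustration only, the two sketches given earlier could be wired together with the Example 1 settings as follows. Since the desired Hilbert-transformer phase of equation (9) is not reproduced in this text, a common target of the form \theta_d(\Omega) = -N\Omega - \pi/2 is assumed here purely for demonstration; the frequency band 0.04\pi to 0.94\pi and the algorithm settings are taken from Example 1, while the grid density and all function names are our own.

```python
import numpy as np

# Example 1 settings: N = 10, Ps = 20, G = 1000, S = 4, band 0.04*pi to 0.94*pi
N = 10
omega = np.linspace(0.04 * np.pi, 0.94 * np.pi, 200)
theta_d = -N * omega - np.pi / 2   # assumed illustrative Hilbert-transformer target, not (9)

a_opt = mpso_design(theta_d, omega, N, Ps=20, S=4, G=1000,
                    w=0.8, c1=0.5, c2=0.5, c3=0.5)
max_err = np.max(np.abs(theta_d - allpass_phase(a_opt, omega)))
print("optimized coefficients:", np.round(a_opt, 4))
print("maximum phase error (rad):", max_err)
```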
3,281.6
2015-08-03T00:00:00.000
[ "Engineering" ]
Negative index fishnet with nanopillars formed by direct nano-imprint lithography In this paper we demonstrate the ability to fabricate fishnets by nanoimprinting directly into a pre-deposited three-layer metal–dielectric–metal stack, enabling us to pattern large areas in two minutes. We have designed and fabricated two different fishnet structures of varying dimensions using this method and measured their resonant wavelengths in the near-infrared at 1.45 μm and 1.88 μm. An important by-product of directly imprinting into the metal–dielectric stack, without separation from the substrate, is the formation of rectangular nanopillars that sit within the rectangular apertures between the fishnet slabs. Simulations complement our measurements and suggest a negative refractive index real part with a magnitude of 1.6. Further simulations suggest that if the fishnet were to be detached from the supporting substrate, a refractive index real part with a magnitude of 5 and a FOM of 2.74 could be obtained. Introduction Since its inception and experimental demonstration [1][2][3], the fishnet structure has been a staple of metamaterial research, particularly with regard to negative-index metamaterials (NIMs) [4][5][6]. The capability of the fishnet structure for producing negative (real part) values of permittivity (ε) and permeability (μ) that can be separately tuned by altering the physical dimensions of the structure [7,8] allows for a negative refractive index at frequencies where both ε and μ are simultaneously less than zero. The fishnet structure can be physically described as an array of perpendicularly interlocking thin metal wires stacked on top of a dielectric layer that separates the wires from another set of parallel metal wires, as shown in figure 1. This configuration allows the fishnet to be regarded as an effective LC circuit array for theoretical and modelling purposes. Wide metal slabs that partly make up the grid lattice enable the creation of anti-symmetric currents when they are aligned parallel to the direction of the magnetic field of the incident light. These currents in turn create a magnetic response that, near resonance, can produce negative values of μ. The values produced for ε depend on having metal wires that are narrower than the slabs used to induce the magnetic response and are orientated perpendicularly, in the same plane as the electric field of the incident light. By acting as a diluted metal with a lower plasma frequency, the wires can therefore produce a negative ε. As detailed in figure 1, the PMMA layer has a thickness d_t of 1000 nm at its thickest, which is the thickness of the PMMA as spun onto the SiO2 substrate; the fishnet sits on the top of the sample with the pillars pressed down into the PMMA layer to an imprint depth d_i of 300 nm for both designs; and the metal-dielectric-metal tri-layer of silver-magnesium fluoride-silver (Ag-MgF2-Ag), denoted t_m-d-m, has a total thickness of 110 nm (30 nm Ag, 50 nm MgF2, 30 nm Ag). Light incident normally on the top face of the fishnet is polarized as shown in the inset of figure 1, and figure 1(b) tabulates the permittivity values used. This ability to exhibit negative values of ε and μ allows fishnets to operate as a NIM, a characteristic that has been well reported at near-infrared (NIR) and visible wavelengths [9][10][11].
Common fabrication techniques for fishnets involve focused ion beam milling of a pre-deposited material stack [9] or electron beam patterning followed by the deposition of subsequent metal and dielectric layers [11], i.e. a 'lift-off' process. While these methods can undoubtedly produce high quality structures with resolution at the sub-100 nm scale, they carry associated disadvantages, primarily a limited pattern area, long patterning time and high costs. Furthermore, these techniques can result in structural asymmetry in the form of tapered sidewalls, introducing bianisotropic effects that reduce the effectiveness of the medium as a negative index material. A number of studies have investigated the use of nanoimprint lithography (NIL) as a technique for fabricating fishnet [12][13][14][15] and nano-hole arrays [16][17][18][19], as well as a variety of metamaterial structures [20,21]. In addition, a range of different nano-imprint processes have been studied [22][23][24], including direct imprinting [25][26][27][28]. In this paper we report our work in fabricating two designs, of differing dimensions, of a three-layer (single active layer) fishnet with nanopillars by imprinting directly into a metal-dielectric-metal stack. These designs, detailed in figure 1, are referred to as structure A and structure B throughout this paper. Analysis of the imprint technique and the quality of the fabricated structures is also given. Additionally, we find distinct optical responses at NIR wavelengths for each fishnet design and calculate values for the refractive index and figure of merit (FOM) using finite difference time domain (FDTD) simulations that complement our experimental measurements. Field plots showing the electric and magnetic field strengths in both reflectance and transmission modes have also been obtained from the simulations. We believe that this NIL technique can be adopted to create, easily and rapidly, low-cost, large-area, 3D metamaterial structures. Experiments and simulations A SiC nanoimprint stamp is fabricated by electron-beam lithography using hydrogen silsesquioxane (HSQ) as the resist and, post development, as a mask for inductively coupled plasma etching. The cleaved SiC substrate used was 500 μm thick and measured 1 cm² in area, with a square patterned area of 3 mm². After etching to a depth of 1 μm, the HSQ is removed by hydrofluoric acid (HF), after which the sample is treated in a solution of heptane and the silane F13-OTCS ((tridecafluoro-1,1,2,2-tetrahydrooctyl)-trichlorosilane), a common hydrophobic non-stick coating for silicon substrates. This treatment prevents the stamp adhering to the target sample during imprinting and eases separation after patterning. It is important to achieve vertical sidewalls following the etch process, in order to avoid distortion of the pattern dimensions in the fabricated structure during the imprinting process. The target sample consists of a polished fused silica substrate that is spin-coated with a 1 μm thick layer of polymethyl methacrylate (PMMA). A metal-dielectric-metal tri-layer of Ag (30 nm), MgF2 (50 nm) and Ag (30 nm) is then electron-beam evaporated onto the PMMA. Silver has been selected in preference to gold because of its lower associated optical losses [3]. The SiC stamp is imprinted directly into the tri-layer using a modified Specac hydraulic press, applying a half-tonne force (4.9 kPa pressure) at room temperature.
The etched SiC pillars define the structure when imprinted and displace part of the metal-dielectric-metal tri-layer into the PMMA beneath, forming rectangular pillars in the apertures of the fishnet. The PMMA beneath the metal-dielectric stack is compressed when the nanopillars are displaced from the fishnet. The compression of PMMA has been shown to change the associated refractive index [29]. The behaviour of PMMA during the imprinting process has also been reported [30]. From [29] we estimate that, with the imprint force used, the refractive index real part of the compressed PMMA may increase by approximately 0.05, although it should be noted that the values stated in [29] are given for shorter wavelengths than measured here. This change is noted, but the variation of index with pressure is not modelled. The imprinting process itself typically takes only two minutes and the stamp can be cleaned in acetone afterwards for re-use. It was found in our experiments that the stamps can be used at least ten times without damage occurring. While the etched pillars remained intact throughout, continued use of a stamp for more than ten imprints resulted in the gradual formation of small cracks in the SiC. These cracks eventually resulted in fragments breaking from the substrate, usually near the cleaved edges. The depth of the pillars can be varied with the imprint force used, but we have found that a half-tonne force (4.9 kPa pressure) results in a depth in the PMMA of approximately 300 nm from the fishnet on top. The NIL process is often enhanced by reducing the viscosity of the polymer that is to be patterned by heating the substrate [31]. However, we have observed that heating the PMMA to improve its flow causes the metal and dielectric layers on top of the PMMA to crack and break up, making lithography at room temperature desirable. The fabricated fishnet and nano-pillar structure exhibits good uniformity across the patterned 3 mm² area, with only small, localized regions at the four corners of the imprinted area showing breaks in the metal-dielectric-metal tracks. Close inspection of the imprinted pattern, shown in figure 2, shows uniform, continuous silver wires. The apertures defined by the imprint are consistent in dimension and show rounding at the corners. The edges of the wires exhibit roughness resulting from the cutting of the metal and dielectric by the SiC pillars. Nano-sized cracks are also visible on the imprinted nano-pillars, the regions that are in direct contact with the nanoimprint stamp. Unlike other techniques that utilize NIL to pattern a resist or polymer mask, imprinting directly into the metal-dielectric-metal stack removes the need for etching [12,13]. Reflectance and transmission measurements were obtained using a Bruker Hyperion microscope attached to a Vertex Fourier transform infrared spectrometer, configured for NIR wavelengths and using a CaF2 beamsplitter. Reflectance measurements were taken by illuminating the top of the fishnet and measuring the reflected light, while transmission was measured by passing light through the silica substrate and measuring from above the fishnet. The light incident on the sample was polarized as indicated in figure 1 using a ZnSe crystal polarizer. The reflection measurements were normalized using an unpatterned 30 nm thick sheet of silver on a fused silica substrate as a background, and transmission measurements were normalized against a fused silica substrate with 1 μm of PMMA spun on top.
Using the dimensions and constituent material properties of the two different fabricated structures, we have simulated reflection and transmission spectra using the FDTD method to complement the experimental results. The electromagnetic properties of the imprinted fishnet structure were studied by computer simulation using Lumerical FDTD Solutions software. The transmission and reflection characteristics were calculated from a single unit cell with a polarized plane wave source whose polarization-defining electric field component, E_x, is perpendicular to the long axis of the fishnet slab. The imprinted fishnet stack was illuminated by a plane wave source over the wavelength range from 1000 nm to 4000 nm. The simulation region was a rectangular parallelepiped of 660 × 660 × 2000 nm³, with a conformal mesh region self-adapted to the structure. A very fine mesh size of 3 nm was used within the metal layers and was adequate for simulation convergence. For simulations of just the fishnet portion of the structure, the calculation region is set with a background index of air (n = 1). Depending upon the polarization of the source, the periodic boundaries in the x- and y-directions are replaced by imposing an anti-symmetric/symmetric condition. A perfectly matched layer was imposed on the edge planes of the simulation area, perpendicular to the direction of source propagation. The refractive index of the dielectric spacer material, MgF2, was set to 1.38. A Drude model was used for the dielectric function of Ag, with a plasma frequency of 9.0 eV and a damping frequency of 0.054 eV [32]. The damping frequency was increased by a factor of three compared with that of bulk silver to account for the additional surface scattering losses in real films [33]. The silica substrate was considered to be semi-infinite in the simulations. Results and discussion There is close agreement between the experimental and simulated reflection and transmission spectra, shown in figure 3. Resonances are observed experimentally at 1.45 μm and 1.88 μm respectively, as the critical dimensions of the structure are increased in scale. The experimental transmission spectra show a trough at approximately 2.7 μm with near-zero transmission for all three structural dimensions. This trough can reasonably be attributed to the excitation of the O-H ion stretch vibration at 2.7 μm in the fused silica substrate, not to the fishnet and pillar structures or the PMMA that supports them. The magnitude of the experimental transmission at negative index wavelengths is, at maximum, approximately 40% and 65% for structures A and B respectively. Both the experimental and simulated transmission and reflection spectra are given in absolute, not arbitrary, units. It should be noted that the metal-dielectric-metal pillars that are depressed into the PMMA fill the areas beneath the apertures of the fishnet, meaning that transmitted light must pass through the sidewalls of the rectangular PMMA holes, as well as by extraordinary transmission through the sub-wavelength hole array [34,35]. The divergence between experimental and simulated results below 1.5 μm can be attributed to Fabry-Perot resonances between the fishnet, nanopillars and substrate, which are accentuated in the simulations due to impedance mismatch and are seen in other reported work concerning fishnets [9]. From the simulated spectra we can retrieve the wavelength-dependent refractive index and FOM values [36] of the fabricated structure, shown in figure 4.
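For orientation, the Drude dielectric function quoted above (plasma frequency 9.0 eV, bulk damping 0.054 eV tripled for thin-film losses) is easy to evaluate; the short Python sketch below does so. It is our own illustration, not the authors' code: the background permittivity eps_inf = 1 and the e^{-i\omega t} sign convention are assumptions and may differ from the model actually used in the cited reference or in the FDTD software.

```python
import numpy as np

# Drude parameters for Ag as quoted in the text
E_P = 9.0           # plasma frequency, eV
GAMMA = 3 * 0.054   # damping frequency, eV (tripled to mimic thin-film surface scattering)
EV_NM = 1239.84     # photon energy (eV) x wavelength (nm)

def drude_epsilon(wavelength_nm, eps_inf=1.0):
    """Relative permittivity of Ag from a simple Drude model (sketch).

    eps_inf = 1.0 is an assumption of this sketch; a different background
    permittivity would shift the real part slightly.
    """
    E = EV_NM / np.asarray(wavelength_nm, dtype=float)   # photon energy in eV
    return eps_inf - E_P**2 / (E**2 + 1j * GAMMA * E)

# Example: permittivity at the two measured resonant wavelengths
for lam in (1450.0, 1880.0):
    eps = drude_epsilon(lam)
    print(f"lambda = {lam:.0f} nm: eps = {eps.real:.1f} + {eps.imag:.2f}j")
```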
The FOM is defined as the ratio Re(n)/Im(n), where Re(n) is the real part of the refractive index and Im(n) is the imaginary part. The modified retrieval method for asymmetric structures has been used to retrieve the complex effective index of the structure [37][38][39][40]. It should be noted that the retrieval process is not, in general, trivial, especially when the metamaterials of interest are anisotropic or bianisotropic [41,42]. If the optical path length across the unit cell of a structure is not small, then the effective medium limit is not applicable even for a symmetric structure. If the unit cell is not symmetric in the direction of propagation, then the standard retrieval procedure fails to provide a unique answer for n [40]. To calculate the generalized scattering parameters, we consider the two values of the reflection coefficient when illuminating from free space (S11) or through the substrate (S22). The differences between the magnitudes of S11 and S22 are modest. However, there is a large contrast in the phases of S11 and S22, implying very different properties for the structure depending on which side of the unit cell is first impacted by the incoming wave (both the phases and magnitudes of S12 and S21 are identical) [36]. The fishnet and nanopillars of structures A and B on a 1 μm thick layer of PMMA and the silica substrate were simulated to obtain the real and imaginary parts of the refractive index near the respective resonant wavelengths, shown in figure 4. For structure A, a refractive index real part of −0.24 was found at 1.53 μm; this decreased further, reaching a maximum magnitude of −0.57 at 1.9 μm. With structure B, a refractive index of −0.7 was observed at 2.35 μm, decreasing to −1.5 at 3.35 μm. The PMMA and depressed pillars are responsible here for the increasing material losses and for limiting the magnitude of n. The broad response of Re(n) in figure 4 is consistent with previously reported periodicity effects in metamaterials with reduced symmetry [40]. The FOM for the as-fabricated structure is not presented because discontinuities due to multiple refractive index changes in the material make the calculation unreliable. It should be remembered that any optimization of the pillar dimensions [43] using the NIL technique that we have described will also alter the dimensions of the fishnet, since both are defined by the same nanoimprint stamp. The calculation of such a negative refractive index and FOM, coupled with the ease with which the structures can be fabricated by NIL, increases the desirability of detaching the fishnet for transfer to an alternative substrate or, indeed, for suspension in air. While we have not detached the fishnet from the PMMA, depositing a sacrificial soluble layer or imprinting onto PDMS remain plausible methods for achieving pattern transfer. Simulated field plots, displayed in figure 5, show the electric field distribution through a single unit cell of the fishnet, as well as the interaction between the fishnet and the pillars imprinted in the PMMA beneath. All the plots are calculated at the wavelength of magnetic resonance, 1.45 μm or 1.88 μm for structures A and B respectively. Figure 5(a) shows a unit cell of the fishnet and nanopillar in the x-y plane at normal incidence. The wide metal tracks exhibit a strong electric field induced by the magnetic field of the incident wave. This field distribution does not correspond to Bragg-scattered surface waves.
The field plots in figures 5(b) and (c) show electromagnetic coupling between the fishnet and the pillars, through the PMMA. As a means to gauge the effect of the PMMA, nanopillars and silica substrate, simulations modelling only the fishnet portion of structures A and B were also performed and are shown in figure 6. The simulation that modelled solely the fishnet of structure A (and not the nanopillars on PMMA) shows a refractive index of −4 and a FOM of 2.49, while the equivalent values for structure B are −5 and 2.74 respectively. These values are comparable with those quoted for single active layer fishnets that have been reported previously [9,10,15]. The results shown in figure 6 suggest that if the fishnet were to be detached from the supporting PMMA and substrate, the negative real part of the refractive index would increase in magnitude, making a more effective metamaterial. Conclusions We have demonstrated a rapid and effective method for fabricating negative index fishnets. Unlike conventional fishnet fabrication techniques based on direct-write electron-beam lithography, imprinting directly into a metal-dielectric-metal film stack does not require an etch process and enables large-area patterns to be produced. The structural quality and uniformity of the fishnets are good, with some roughness noted at the edges of the apertures. We also note that damage, in the form of cracking, is inflicted on the regions of metal directly in contact with the nanoimprint stamp. Simulations have shown that the refractive index and FOM values of our fishnets are comparable with those previously reported for single active layer fishnets. We believe that the NIL technique presented in this paper can be adapted for use in pattern transfer, by lifting the imprinted fishnet from a sacrificial layer and moving it to a new substrate, such as PDMS. Imprinting into a greater number of metal-dielectric multi-layers, so as to increase the magnitude of the negative refractive index, also remains feasible.
4,282.6
2014-12-10T00:00:00.000
[ "Physics" ]