Mining a database of single amplified genomes from Red Sea brine pool extremophiles—improving reliability of gene function prediction using a profile and pattern matching algorithm (PPMA)
Reliable functional annotation of genomic data is the key step in the discovery of novel enzymes. Intrinsic sequencing data quality problems of single amplified genomes (SAGs) and poor homology of novel extremophile genomes pose significant challenges for the attribution of functions to the coding sequences identified. The anoxic deep-sea brine pools of the Red Sea are a promising source of novel enzymes with unique evolutionary adaptation. Sequencing data from Red Sea brine pool cultures and SAGs are annotated and stored in the Integrated Data Warehouse of Microbial Genomes (INDIGO). Low sequence homology of annotated genes (no similarity for 35% of these genes) may translate into false positives when searching for specific functions. The Profile and Pattern Matching (PPM) strategy described here was developed to eliminate false positive annotations of enzyme function before progressing to labor-intensive hypersaline gene expression and characterization. It utilizes InterPro-derived Gene Ontology (GO) terms (which represent enzyme function profiles) and annotated relevant PROSITE IDs (which are linked to amino acid consensus patterns). The PPM algorithm was tested on 15 protein families, which were selected based on scientific and commercial potential. An initial list of 2577 enzyme commission (E.C.) numbers was translated into 171 GO-terms and 49 consensus patterns. A subset of INDIGO sequences consisting of 58 SAGs from six different taxa of bacteria and archaea was selected from six different brine pool environments. Those SAGs code for 74,516 genes, which were independently scanned for the GO-terms (profile filter) and PROSITE IDs (pattern filter). Following stringent reliability filtering, the non-redundant hits (106 profile hits and 147 pattern hits) are classified as reliable if at least two relevant descriptors (GO-terms and/or consensus patterns) are present. Scripts for annotation, as well as for the PPM algorithm, are available through the INDIGO website.
INTRODUCTION
Discovery of extremophilic enzymes has developed into a major driver for the biotech industry. Although many industrially relevant enzymes were isolated from organisms growing at high temperature, high salt concentration, or in environments contaminated with organic solvents, significant challenges and limitations exist for bio-prospecting of extremophilic enzymes (Liszka et al., 2012). It was estimated that as few as 0.001-0.1% of microbes in seawater are currently cultivable (Amann et al., 1995), and until recently the bottleneck of cultivation not only biased the view of microbial diversity but also limited the appreciation of the microbial world in general (Hugenholtz and Tyson, 2008). Novel culture-independent techniques allow the identification of thousands of novel protein motifs, domains, and families from different environments (Yooseph et al., 2007). Despite the vast expectations, metagenomic data have not yet led to the expected boost of biotechnology (Chistoserdova, 2010), mostly because they suffer from short read lengths, a low probability of identifying rare populations (below 1%) (Kunin et al., 2008), and difficulties in assembling larger contigs of genetic material for members of complex communities. Single-cell genomics (Lasken, 2007) circumvents this problem, and larger contigs from uncultured organisms can be analyzed. A major challenge in mining genomic data of uncultured organisms is a lack of homology to genes of established organisms, resulting in limited reliability of gene annotation.
A promising source of novel organisms are the deep-sea anoxic brine pools in the northern part of the Red Sea, formed by tectonic shifts (Gurvich, 2006). Interstitial brine was expelled by tectonic movements that allowed re-dissolution of evaporitic deposits, and/or phase separation due to temperature variations (Cita, 2006; Hovland et al., 2006). The salt-enriched waters drifted to the seafloor and accumulated in geographical depressions, where the brine pools remain stable because of their high density (DasSarma and Arora, 2001). The combination of different extreme physicochemical parameters makes the deep-sea anoxic brine pools one of the most remote, challenging, and extreme environments on Earth, while remaining one of the least studied (Antunes et al., 2011). The Red Sea brine pools are extreme in salinity and show a characteristic sharp brine-seawater interface with steep gradients of dissolved O2, density, pH, salinity, and temperature (Emery et al., 1969; Ross, 1972; Anschutz and Blanc, 1995). Except for the connected brine pools Atlantis II, Chain, and Discovery Deep (Backer and Schoell, 1972; Faber et al., 1998), environmental conditions vary drastically between the pools, e.g., temperatures range from 22.6°C (Oceanographer) to 68.2°C (Atlantis II) and the NaCl concentration varies from 2.6 M (Suakin) to 5.6 M (Discovery) (Antunes et al., 2011). While the brine pools were detected more than 65 years ago by the Swedish RV Albatross expedition (1947-1948) (Bruneau et al., 1953), microbiological analysis did not start until the late 1960s. The first sampling led to the assumption that life is not possible under the harsh environmental conditions of the brines (Watson and Waterbury, 1969). The search for life in those extreme habitats intensified continuously after the high scientific and economic potential of halophilic organisms became evident (Karan et al., 2012). Since 2010, several sampling expeditions to the Red Sea brine pools have provided a large amount of genomic data, which are collected and annotated at KAUST within the recently described Integrated Data Warehouse of Microbial Genomes (INDIGO) (Alam et al., 2013). Data stored in INDIGO will stepwise become publicly available.
Analysis and management of next-generation whole genome sequencing (NGS) data utilizes a comprehensive package of software applications for assembly of sequence reads, mapping to a reference genome, variant/SNP calling and annotation, transcript assembly/quantification, and identification of sRNAs (Horner et al., 2010; Garber et al., 2011; Pabinger et al., 2014), yet further improvements are required (Dolled-Filhart et al., 2013). Large-scale annotation of DNA sequences with low homology to genes of experimentally verified function may be flawed and hence represents a major drawback for biomining. Homology-based annotation faces one intrinsic issue: annotation reliability and protein diversity are reciprocal. The situation is complicated by error propagation. The function of the encoded protein has been validated experimentally for only a small and continuously diminishing fraction of the gene sequences available. Initially, functions of novel genes were annotated based on gene sequences with experimentally verified function. Based on these data, more genes were annotated, and so on. While any two successive proteins in this chain are highly similar, the last annotated gene and the experimentally verified source may possess distinct sequences and functions. In comparison to genomic sequencing, experimental characterization of single amplified genome (SAG) gene products requires gene synthesis, expression, purification, and functional characterization and therefore is several orders of magnitude more time consuming. Hence, when genomic data are searched for a desired function, false positive results from flawed annotation are much more problematic than false negatives (due to incomplete annotation). This is particularly true for genes from extremophilic organisms, which require slow-growing expression systems. Here we present a strategy to minimize false positive identification of the gene product's function. The Profile and Pattern Matching (PPM) algorithm described below collates complementary information available from (a) InterPro-derived Gene Ontology (GO) terms (Ashburner et al., 2000), which connect an enzyme's function to amino acid sequence profiles, and (b) annotated PROSITE IDs (Sigrist et al., 2013), which are linked to an amino acid consensus pattern. This PPM algorithm was tested on 15 protein families of scientific or commercial interest. The strict PPM algorithm initially extracted the most reliably annotated genes, which in this example represent about 1.5% of the genes in the database. Subsequent removal of incomplete genes followed by PPM selection led to a further condensation of gene hits (0.1% of genes in the database). A final ranking extracted 11 genes as the most likely candidates to code for one of the protein of interest (POI) functions.
SAMPLE COLLECTION
All samples were collected during leg 2 of the RV Aegaeo WHOI/AUC-KAUST Red Sea Cruise in October/November 2011. Samples were taken at different depths and locations in the Red Sea, in and outside the brine pools, as well as from sediments. For all brine pools, samples were taken in the brine itself, in the sediment, and at different depths of the brine-seawater interface (Eder et al., 2001). In total, 46 casts yielding 7030 L of water were performed, and seven sediment samples were taken. The collected liquid samples were immediately filtered using a tangential flow filtration (TFF) system, concentrated, and immediately afterwards stored at -80°C. During the sampling, different chemical parameters including salinity (conductivity) and temperature were measured. The five brine pools sampled were Kebrit Deep, Nereus Deep, Atlantis II Deep, Discovery Deep, and Erba Deep (Backer and Schoell, 1972; Searle and Ross, 1975; Karbe, 1987; Hartmann et al., 1998).
SINGLE AMPLIFIED GENOME GENERATION
For the production of SAGs from single cells, the "SCGC SAG generation service" (cat. no. S-101) of the Bigelow Laboratory Single Cell Genomics Center, which is part of the Bigelow Laboratory for Ocean Sciences in Boothbay Harbor, Lincoln County, Maine, United States, was used. The service includes initial sample evaluation for FACS suitability, separation of individual cells into wells of a 384-well plate, cell lysis, and single-cell multiple displacement amplification (MDA).
WHOLE GENOME SEQUENCING AND ASSEMBLY
The whole genome sequencing was performed at the Bigelow Laboratory Single Cell Genomics Center using the "Prokaryote SAG whole genome sequencing" service (cat. no. S-014). The service includes sequencing library preparation, genomic sequencing, de novo assembly, and assembly quality control. Service products include contig fasta files and assembly statistics. Assemblies of the single-cell amplified genomes (SAGs) were generated using a pipeline that employs a choice of assemblers designed for single-cell sequencing data, including VelvetSC (Chitsaz et al., 2011), SPAdes (Bankevich et al., 2012), and IDBA-UD (Peng et al., 2012), along with several pre- and post-assembly data quality checks using Trimmomatic (Lohse et al., 2012). IDBA-UD was benchmarked as the overall best assembler for our SAGs, as it reconstructed longer contigs with higher accuracy relative to the reference genome of Nitrosopumilus maritimus SCM1 (Könneke et al., 2005).
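As a point of reference for the benchmarking described above, the following is a minimal Python sketch of two standard post-assembly statistics (total assembly length and N50) computed from a contig FASTA file. The file name is a placeholder; this illustrates the metrics only and is not the Bigelow pipeline itself.

```python
# Minimal sketch of post-assembly contig statistics (total length, N50),
# the kind of quality metrics used to compare assemblers such as
# IDBA-UD, SPAdes, and VelvetSC. The file name is illustrative only.

def contig_lengths(fasta_path):
    """Yield the length of each contig in a FASTA file."""
    length = 0
    with open(fasta_path) as handle:
        for line in handle:
            if line.startswith(">"):
                if length:
                    yield length
                length = 0
            else:
                length += len(line.strip())
    if length:
        yield length

def n50(lengths):
    """N50: the contig length at which half of the total assembly
    length is contained in contigs of this size or larger."""
    lengths = sorted(lengths, reverse=True)
    total, running = sum(lengths), 0
    for contig_len in lengths:
        running += contig_len
        if running >= total / 2:
            return contig_len
    return 0

lengths = list(contig_lengths("sag_assembly_contigs.fasta"))  # hypothetical file
print(f"contigs: {len(lengths)}, total bp: {sum(lengths)}, N50: {n50(lengths)}")
```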
DATASET
The data used in this work consisted of 87 SAGs covering 16 different taxonomic groups, sampled in 11 different environments. A total of 26,626 contigs covering 111,269 ORFs and containing 79.8 Mbp genomic information (Table 1) were analyzed.
Annotation of the dataset
The assembled contig sequences were integrated into the INDIGO data warehouse (Alam et al., 2013) for microbial genomes. INDIGO is a dynamic system using the InterMine framework (Smith et al., 2012), one of the highest-benchmarked data warehouses (Triplet and Butler, 2013). INDIGO allows Automatic Annotation of Microbial Genomes (AAMG), extensive query building for annotation integration, creation of customized feature/attribute/entity lists, and enrichment analysis for GO concepts, which are crucial steps of the following analysis. Using INDIGO, the assembled contig sequences were (i) annotated, (ii) converted into an XML schema, and (iii) implemented into the data warehouse. Figure 1 gives an overview of the workflow (Alam et al., 2013). Assignments of GO-terms are largely independent of PROSITE IDs: GO-terms emerge from domain associations provided by InterPro (Quevillon et al., 2005) (PROSITE being only one of several domain resources), whereas PROSITE consensus patterns are predicted by the PS_Scan tool (De Castro et al., 2006).
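To illustrate how the two descriptor types can be collected per gene, the following is a minimal Python sketch that parses an InterProScan-style tab-separated output. The column positions assumed here (member database in column 4, signature accession in column 5, pipe-separated GO terms in column 14 when InterProScan is run with --goterms) and the file name are assumptions for illustration, not the actual AAMG implementation.

```python
# Minimal sketch: collect GO-terms (profile) and PROSITE IDs (pattern)
# per gene from an InterProScan-style TSV file. Column layout is assumed.

from collections import defaultdict

go_terms = defaultdict(set)      # gene id -> set of GO terms  (profile)
prosite_ids = defaultdict(set)   # gene id -> set of PROSITE IDs (pattern)

with open("aamg_interproscan.tsv") as handle:        # hypothetical file
    for line in handle:
        cols = line.rstrip("\n").split("\t")
        if len(cols) < 5:
            continue                                 # skip malformed lines
        gene, analysis, signature = cols[0], cols[3], cols[4]
        if analysis == "ProSitePatterns":
            prosite_ids[gene].add(signature)
        if len(cols) > 13 and cols[13]:              # GO column, if present
            go_terms[gene].update(cols[13].split("|"))
```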
Automatic annotation of microbial genomes (AAMG) pipeline.
Functional annotation of archaeal or bacterial genomes is available via the INDIGO website interface (http://www.cbrc.kaust.edu.sa/indigo/mymine.do?subtab=aamg). Completed genome annotations may be included in the INDIGO database. This enables application of the scripts presented in this work to any novel genetic data.
PHYLOGENETIC ANALYSIS
The evolutionary history was inferred using the Neighbor-Joining method (Saitou and Nei, 1987). All illustrated trees are drawn to scale, with branch lengths in the same units as those of the evolutionary distances used to infer the phylogenetic tree. The evolutionary distances were computed using the Poisson correction method (Zuckerkandl and Pauling, 1965) and are in the units of the number of amino acid substitutions per site. All positions containing gaps and missing data were eliminated. Evolutionary analyses were conducted in MEGA6 (Tamura et al., 2013).
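A minimal sketch of this tree-building procedure, assuming Biopython in place of MEGA6: columns containing gaps or missing data are removed (complete deletion), pairwise Poisson-corrected distances d = -ln(1 - p) are computed from the fraction p of differing sites, and a Neighbor-Joining tree is built. The aligned sequences are invented placeholders.

```python
# Minimal sketch: Poisson-corrected distances + Neighbor-Joining tree,
# approximating the MEGA6 settings described above (complete deletion).

import math
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor

aln = {  # aligned amino acid sequences (hypothetical placeholders)
    "hit_A": "MKVLAD-GH",
    "hit_B": "MKVLSDEGH",
    "hit_C": "MRVLSDEGQ",
}
names = list(aln)

# Complete deletion: keep only columns without gaps or missing data.
cols = [i for i in range(len(aln[names[0]]))
        if all(aln[n][i] not in "-?X" for n in names)]

def poisson_distance(a, b):
    """d = -ln(1 - p), with p the fraction of differing retained sites."""
    diffs = sum(a[i] != b[i] for i in cols)
    return -math.log(1 - diffs / len(cols))

# Lower-triangular distance matrix (diagonal = 0), as Biopython expects.
matrix = [[poisson_distance(aln[names[r]], aln[names[c]])
           for c in range(r)] + [0.0] for r in range(len(names))]
tree = DistanceTreeConstructor().nj(DistanceMatrix(names, matrix))
print(tree)
```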
PPM METHODOLOGY
The PPM algorithm was automated by including two new scripts into INDIGO, which are publicly available from the homepage.
AutoTECNo: automated translation of E.C. numbers
The E.C. number translator (AutoTECNo) automatically converts a list of given enzyme commission (E.C.) numbers into GO-terms (Kanehisa and Goto, 2000) as well as PROSITE IDs, using open-source PROSITE files (Sigrist et al., 2002). Preliminary, transferred, and deleted E.C. numbers are ignored. AutoTECNo provides two XML scripts for the independent profile and pattern searches via INDIGO. AutoTECNo is available at the following website: http://www.cbrc.kaust.edu.sa/ppma/ec2gops.html.
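The following is a minimal Python sketch of the kind of translation AutoTECNo performs, restricted here to the E.C.-to-PROSITE direction. It assumes a local copy of the ExPASy ENZYME flat file (enzyme.dat), whose entries carry "ID" lines with the E.C. number and "PR   PROSITE; PSxxxxx;" cross-references; handling of GO-terms and of obsolete entries is simplified, and this is not the actual AutoTECNo code.

```python
# Minimal sketch: map E.C. numbers to PROSITE IDs from enzyme.dat,
# skipping transferred/deleted entries, with support for flexible
# queries such as "1.1.1.*" as accepted by AutoTECNo.

from collections import defaultdict

ec_to_prosite = defaultdict(set)
current_ec = None

with open("enzyme.dat") as handle:                 # hypothetical local copy
    for line in handle:
        if line.startswith("ID   "):
            current_ec = line[5:].strip()
        elif line.startswith("DE   ") and ("Transferred" in line
                                           or "Deleted" in line):
            current_ec = None                      # ignore obsolete entries
        elif line.startswith("PR   PROSITE;") and current_ec:
            ps_id = line.split(";")[1].strip()
            ec_to_prosite[current_ec].add(ps_id)

def translate(ec_query):
    """Support flexible E.C. numbers such as '1.1.1.*'."""
    if ec_query.endswith("*"):
        prefix = ec_query[:-1]
        return {ps for ec, ids in ec_to_prosite.items()
                if ec.startswith(prefix) for ps in ids}
    return ec_to_prosite.get(ec_query, set())
```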
PPM processor: automated extraction and ranking of the most reliable hits
The PPM processor requires one or more tab-separated spreadsheets (.tsv) of the independent profile analysis (via GO-terms) and/or pattern analysis (via PROSITE IDs) as input files. The processor groups genes into sets according to their profile and pattern distribution. The resulting list is ranked according to the number of profile and pattern combinations. The PPM processor is available at the following website: http://www.cbrc.kaust.edu.sa/ppma/indigoTbl2PSgoSets.html.
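A minimal sketch of the set-building and ranking logic just described, assuming per-gene descriptor dictionaries like the go_terms/prosite_ids structures sketched earlier; the names and data structures are illustrative, not the actual PPM processor code.

```python
# Minimal sketch of the PPM-processor logic: genes are grouped into sets
# that share the same combination of descriptors, assigned to one of the
# three classes (profile / pattern / profile and pattern), and ranked by
# the number of associated descriptors (more descriptors = more reliable).

from collections import defaultdict

def ppm_sets(go_terms, prosite_ids):
    sets = defaultdict(list)   # (frozen GO set, frozen PROSITE set) -> genes
    for gene in set(go_terms) | set(prosite_ids):
        key = (frozenset(go_terms.get(gene, ())),
               frozenset(prosite_ids.get(gene, ())))
        sets[key].append(gene)

    def classify(key):
        gos, patterns = key
        if gos and patterns:
            return "profile and pattern"
        return "profile" if gos else "pattern"

    ranked = sorted(sets.items(),
                    key=lambda kv: len(kv[0][0]) + len(kv[0][1]),
                    reverse=True)
    return [(classify(k), sorted(k[0]), sorted(k[1]), genes)
            for k, genes in ranked]
```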
The PPM workflow, starting from a non-annotated genome
First, an assembled genome is annotated using the AAMG pipeline as part of the INDIGO data warehouse. Second, the E.C.-number-based POI list is translated into profile and pattern values (GO-terms and PROSITE IDs) using AutoTECNo. The resulting XML lists (of pattern and profile values) are separately imported into the INDIGO data warehouse to analyze any listed genome at the following URL: http://www.cbrc.kaust.edu.sa/indigo/importQueries.do?querybuilder=yes. The two resulting tab-separated spreadsheets can be uploaded into the PPM processor to generate three PPM sets of genes: (i) the profile set, (ii) the pattern set, and (iii) the profile and pattern set.
PPM: PROFILE AND PATTERN MATCHING FOR FUNCTION IDENTIFICATION
Analysis of the huge amount of data resulting from next-generation whole genome sequencing (NGS) requires modern bioinformatic tools. Comparisons of annotation pipelines reveal a surprising level of uncertainty in gene annotation. Annotation of the same genome (strain TY2482) of the enterohemorrhagic, diarrhea-causing, shiga-toxin-producing E. coli O104 (Rohde et al., 2011) by several groups allowed a comparison of the three main annotation pipelines: Broad, BG7, and RAST. While this uncertainty might not impact in silico analysis, e.g., for identification of pathways, a substantial amount of false positives can lead to costly failures in experimental bioprospecting campaigns. Among the descriptors INDIGO annotation associates with genes, two are particularly suited to evaluate the correct assignment of an enzymatic function to a gene product: (i) the GO-term and (ii) the PROSITE ID. The GO project describes genes (gene products) using terms from three structured vocabularies: biological process, cellular component, and molecular function. Correspondingly, a list of GO-terms associated with a gene can be seen as the gene's profile. A PROSITE ID relates to a single consensus pattern, an "amino acid sequence signature" that characterizes protein function. Genes from INDIGO with matching function descriptions of GO-term and PROSITE ID(s) should represent a subset of genes with highly reliable annotation. To extract such genes based on an input list of E.C. numbers of interest, we developed the protein PPM algorithm.
From proteins of interest to bioinformatics descriptors
Initially, we established a set of proteins that potentially are of scientific and/or commercial interest. The protein classes selected include a variety of hydrolases, ene reductases, dehydrogenases, and carbonic anhydrases (CAs), as well as a range of metalloproteins, porins, and potentially new aminoacyl-tRNA synthetases. The selected 15 protein families of interest (POI families) are summarized in Table 2. Bioinformatic matching of the POIs vs. the INDIGO database requires a translation of the POI list into terms of the selected descriptors (GO-terms and PROSITE IDs). For enzymes, E.C. numbers can be associated with the enzyme family name as well as with GO-terms and PROSITE IDs and therefore can be used to interconvert these terms. The POI list was translated into E.C. numbers using BRENDA (Braunschweig Enzyme Database) (Schomburg et al., 2013). Of the resulting 2577 E.C. numbers (Table S1), 434 were non-redundant. Removal of preliminary/transferred and deleted E.C. numbers provided a final list of 265 E.C. numbers (Table S2). The list of E.C. numbers was converted into profiles (GO-terms) and patterns (PROSITE IDs). For gene expression products without enzymatic function, like aquaporins and pyl-tRNA, the respective GO-terms and PROSITE IDs were added manually. The resulting protein profile filter consists of 171 non-redundant GO-terms (BRENDA) (Table S3). The independent pattern filter consisted of 52 non-redundant PROSITE consensus patterns (Sigrist et al., 2013). Three consensus patterns (PS00198, PS00455, PS00143) were removed because of their low specificity (consensus pattern specificity can be derived from the information available at the PROSITE web page: http://prosite.expasy.org), resulting in a final pattern list of 49 consensus patterns (Table S4).
AutoTECNo: automated translation of E.C. numbers.
The web-based AutoTECNo script simplifies conversion of POI classes into the two bioinformatic PPM descriptors described above. A user may enter one or more distinct or flexible E.C. numbers, which are automatically converted into GO-terms and PROSITE IDs. A numeric value is required for the first three digits of flexible E.C. numbers (e.g., 1.1.1.*). AutoTECNo automatically ignores preliminary, transferred, and deleted E.C. numbers. The AutoTECNo output provides two XML scripts, one for each of the independent profile and pattern searches, which can be imported directly into the INDIGO data warehouse using the direct links on the output page.
The PPM (Profile and Pattern Matching) algorithm
The PPM algorithm retrieves from a database those POIs that are most likely to be annotated correctly. Initially, the GO-term list (profile) and the consensus pattern list (coded by the PROSITE IDs) are matched independently onto the dataset of interest. From each of the resulting subsets of genomic data, a gene fragment filter eliminates gene fragments commonly present in SAGs or metagenomic data: (i) genes with less than 300 nucleotides (to sustain a minimal length required for functionality) and (ii) genes that are not annotated as complete (indicating that a 3' or 5' part of the gene is missing). In a last step, both filtered lists are transferred to the PPM processor (see below), which arranges all hits into sets of genes having the same combination of identifiers (GO-terms and/or PROSITE IDs). Three classes of sets are listed: (i) the profile sets, containing genes with one or more GO-terms describing the respective POI, (ii) the pattern sets, containing genes with one or more PROSITE IDs of the respective POI, and (iii) the profile and pattern sets, consisting of genes with at least one GO-term and one PROSITE ID of the POI. The annotation of a gene is ranked as more reliable with increasing numbers of associated identifiers. The complete PPM algorithm is illustrated in Figure 2.
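The gene fragment filter can be expressed in a few lines. The following Python sketch assumes hit records with illustrative "length_nt" and "is_complete" fields; the real INDIGO export schema may differ.

```python
# Minimal sketch of the gene fragment filter described above: hits
# shorter than 300 nucleotides or not annotated as complete (missing
# 3' or 5' end) are discarded before set building.

MIN_GENE_LENGTH_NT = 300

def gene_fragment_filter(hits):
    """Keep only hits that are plausibly full-length, functional genes."""
    return [hit for hit in hits
            if hit["length_nt"] >= MIN_GENE_LENGTH_NT and hit["is_complete"]]

hits = [
    {"gene": "g1", "length_nt": 1203, "is_complete": True},
    {"gene": "g2", "length_nt": 250,  "is_complete": True},   # too short
    {"gene": "g3", "length_nt": 890,  "is_complete": False},  # truncated
]
print([h["gene"] for h in gene_fragment_filter(hits)])  # -> ['g1']
```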
Identification of the most reliably annotated genes in INDIGO that match our POIs served as a test case for the PPM. The genetic database search was restricted to certain brine pool SAGs based on environmental parameters of the sampling locations (salinity ≥14% and/or a temperature >44.5°C). The habitats selected were set to reflect the upper part of moderate halophilic conditions (5-20% salt) as well as extreme halophilic conditions (20-30% salt) (Ollivier et al., 1994) and/or thermophilic conditions (45-80°C; Madigan et al., 2003). The sample subset comprises 58 SAGs from three different brine pools (Atlantis II Deep, Discovery, and Kebrit), covering six different environmental conditions. These SAGs contain a total of 73,688 ORFs coding for 74,516 genes. The ORFs were assembled out of 21,519 contigs into genomes of a combined size of 48.2 mega base pairs (Table 3).
As described above, the POI list was transformed into a protein profile filter consisting of 171 non-redundant GO-terms (BRENDA) and an independent pattern filter of 49 PROSITE IDs (Sigrist et al., 2013). Profile matching of the 74,516 preselected genes against the 171 GO-terms resulted in 520 hits, which were further reduced by the gene fragment filter to 352 (Table 4). Elimination of duplicates (genes associated with multiple GO-terms or PROSITE IDs occur multiple times in the output) yielded 106 non-redundant hits, which could then be grouped into five different profile sets based on the gene-associated GO-terms. The five profile sets contain six different GO-terms: four profiles with only one GO-term and one profile with two GO-terms (Table 5). Categorizing the 106 genes into five profile sets clarifies what functions and what functional diversity can be expected from the hits.
The independent pattern filter was applied according to the same scheme. Screening all 58 SAGs against the 49 PROSITE IDs resulted in 1617 hits. Applying the gene fragment filter reduced this number to 1078 hits, which could be further condensed to 142 non-redundant hits. These 142 genes fall into 17 pattern sets containing 25 different PROSITE IDs.
Since the presence of several GO-terms, PROSITE IDs, or a combination of both indicates a more reliable gene annotation, we used the PPM processor to identify genes that are associated with multiple descriptors. The list (Table 5) contains three sub-sets: (i) the profile sets (one set of 16 hits), (ii) the pattern sets (10 sets containing 87 hits), and (iii) the profile and pattern sets (one set of 14 hits). Only the profile and pattern set contains genes that were found independently by both profile and pattern matching. In other words, when the INDIGO subset of 74,516 genes is screened for the 434 non-redundant E.C. numbers, only 14 genes have a matching GO-term and PROSITE ID. All 14 hits belong to the same E.C. number (1.3.1.26, dihydrodipicolinate reductase, DHPR). Since some profile or pattern sets stand for the same enzyme type, the total of 117 most reliably annotated genes identified by the PPM algorithm fall under only nine different enzyme families, among them prephenate DH (1.3.1.13) and iron-containing ADH.

Figure 2 | The PPM workflow: (i) translation of E.C. numbers into profile and pattern values via AutoTECNo, (ii) individual profile as well as pattern matching via a query in INDIGO, and (iii) extraction and ranking of the most reliable results in the pattern, profile, and profile and pattern classes by the PPM processor. This process requires two input files: (i) an assembled genome, which can be annotated using the AAMG pipeline, and (ii) an E.C.-number-based POI list. The POI list can be copied directly into the AutoTECNo input mask. After submission of the E.C. number list, AutoTECNo generates a list of all E.C. numbers and the associated GO-terms and PROSITE IDs. At the bottom of the output mask, three links are provided: "GO xml," "Prosite xml," and "INDIGO datawarehouse." Clicking either of the first two links opens a window that provides .xml-formatted files (for either GO-terms or PROSITE IDs). These files can be edited and used separately to build INDIGO queries. In such a query, INDIGO is used to match each of the two .xml lists against the selected genomes. Clicking on the "INDIGO datawarehouse" link opens the INDIGO XML input mask, which can be used to initiate a query by pasting the .xml script from AutoTECNo. A graphical overview of the query is shown, and further customization can be done (preset columns should not be deleted). At this stage, both the profile (GO-term) and pattern (PROSITE ID) filters can be applied individually in connection with the optional gene fragment filter. Hits are organized in a table summarizing all information available in INDIGO. The table may still contain duplicates, since one gene can be found under several GO-terms and/or PROSITE IDs. The results table can be downloaded as a "Spreadsheet (tab separated values)" (.tsv file) for import into the PPM processor. The PPM processor output provides a list of non-redundant genes, grouped into subsets of the three classes of hits (profile sets, pattern sets, and profile and pattern sets) and ranked based on the number of associated patterns and profiles. A link back to INDIGO allows listing of the obtained hits for a detailed analysis.
MANUAL HIT SELECTION FROM THE PPM PROCESSOR OUTPUT
Grouping of genes into PPM classes and sets immediately highlights expected functional similarities of gene expression products. PPM sets of patterns and/or profiles that are characteristic of the same protein can be condensed further into one meta-set. For example, pattern sets with the combinations PS00136 and PS00137, PS00136 and PS00138, or PS00137 and PS00138 are all indicative of subtilase-type serine proteases, and these pattern sets were condensed into one meta-set. In total, nine functionally distinct PPM sets remained after manual condensing (Table 5).
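A minimal Python sketch of this condensation step, assuming a small illustrative mapping from PROSITE IDs to protein families; the real mapping and set structures are simplified placeholders.

```python
# Minimal sketch of the manual meta-set condensation: pattern sets whose
# signatures all point to the same protein family (here, the subtilase
# signatures PS00136/PS00137/PS00138) are merged into one meta-set.

from collections import defaultdict

FAMILY_OF = {"PS00136": "subtilase", "PS00137": "subtilase",
             "PS00138": "subtilase", "PS00142": "zinc protease"}

def condense(pattern_sets):
    """pattern_sets: {frozenset of PROSITE IDs: [genes]} -> meta-sets."""
    meta = defaultdict(list)
    for patterns, genes in pattern_sets.items():
        families = {FAMILY_OF.get(p, p) for p in patterns}
        if len(families) == 1:              # all signatures agree
            meta[families.pop()].extend(genes)
        else:                               # mixed set: keep as-is
            meta[tuple(sorted(patterns))].extend(genes)
    return dict(meta)

sets = {frozenset({"PS00136", "PS00137"}): ["gA"],
        frozenset({"PS00137", "PS00138"}): ["gB"]}
print(condense(sets))  # -> {'subtilase': ['gA', 'gB']}
```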
For experimental characterization, synthesis and expression of 117 genes from halophilic extremophiles still represent an enormous challenge, which mandates identifying as expression targets those extremophilic proteins that are most typical for each functionality set. For five of the nine functionally different PPM sets, we were able to pinpoint nine genes representing all three PPM classes (profile; pattern; profile and pattern) (Table 5). Amino-acid-based phylogenetic analysis within each PPM set revealed phylogenetic relations and sequence clusters. The sequence representing most of the set members was selected; e.g., the PP1 DHPR PPM set contains 14 different hit sequences (isoenzymes). Phylogenetic analysis resulted in four clusters of phylogenetically closely related groups (Figure 3). For each of those four clusters, the sequence representing most of the members was selected. This was straightforward for three DHPR clusters, since one sequence contained all elements of the others. In the fourth case, as well as for clusters of other sets, the selection was more complicated, because phylogenetic sequence clusters showed either an equal distribution of mutations within one cluster or an unequal length of sequences. To address this problem, an additional protein BLAST (BLASTp) (Johnson et al., 2008) was performed, and the sequence with the highest similarity was chosen for the fourth DHPR and the halolysin cluster. In case of no difference in similarity according to BLASTp, the gene product providing more functional side chains was chosen (e.g., for subtilisin), since additional chemical functionality may indicate more diverse enzyme characteristics (e.g., hydrogen bonding, allosteric pockets, metal complexation, etc.). Amino acid sequences typically differed in fewer than 10 positions [amino acid sequence lengths: 401 (ADH), 348 (2-hydroxyacid DH), 498-565 (halolysin; the 565 amino acid sequence contains all shorter ones), 528 (subtilisin), 435-440 (prephenate DH), 272-285 (four subgroups of DHPRs)].
FUNCTION IDENTIFICATION OF PROTEINS WITHOUT EXISTING GO-TERMS OR PROSITE IDs
The initial search for CAs was not successful. While distinct GO-terms and consensus patterns exist for α- and β-CAs (Table 5), none are available for the other three CA families (γ, δ, and ζ). According to Ferry, the CA chain A from Methanosarcina thermophila (Cam) can be considered the archetype of the γ-CA family, and a distinct 180-amino-acid sequence (residues 34-214) is indicative of a γ-CA protein (Smith and Ferry, 2000). An INDIGO-internal BLAST of this 180-amino-acid motif against all genes yielded 17 potential γ-CAs. Applying the gene fragment filter reduced the candidate pool to six.
As discussed above, an additional pattern matching should increase the reliability of the profile-based protein identification. Analysis of the only two γ-CA-class crystal structures reported (Cam from M. thermophila, Kisker et al., 1996, see also PDB 3OW5, and a CamH homolog from P. horikoshii, Jeyakanthan et al., 2008) revealed nine amino acids in two peptide sequences of 26 and six amino acids as most relevant for enzyme function (Smith and Ferry, 2000). The resulting two initial consensus patterns are R(59)-S-D(61)-E(62)-G-M-P-I-F-V-G-D-R-S-N(73)-V-Q(75)-D-G-V-V-L-H(81)-A-L-E(84), spanning positions 59-84, and H(117)-Q-S-Q-V-H(122), spanning positions 117-122 [color code in the original figure: yellow, metal-binding motifs (H81, H117, H122); green, residues directly involved in catalysis (E62, N73, Q75, E84); blue, structurally important residues (R59, D61); not highlighted, residues of no specific function as they appear in the γ-CA sequence]. No hit was found for a strict pattern matching of the six potential γ-CAs. This is not surprising, since it is common for consensus patterns that some functionally important amino acids can be altered within a certain threshold. Alignment of the initial γ-CA consensus patterns with the six γ-CA candidate sequences revealed that the 10-amino-acid stretch from E62 to N73 was shortened by one amino acid in all six candidates. The resulting structural alteration is unlikely to affect function. Further, the two structurally important residues R59 and D61 were conserved, as well as two of the three metal-binding histidines (H81 and H117) (Table 6). The third metal-binding amino acid, H122, was replaced by an N in hit number 6, a mutation which potentially affects function. Further sequence variations involve the replacement of the catalytic E84 by either D (four cases, potentially not influencing function) or K (two cases, potentially affecting function). The remaining catalytically important residues E62, N73, and Q75, which are involved in a hydrogen-bonding network in the M. thermophila protein, are highly variable among the six candidate sequences. Assuming that some of these candidates are CAs because of their profile and pattern similarity to the M. thermophila archetype enzyme, we concluded that E62 is not generally important for the function of this enzyme type and that N73 and Q75 can be replaced by the hydrogen-bonding amino acids C or K, respectively. Correspondingly, we suggest the following two consensus patterns for γ-CAs: R-x-D-x(10,11)- and H-x(3)-H. Application of the PPM algorithm using the 180-amino-acid profile stretch identified from PDB 3OW5 and the new consensus patterns delivered three γ-CA candidates. Because of the high sequence similarity of two of the three sequences, the sequences of gene 2 (annotated as ferripyochelin binding protein 01) from Atlantis II Deep and gene 3 (annotated as predicted acetyltransferase) from Discovery Deep (Table 6) were selected as the best candidates for experimental studies of γ-CAs [CA_A (Atlantis II Deep) and CA_D (Discovery Deep) in Table 7].
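PROSITE-style patterns such as those proposed above can be matched with ordinary regular expressions. The following Python sketch converts the core PROSITE syntax to a regex and scans an invented test sequence; it is an illustration of the idea, not the PS_Scan tool, and it omits PROSITE's anchor syntax.

```python
# Minimal sketch: translate core PROSITE consensus-pattern syntax into a
# Python regex and scan a candidate sequence. The pattern "H-x(3)-H" is
# taken from the text above; the test sequence is invented.

import re

def prosite_to_regex(pattern):
    """Translate core PROSITE syntax into a Python regex."""
    regex = pattern.rstrip(".")
    regex = regex.replace("-", "")        # element separators
    regex = regex.replace("x", ".")       # x = any residue
    regex = regex.replace("{", "[^").replace("}", "]")  # {..} = forbidden
    regex = re.sub(r"\((\d+),(\d+)\)", r"{\1,\2}", regex)  # x(a,b)
    regex = re.sub(r"\((\d+)\)", r"{\1}", regex)           # x(n)
    return regex

pattern = "H-x(3)-H"                       # second proposed γ-CA pattern
candidate = "MSDEGVVLHALEHKK"              # invented test sequence
if re.search(prosite_to_regex(pattern), candidate):
    print("pattern hit")
```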
DISCUSSION
Proteins that are suitable for the harsh conditions of many biotechnological applications can be obtained through protein engineering, through discovery and mining of novel extremophilic genomes, or through a combination of both. The major challenge in mining genomic data from extreme environments is that, with increasing extremeness of the habitat, the probability of culturing the organisms thriving under these conditions shrinks substantially (Alain and Querellou, 2009). However, SAGs can provide genomic data from uncultured organisms. We believe that improving the quality of SAG assemblies (higher sequence coverage, longer contigs, and advanced annotation programs) should enable us to utilize SAGs as a rich source for the discovery of extremophilic enzymes of scientific interest and commercial value. However, annotation reliability is lowered for both extremophilic genomes (for which commonly no close relative is known) and SAGs (which may suffer from gaps, incomplete genes, or generally lower-quality sequencing data); therefore, a highly reliable algorithm for identification of genes of interest from extremophilic SAG databases is mandatory before entering labor-intensive expression and characterization of these genes.
PROBLEMS OF SINGLE PROFILE OR PATTERN ANALYSIS AND THE PPM ALGORITHM
Consensus patterns show good reliability, yet a considerable fraction of hits identified via PROSITE IDs are false positives (has the motif but not the function), false negatives (has the function but not the motif), unknown (has the motif but no verified function), or partial hits (has the function but only parts of the motif) (Sigrist et al., 2002). Table 8 combines examples illustrating the reliability of consensus-pattern-based annotation of enzyme function. Reliability may be as low as 55% false positives (PS00136) or 90% false negatives (PS00065). A further problem of pattern-based annotation is its low flexibility, owing to the short pattern lengths (about 10-20 amino acids; Sigrist et al., 2002), typically covering only 1.9-7.9% of the total protein length. Due to the short length of the consensus pattern, a higher reliability requires reducing the permissible flexibility. In the CA example above, three consensus patterns with high reliability were available (Table 8). Hence, we expected to identify several CAs through pattern matching. Yet, no CA was found in the entire database, since the rigidity of these consensus patterns prevented identification of novel enzymes with the same function. Finally, a consensus pattern may not be specific for a single function; e.g., NADH or ATP binding motifs are typically associated with consensus patterns that occur in several enzyme families. Table 7 illustrates this issue: four PROSITE IDs are related to both alcohol dehydrogenase and ene reductase function. Identifying combinations of patterns can circumvent these problems and increase reliability. According to the PROSITE web page, one of the strongest pattern combinations is PS00136-PS00138: if a protein includes at least two of the three active-site signatures, the probability of it showing protease activity is assumed to be 100%. Ontologies are widely used for functional annotation (Radivojac et al., 2013). Gene ontologies are commonly expressed by GO-terms. The sources of GO-terms in the UniProt Gene Ontology Annotation database fall into three categories: (i) experimental annotations, the smallest but most reliable category, (ii) curated non-experimental annotations, and (iii) electronic annotations, which are the least reliable. Over 98% of the repository of the UniProt Gene Ontology Annotation database is inferred in silico without curator oversight (Škunca et al., 2012). GO-terms are highly flexible, which is reflected in the gene sequence length associated with them; e.g., annotation of GO-terms in this study covered 1.9-100% of the total gene. The particular sources used for GO-term identification lead to this large range: GO-terms based on consensus patterns are naturally reflected by a short associated sequence length (e.g., the 1.9% lower limit in this study), while GO-terms determined by other methods (e.g., HAMAP, TIGRFAM, PIRSF) can take up to 100% of the sequence into consideration. In this analysis, GO-term association to ORFs was on average based on about 65% of the total sequence length. Recent studies have shown that electronic annotations are more reliable than generally believed and that the overall reliability of electronically determined GO-annotations is increasing, although it remains low: the mean reliability was ≈30% in 2006 and increased to ≈50% in 2011 (Škunca et al., 2012). The variations are significant among different inference methods, types of annotations, and organisms.
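The reliability figures quoted above follow directly from per-pattern true/false positive and false negative counts of the kind PROSITE documents. A minimal Python sketch, with invented placeholder counts rather than the real PROSITE statistics:

```python
# Minimal sketch: derive pattern-reliability figures from TP/FP/FN counts.
# The counts below are invented placeholders, not real PROSITE data.

def pattern_reliability(tp, fp, fn):
    precision = tp / (tp + fp)   # fraction of hits that truly have the function
    recall = tp / (tp + fn)      # fraction of true members that are hit
    return {"false_positive_rate_of_hits": 1 - precision,
            "false_negative_rate": 1 - recall}

print(pattern_reliability(tp=45, fp=55, fn=5))   # ~55% of hits false positive
print(pattern_reliability(tp=10, fp=1, fn=90))   # ~90% of members missed
```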
Further, functional annotation that is based only on GO-terms can result in a considerable bias. INDIGO utilizes all InterProScan-derived GO-terms, whether they emerge from longer domains such as PFAM or TIGRFAM or from PROSITE short consensus patterns. It is common that PROSITE IDs do not relate to any GO-term, yet a longer domain in the vicinity of or around a PROSITE pattern yields a GO-term associated with a POI. Currently, 11,910 ORFs (10.6%) annotated in INDIGO are associated with a GO-term and a PROSITE ID that both describe the same function. The INDIGO data warehouse based annotation (AAMG) combines various annotation methods. Unlike other data warehouses, INDIGO keeps and organizes all annotation metadata, even if these are not in agreement with the final annotation (Alam et al., 2013). All GO-terms and PROSITE IDs available from these metadata are used by the PPM algorithm. In two cases, the PPM-algorithm-based function predictions differ from the INDIGO annotation: the γ-CAs identified by the PPM algorithm were previously annotated as "predicted acetyltransferase isoleucine patch superfamily" or "ferripyochelin binding protein." Other PPM-algorithm-based functions narrowed the INDIGO annotation down to only one function: the prephenate DHs were originally annotated as both chorismate and prephenate DH. In summary, both consensus patterns and GO-terms are standard tools to identify the function of a gene, yet they have weaknesses. The key to increased reliability is the combination of descriptors. Since GO-terms (profiles) and PROSITE IDs (patterns) provide orthogonal information on protein function (with the exception of GO-terms based on consensus patterns), selecting combinations of both descriptors is a powerful tool to identify the function of a gene product with higher reliability, particularly for novel and distantly related organisms. The PPM algorithm combines those advantages and is able to select for all three combinations of descriptors: the profile sets, the pattern sets, and the profile and pattern sets. In our case, the strict PPM algorithm extracts and ranks the top 0.1% of most reliably annotated genes. Since genomic data are growing at a much faster pace than experimental verification can proceed, a focus on quality rather than quantity is required. The PPM algorithm guides experimentalists to relevant starting points for successful expression, characterization, and verification of gene products.
DISTANTLY RELATED SEQUENCES FROM NOVEL ORGANISMS
Phylogenetic analysis of the gene sequences identified as candidates for expression tests revealed a high evolutionary distance to any known sequence (Figure 4). In the case of the PPM profile and pattern set hits, which are all DHPRs, the phylogenetic tree with the closest related organisms includes both the archaeal and the bacterial domains of life (Figure 4A). The four identified hits are all in the archaeal branch. The three hits from the organism MSBL1 (DR_A1, DR_D, and DR_K) cluster together in a separate branch, connected to Archaeoglobales and Methanomicrobia. The hit from the organism MBGE (DR_A2) is in a separate branch and more closely related to Methanobacteria and Methanococci. As indicated by the long branches, the junction to the closest previously known sequences occurs at 0.3-0.35 amino acid substitutions per site. The PPM multi-profile hit prephenate dehydrogenase from MBGE (Figure 4B) shows phylogenetic relations similar to DHPR: the closest related enzymes found are from archaea, and the closest related sequences are from Methanococci and Methanobacteria. The junction to the closest previously known sequences occurs at 0.33 amino acid substitutions per site. The subtilase-type sequence from the PPM multi-pattern hit has a different phylogenetic footprint (Figure 4C). Based on the amino acid sequence, the novel subtilisin shows equal evolutionary relations to archaea and bacteria, which indicates comparatively few sequence mutations in the two different domains relative to their common ancestor. For the γ-CA hits, which are based on a combination of a new profile and pattern, the phylogenetic tree includes all three known classes of CA (Figure 4D). The tree clearly reveals that the identified sequences fall into the γ class of CAs, with very distant relations to the α and β classes. Distant phylogenetic relationships are also found for all other hits, underlining the novelty of the SAGs analyzed (Figures S1-S3).
CURRENT LIMITATIONS OF THE PPM APPROACH
The PPM approach intrinsically leads to a high number of false negatives, because not all protein-of-interest groups can be translated into GO-terms and PROSITE IDs. During conversion from E.C. numbers to profiles (GO-terms) or patterns (PROSITE IDs), about 35 or 81% of the POIs are lost, respectively. This limitation will be overcome by the exponential growth of biological data, which will increase the number and precision of GO-terms and PROSITE IDs. The combination of self-derived profiles and patterns can also enhance or even enable PPM analysis, even with comparatively flexible sequences that individually show low reliability, as shown for the γ-CA example. Reducing the rigidity of consensus patterns with a high false negative rate may further help to increase hit rates. However, as discussed above, from an experimentalist's point of view, false positives are of much higher concern, and these can be eliminated very effectively by the PPM approach.
OUTLOOK AND CONCLUSION-THE RED SEA EXTREMOPHILES AS SOURCE FOR NOVEL ENZYMES WITH HIGH SCIENTIFIC AND INDUSTRIAL POTENTIAL
For the first time, SAGs were used to identify proteins for biotechnological applications. The eleven different genes that were extracted from the INDIGO database during this study as candidates for expression give just a glimpse of the potential that the Red Sea brine pool extremophiles hold for the discovery of novel enzymes. Not only the great phylogenetic distance to any described organism, but also the extreme anoxic, high-temperature, and hypersaline environment makes the enzymes of those organisms highly valuable. Enzymatic activity at high temperature and low water activity can make biocatalysis a tool for complex chemical reactions with high yield and enantiomeric excess under conditions that were so far out of reach for biological applications. Investigation of the enzymes whose genes were identified here will help in understanding the limitations and adaptations of life in such extreme places.
The PPM algorithm is not intended to be a competitor for standard annotation. However, it is a powerful tool to analyze functions of proteins of extremophilic organisms that are only distantly related to organisms described so far. The PPM algorithm helps experimentalists to extract proteins and particularly enzymes with high confidence from databases with only limited annotation reliability, e.g., when SAGs of extremophiles are used.
The combination of orthogonal descriptors may also facilitate screening of other genomic data for proteins of interest, e.g., data resulting from metagenomic or metatranscriptomic sampling as well as from shotgun sequencing. For metagenomic sequences, the most reliable functional annotations are achieved using homology-based approaches against publicly available reference sequence databases, including GO. Recently, it was recommended for metagenomic data to run a motif-based analysis (e.g., using PROSITE IDs) in parallel to the homology-based functional prediction (Prakash and Taylor, 2012). The PPM algorithm provides an example of this approach. Since the PPM algorithm was developed to minimize the number of false positive hits when experimentalists search genomic databases for proteins of interest, we expect that, for metagenomic data as well, the increased reliability of the genes it identifies will be its main advantage.
The publicly available scripts used in this study, (i) AutoTECNo and (ii) the PPM processor, in combination with (iii) the INDIGO data warehouse, are powerful tools with a minimalistic character to keep the handling of extremely large datasets simple. The PPM algorithm will facilitate experimental characterization of extremophilic proteins and therefore help to increase the general understanding of life at extreme conditions and to exploit its biotechnological potential. The enzymes identified in this study will be the first of many proteins on this path.
"Biology",
"Computer Science",
"Environmental Science"
] |
Invariant Natural Killer T-cells and their subtypes may play a role in the pathogenesis of endometriosis
Highlights
• The frequency of iNKT cells in general and of their double-negative subtype is related to endometriosis.
• The expression of IL-17 and CCR7 by iNKT cells is related to endometriosis-associated pain symptoms.
• iNKT cells are numerically and functionally altered in women with endometriosis.
ABSTRACT
Objective: To evaluate the frequencies of iNKT cells and their subsets in patients with deep endometriosis. Methods: A case-control study was conducted between 2013 and 2015, with 73 patients distributed into two groups: 47 women with a histological diagnosis of endometriosis and 26 controls. Peripheral blood, endometriosis lesions, and healthy peritoneal samples were collected on the day of surgery to determine the frequencies of iNKT cells and subtypes via flow cytometry analysis. Results: The authors observed a lower number of iNKT (p = 0.01) and Double-Negative (DN) iNKT cells (p = 0.02) in the blood of patients with endometriosis than in the control group. The number of DN iNKT IL-17 + cells in the secretory phase was lower in the endometriosis group (p = 0.049). There was an increase in the secretion of IL-17 by CD4 + iNKT cells in the blood of patients with endometriosis and severe dysmenorrhea (p = 0.038), and severe acyclic pelvic pain (p = 0.048). Patients with severe dysmenorrhea also had a decreased number of CD4 + CCR7 + cells (p = 0.022). Conclusion: The decreased number of total iNKT and DN iNKT cells in patients with endometriosis suggests that iNKT cells play a role in the pathogenesis of endometriosis and can be used to develop new diagnostic and therapeutic agents.
Introduction
Endometriosis is an inflammatory disease characterized by the presence of endometrial glands and/or stroma outside the uterus, with an estimated prevalence of 10% in women of reproductive age. 1 Its main symptoms are dysmenorrhea, deep dyspareunia, chronic pelvic pain, and infertility. These clinical manifestations are heterogeneous and not always compatible with the severity or stage of the disease. 2 Several studies have demonstrated the importance of the immune system in the pathogenesis of endometriosis. Disturbances in immunological homeostasis can facilitate implantation, proliferation, and angiogenesis of endometrial tissue in the peritoneum. 3,4 Endometriosis is also associated with changes in the frequencies of lymphocyte populations, altered cytotoxicity of Natural Killer (NK) cells, and the Th1 response induced by Th2-type pro-inflammatory and anti-inflammatory cytokines. 5−7 In recent years, the importance of invariant Natural Killer T (iNKT) cells in the control of Th1, Th2, and Th17 immune responses and their relation to certain diseases has been demonstrated. 8−10 iNKT cells are a subclass of T-lymphocytes that express NK cell markers such as CD161 and an invariant T-Cell Receptor (TCR) α/β with a restricted repertoire.
These cells constitute 0.2% of the total T-cells in the peripheral blood. 14 Given the essential role of iNKT cells in inflammatory, infectious, and autoimmune diseases, 15 the authors hypothesized that iNKT cells could secrete cytokines and modulate the inflammatory response in patients with endometriosis. However, only a few studies on iNKT cells and endometriosis have been published. 11−13 The objective of this study was to evaluate the association of iNKT cells and their subsets with endometriosis. The secondary objectives included the evaluation of cytokine profiles and the correlation between the frequency of iNKT cells and pain symptoms.
Study design
A prospective study was conducted between 2013 and 2015 at the Endometriosis Clinic, Hospital das Clínicas, Faculdade de Medicina, Universidade de São Paulo, São Paulo, Brazil. The study was approved by the institutional ethics committee (CAPPesq 235869/13) and is in accordance with the Helsinki Declaration. Forty-seven women aged 18-49 years with regular menstrual cycles who underwent laparoscopic surgery for the treatment of deep endometriosis with histological confirmation were included in the endometriosis group. Twenty-six healthy women without endometriosis upon laparoscopy for tubal ligation were included in the control group. Patients who had received hormonal treatment in the past three months or had autoimmune diseases were excluded. Written informed consent was obtained from all participants.
Clinical data and grading of dysmenorrhea, deep dyspareunia, acyclic pelvic pain, cyclic dyschezia, and cyclic dysuria were obtained from all included patients using a Visual Analog Scale (VAS) from 0 to 10. The authors considered severe pain as having VAS scores between 7 and 10 and mild pain as having VAS scores between 0 and 6.
During laparoscopy, before installation of the pneumoperitoneum, endometrial biopsies for menstrual cycle phase confirmation were obtained using a Novak curette, and blood samples were also collected. A complete evaluation of the pelvis and staging of endometriosis according to the American Society for Reproductive Medicine (ASRM, 1996) were performed, and all suspected lesions were completely resected. Samples of endometriosis lesions were obtained and stored in liquid nitrogen at the Research Center of Hospital Israelita Albert Einstein until subsequent analysis.
Flow cytometry
Peripheral Blood Mononuclear Cells (PBMCs) were isolated, washed, and counted, and their viability was assessed using a Countess® automated cell counter (Invitrogen, Carlsbad, CA, USA). The cells were frozen in liquid nitrogen until use and thawed for the assays described below.
To measure cytokine production, PBMCs were incubated in the presence of the cognate iNKT-specific agonist α-galactosylceramide (αGalCer). After incubation for 1 h at 37°C under a 5% CO2 atmosphere, monensin (GolgiStop, BD Biosciences) or Brefeldin A (GolgiPlug, BD Biosciences) was added. After incubation for 18 h, the cells were washed and incubated with monoclonal antibodies against surface antigens. After this incubation, the cells were washed, fixed, and permeabilized using reagents from Life Technologies (Thermo Fisher Scientific), according to the manufacturer's instructions. Cells were incubated with antibodies against IL-6 (clone MQ2-13A5, BD Biosciences), IL-10 (clone JES3-1931, Beckman Coulter), and IL-17 (clone BL168, BioLegend). The cells were then analyzed using an LSR Fortessa flow cytometer (BD Biosciences). All samples were acquired using FACSDiva software (BD Biosciences), and data were processed using FlowJo software (version 9.9, Tree Star).
Immunofluorescence staining and confocal microscopy
For immunofluorescence staining, the endometriosis lesions were cut into 5 μm sections, stained, and analyzed by a pathologist to define the endometriosis lesions and healthy areas in the sections. Afterward, the sections on the slides were dewaxed and subjected to an epitope retrieval step via incubation in sodium citrate buffer for 15 min in a microwave. Next, the slides were incubated overnight at 4°C with mouse anti-human CD1d (BD Biosciences) and rabbit anti-human CD3 polyclonal (Abcam, Cambridge, UK) primary antibodies. The slides were then washed and incubated with secondary antibodies, Alexa 488 antimouse and Alexa 568 anti-rabbit (Life Technologies, Thermo Fisher Scientific) for 2h at 20−30°C. Next, the sections were washed and incubated with anti-CD4 antibody (clone RPA-T4) for 2h, and then incubated with DAPI for 5 min. Slides were mounted with Prolong Gold Antifade Reagent (Life Technologies, Thermo Fisher Scientific). Images were acquired using an LSM 710 confocal microscope (Carl Zeiss, Jena, Germany). The sections were imaged using ZEN 2012 SP2 software (Black, 64-bit, Release Version 11.0, Carl Zeiss).
Statistical analysis
The sample size calculation was performed assuming 95% confidence and 80% power. Previous studies have shown that the percentage of iNKT cells in the blood of normal women ranges from 0.1% to 2%, with a standard deviation of 0.27%; therefore, five women in each group would be required to identify a mean difference of 0.5% iNKT cells between women with and without endometriosis. 16 iNKT cell frequencies and interleukin levels in the peripheral blood were compared between groups using the Mann-Whitney test. Comparisons between groups regarding symptoms, stage of endometriosis, and menstrual cycle phase were also performed using the Mann-Whitney test. The analyses were performed using SPSS software (version 20.0, International Business Machines Corporation, São Paulo, SP, Brazil). Statistical significance was set at p ≤ 0.05.
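A minimal sketch of these two calculations in Python (SciPy), using invented placeholder iNKT percentages for the group comparison and a normal-approximation formula for the two-group sample size; this reproduces the reported five-per-group estimate but is not the SPSS analysis itself.

```python
# Minimal sketch: Mann-Whitney U test between groups plus a
# normal-approximation sample-size calculation for a mean difference
# (95% confidence, 80% power). The iNKT percentages are placeholders.

import math
from scipy.stats import mannwhitneyu, norm

endo    = [0.05, 0.12, 0.09, 0.21, 0.03, 0.17]   # % iNKT, endometriosis
control = [0.25, 0.18, 0.31, 0.22, 0.40, 0.15]   # % iNKT, controls

stat, p = mannwhitneyu(endo, control, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")

def n_per_group(sd, delta, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a mean difference."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (z * sd / delta) ** 2)

print(n_per_group(sd=0.27, delta=0.5))  # -> 5 per group, as reported
```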
Clinical characteristics of patients
The mean (± SD) age of patients was similar between the endometriosis (34.3 ± 6.2 years) and control (34.5 ± 4.6 years) groups (p = 0.87).
The mean Body Mass Index (BMI) was higher in the control group than in the endometriosis group (24.1 ± 3.1 kg/m² vs. 22.5 ± 3.3 kg/m²; p = 0.046). Patients with endometriosis presented a higher incidence of dysmenorrhea (63.8% vs. 11.5%; p < 0.001) and cyclic dyschezia (14.9% vs. 0%; p = 0.046) than those without endometriosis. There was no significant difference in acyclic pelvic pain, dyspareunia, or cyclic dysuria between the groups. No differences were observed in the menstrual phases between the groups (Table 1).
Frequency of iNKT cells in peripheral blood
The number of iNKT cells was determined through the co-expression of the surface markers Vα24 and Vβ11 using multiparametric flow cytometry, as shown in the gating strategy in Fig. 1. The authors observed a decrease in the total number of iNKT cells in the peripheral blood of patients with endometriosis compared to those without the disease (0.17 ± 0.55 vs. 0.23 ± 0.25; p = 0.01) (Table 2, Fig. 2A).
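Conceptually, the double-positive gate can be expressed as a simple threshold on two channels. The following numpy sketch uses simulated fluorescence intensities and invented cutoffs purely for illustration; the actual analysis was performed in FlowJo on real cytometry data.

```python
# Minimal sketch of co-expression gating: count events positive for both
# Vα24 and Vβ11 above illustrative fluorescence thresholds. The arrays
# and cutoffs are simulated placeholders, not real cytometry data.

import numpy as np

rng = np.random.default_rng(0)
valpha24 = rng.lognormal(mean=1.0, sigma=1.0, size=100_000)  # channel A
vbeta11 = rng.lognormal(mean=1.0, sigma=1.0, size=100_000)   # channel B

gate = (valpha24 > 15) & (vbeta11 > 15)          # double-positive gate
print(f"iNKT-like events: {gate.mean() * 100:.3f}% of total")
# With these simulated parameters the gate captures roughly 0.2% of
# events, on the order of the iNKT frequency cited in the introduction.
```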
Analysis of the menstrual cycle phases indicated that, in the secretory phase, patients in the endometriosis group had lower numbers of total iNKT cells than the control group (median 0.05 [range: 0-3.61] vs. 0.25 [0.01-0.94]; p = 0.03) (Fig. 3). There were no statistically significant differences in the total number of iNKT cells between menstrual cycle phases in the endometriosis group (Table 3).
When comparing the menstrual phases and iNKT subtypes, the number of CD4+ iNKT cells expressing CD25 in the proliferative phase was increased in the endometriosis group compared with the control group (44.6 [0-60] vs. 33.2 [0-58.6]; p = 0.022) (Table 3; Fig. 4A). The number of DN iNKT cells expressing CD25 was higher in patients with endometriosis in the proliferative menstrual phase than in the secretory phase (p = 0.032) (Table 3, Fig. 4B). There was no significant difference in the total number of iNKT cells and their subsets expressing CD25 in patients with endometriosis and severe pain symptoms compared to those with mild pain symptoms (Tables 4 and 5).
The number of DN iNKT cells was decreased in the proliferative menstrual phase in patients with endometriosis compared to that in the control group (Table 3, Fig. 4C).
The number of CD4+ iNKT cells expressing CCR7 (35.6 ± 30.6 vs. 31.6 ± 31.0; p = 0.439) and the number of DN iNKT cells expressing CCR7 (11.18 ± 15.87 vs. 6.76 ± 14.08; p = 0.801) were similar between the endometriosis and control groups (Table 2). There was no significant difference between menstrual cycle phases in the total number of iNKT cells and their subsets expressing CCR7 between the endometriosis and control groups (Table 3). The number of CD4+CCR7+ iNKT cells was decreased in patients with endometriosis and severe dysmenorrhea compared to patients with mild/absent dysmenorrhea (26.9 ± 31.8 vs. 51.0 ± 21.7; p = 0.022) (Table 4). There was no significant difference in the number of iNKT cells and their subsets expressing CCR7 in relation to other pain symptoms.
Cytokine profile of CD4+, CD8+ and DN iNKT cells
The number of DN iNKT cells expressing IL-17 was lower in patients with endometriosis than in the control group (7.8 ± 4.2 vs. 11.5 ± 5.0; p = 0.05). There were no differences in the cytokine profiles of the other iNKT subsets (Table 6).
Concerning the menstrual cycle phases, the authors observed that the number of DN iNKT cells expressing IL-17 in the secretory phase was decreased in patients with endometriosis compared to the control group. There was no significant difference in the number of iNKT cells expressing IL-10 or IL-6 between the groups during any menstrual cycle phase (Table 3).
There was an increased number of CD4+ iNKT cells expressing IL-17 in the peripheral blood of patients with endometriosis and severe dysmenorrhea compared to those with mild/absent dysmenorrhea (11.1 ± 4.7 vs. 4.3 ± 4.0; p = 0.038) (Table 4). There was also an increased number of CD4+ iNKT cells expressing IL-17 in the peripheral blood of patients with endometriosis and severe acyclic pelvic pain compared to those with mild pain (13.1 ± 3.9 vs. 7.0 ± 5.2; p = 0.048) (Table 5). No significant differences in the numbers of iNKT subsets expressing IL-17 were observed in patients with dyspareunia, cyclic intestinal, or urinary symptoms. Furthermore, there was no relationship between fertility status and the frequency of iNKT cells and subtypes between groups.
Presence of CD4+ iNKT cells in deep endometriosis lesions
The authors performed immunofluorescence staining of iNKT cells for the markers CD3, CD4, and CD1d (counterstained with DAPI) in endometriosis lesions and healthy peritoneum from nine patients (Fig. 5A). iNKT cells were present in both endometriosis lesions and healthy areas, and there was no significant difference in the number of iNKT cells between them (p = 0.14, Mann-Whitney test) (Fig. 5B).
Discussion
The role of the immune system in the pathogenesis of endometriosis has been widely demonstrated in the last few decades. Numerous cytokines have been shown to be abnormally expressed, and variable Th1, Th2, and Th17 responses have been observed in patients with endometriosis. 7,19 Recently, NKT cells have been shown to regulate immune responses in several inflammatory conditions. This study evaluated iNKT cell frequency and functionality in patients with endometriosis. To our knowledge, this is the first study to describe iNKT cells and their subsets in endometriosis. The baseline patient characteristics were similar between groups, except for BMI, which was higher in the control group. These results agree with previous findings of decreased BMI in patients with endometriosis compared to that in healthy women. 20 As expected, the frequency of patients with severe dysmenorrhea and cyclic dyschezia was significantly higher in the endometriosis group; patients with endometriosis are known to have more pain symptoms than healthy controls.
The present study's findings demonstrated a significantly lower frequency of iNKT cells in the peripheral blood of patients with deep endometriosis than in women without endometriosis. A lower frequency of iNKT cells has been observed in several diseases in which the immune response is dysregulated, including HIV and HTLV infection, common variable immunodeficiency, autoimmune diseases, and some cancers. 13,15,21,22 Under different pathological conditions, iNKT cells can have either a protective or harmful role, as they have both classically innate and adaptive immunologic characteristics. In endometriosis, a decreased frequency of iNKT cells may impair local immune surveillance and facilitate ectopic implantation of the endometrium. 13 Estrogen and progesterone control endometrial functions by regulating the expression of thousands of genes during the menstrual cycle. 23 Different profiles of inflammatory cell frequencies and cytokine secretion have been observed in the peripheral blood, peritoneal fluid, and urine, depending on the menstrual cycle phase. 24 These data suggest an essential role of sex steroid hormones in the physiology of the immune microenvironment. 25,26 The authors showed that the number of iNKT cells was decreased in the secretory phase in patients with deep endometriosis compared to those without endometriosis. This difference could
be related to abnormalities in progesterone secretion and sensitivity in patients with endometriosis. Previous studies have shown that progesterone receptor resistance is associated with endometriosis development and persistence. 27,28 Abnormalities in progesterone physiology have been directly linked to modifications of the immune environment in the eutopic and ectopic endometrium. Increased levels of estradiol observed in women with endometriosis may also affect NKT cell cytotoxicity and local immune surveillance. 13 Therefore, the authors hypothesized that the decrease in the number of iNKT cells in patients with endometriosis may be related to an imbalance between estrogen and progesterone levels, which is frequently associated with the disease. By evaluating iNKT cell subsets, the authors observed a decrease in the number of DN iNKT cells, the most predominant iNKT subtype in the peripheral blood. 30 The authors also compared iNKT cell frequencies in the peripheral blood of patients with endometriosis according to pain intensity and fertility status. The authors observed a decreased number of CD4+CCR7+ iNKT cells in patients with endometriosis and severe dysmenorrhea, suggesting that the immune response may play a role in the severity of the disease and its symptoms. In 2012, Guo et al. 31 also observed lower NKT cell percentages and IFN-γ and IL-4 levels in the peripheral blood and peritoneal effusions of 60 patients with endometriosis compared with 20 healthy controls. They showed that the number of NKT cells, as well as IFN-γ and IL-4 levels, was inversely correlated with endometriosis stage, supporting the correlation between the number of NKT cells and the severity of endometriosis.
Evidence suggests that iNKT cell subpopulations (CD4+, CD8+, and CD4−CD8−) produce different profiles of cytokine secretion and activation of NK and B cells, leading to different Th responses. 32 In 2011, O'Reilly et al. 33 observed a differential secretion pattern of cytokines after stimulation of CD4+, CD8+, and CD4−CD8− iNKT cells. Several studies have demonstrated variations in the frequency and function of iNKT cell subsets in patients with different diseases. 13,34 DN iNKT cells have an essentially Th1 response pattern, releasing higher amounts of IFN-γ and TNF-α after stimulation. 35 Accordingly, the present study's findings demonstrate a decreased percentage of DN iNKT cells in patients with endometriosis. Since DN iNKT cells produce a Th1 response and a balance in Th1/Th2 responses is essential for immune homeostasis, the authors hypothesized that this abnormality is implicated in the pathophysiology of endometriosis. 7,19 The authors demonstrated that DN IL-17+ iNKT cells are present in lower proportions in patients with endometriosis than in women without endometriosis. In contrast, the authors also showed an increased frequency of CD4+IL-17+ iNKT cells in patients with endometriosis with severe dysmenorrhea and with severe acyclic pelvic pain. IL-17 is a member of a family of cytokines predominantly produced by activated CD4+ T-cells. It has potent pro-inflammatory properties and is involved in the modulation of the immune response in inflammatory disorders and pain. 36 Furthermore, IL-17 seems to be implicated in the development of endometriosis by inducing estrogen production, endometriotic stromal cell proliferation, and secretion of inflammatory mediators. 4,37 Previous studies have shown that increased levels of IL-17 are involved in visceral and neuropathic pain. 36,38 The abnormal frequencies of CD4+IL-17+ iNKT cells observed in the present study may be, in part, responsible for endometriosis-related pain symptoms.
The results of the present study led us to hypothesize that iNKT cells and their subtypes may play an essential role in the pathogenesis of endometriosis. Abnormalities in the frequency of iNKT cells may impair the proper functioning of the immune system, allowing the implantation and proliferation of endometriosis lesions.
Currently, most treatments available for endometriosis are hormonal medications, which also work as contraceptives. iNKT cells may be a target for the development of new non-hormonal drugs, which may be important for women who are trying to conceive or have any contraindication to the use of hormones. 3 Since other studies have recently demonstrated a positive effect of immunotherapy by activating iNKT cells with different antigens in liver disease, autoimmune diseases, and antitumor therapy, it is possible to use this treatment strategy for other inflammation-related disorders such as endometriosis. 39
Conclusion
In conclusion, the frequency of total iNKT and DN iNKT cells was decreased in patients with endometriosis. Patients with endometriosis with severe dysmenorrhea and acyclic pelvic pain had increased production of IL-17 by CD4 + iNKT cells and decreased numbers of CD4 + CCR7 + cells. Further studies in animal models could use targeted drugs to enhance or inhibit the activity of iNKT cells and further confirm these results, aiming to develop new therapeutics for endometriosis. Overall, these results suggest that iNKT cells play a role in the pathogenesis of endometriosis and can be exploited in the development of new diagnostic and therapeutic agents.
Authors' contributions
Correa FJS study design, collection of data, analysis of results, manuscript drafting. Andres MP and Abrão MS study design, analysis of results, manuscript drafting. Rocha TP analysis of results, manuscript drafting. Carvalho AEZ and Carvalho KI study design, collection of data. TPA Aloia collection of data. Corpa MVN collection of data, analysis of results. Kallas EG and Baracat EC study design, manuscript drafting. Mangueira LP collection of data, analysis of results.
Ethics approval
This research was approved by the institutional ethics committee of Hospital das Clinicas, Faculdade de Medicina, Universidade de São Paulo (CAPPesq 235869/13) and performed in accordance with the ethical standards of the 1964 Declaration of Helsinki.
Availability of data and material
All the data and materials used in this research are available upon request.
Code availability
Not applicable.
Consent to participate
Informed consent was obtained from all study participants.
Consent for publication
Not applicable.
Conflicts of interest
The authors declare no conflicts of interest.
Correlation of Perceived Nasality with the Acoustic Measures (One Third Octave Spectral Analysis & Voice Low Tone to High Tone Ratio)
Aim: Perceived hypernasality in speech can be evaluated using various qualitative and quantitative methods. The present study aimed to investigate, compare and correlate the acoustic parameters one third octave spectra analysis and Voice Low Tone to High Tone Ratio (VLHR) with perceived nasality in children with repaired cleft lip and palate (RCLP) and in age- and gender-matched typically developing children (TDC). Methods: The study included 73 children (47 RCLP & 26 TDC) in the age range of four to twelve years. Spontaneous speech and sentences were recorded and analyzed for nasality using a standardized perceptual four-point rating scale. Based on the severity of perceived nasality, children were divided into three groups, namely normal, mild, and moderate to severe. The production of the vowel /a/ by participants was subjected to acoustic analysis using VLHR and one third octave spectra analysis in MATLAB software. Results: The results indicated significant differences in spectral amplitude measured using one third octave spectra analysis in the high frequency region for the vowel /a/ between TDC and RCLP. The VLHR measures did not show a statistically significant difference between groups or a significant relation with perceived nasality. Conclusion: One third octave spectra analysis is an effective measure for differentiating nasality and was found to have a significant correlation with perceived nasality in children with RCLP. Hence, one third octave spectra analysis can augment the perceptual evaluation to provide additional information to arrive at a diagnosis.
Introduction
Cleft lip and palate (CLP) is a congenital disorder resulting from incomplete closure of the lip and palate. The speech of individuals with CLP is dominated by the presence of hypernasality. The development of the Nasometer by Fletcher and Bishop [1] was a major advancement that facilitated the objective analysis of nasality in speech. The Nasometer gives the parameter nasalance, which is the ratio of the nasal energy in speech to the sum of nasal and oral energy, i.e., nasalance = nasal/(nasal + oral) × 100% [2]. The Nasometer is extensively used to evaluate nasality in the speech of individuals with cleft lip and palate.
Hence, clinicians often use the Nasometer to objectively measure the percentage of the nasal component in the speech of individuals with CLP. Attempts have been made to correlate nasalance values with judgments of perceived nasality, as perceptual judgment remains the gold standard [3]. Researchers have reported varying degrees of correlation of nasalance with perceived nasality [4,5].
The discrepancies among the studies on the correlation of nasalance with perceived nasality prompted further investigations to explore the components of speech related to perceived nasality. Acoustic analysis of speech indicated increased peak amplitudes around the first formant region in the speech of individuals with repaired CLP [6-8]. Kataoka [9] investigated the variations in spectral amplitude at one third octave frequency bands; this particular bandwidth was selected because it approximates the critical bands of auditory perception [10]. Similar attempts were made to develop a quantitative index by evaluating the voice spectrum to analyse the effect of nasal obstruction by Lee, Yang, and Kuo [11]. VLHR is the ratio of the low frequency power (LPF) to the high frequency power (HPF) of the sound power spectrum, expressed in decibels [11]. The cut-off frequency dividing high and low frequencies is calculated by multiplying the fundamental frequency (F0) by the square root of 4 × 5. Lee, Wang, Yang, and Kuo [11] conducted a study measuring VLHR with a cut-off frequency of 600 Hz. They indicated higher VLHR in speech samples of children with hypernasality and found a significant positive correlation (r = 0.76, p < 0.01) of VLHR with nasalance scores.
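In compact form, the cut-off frequency and VLHR defined above read

$$f_c = F_0\sqrt{4 \times 5} = F_0\sqrt{20}, \qquad \mathrm{VLHR} = 10\,\log_{10}\!\left(\frac{\mathrm{LPF}}{\mathrm{HPF}}\right)\ \mathrm{dB}.$$

As a worked example, an assumed fundamental frequency of $F_0 \approx 134$ Hz gives $f_c \approx 134 \times 4.47 \approx 600$ Hz, the cut-off used by Lee et al. [11]; this $F_0$ value is chosen purely for illustration.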
Objective measures of speech have always remained important in augmenting perceptual evaluation. However, the use of these methods is limited by the lack of adequate published data, based on appropriate research designs, on the sensitivity and specificity of these measures. Easy-to-use objective measures need to be developed and validated to support effective clinical and empirical practice. VLHR and one third octave spectra analysis are recent advancements in the objective assessment of nasality and could effectively complement, or serve as alternatives to, traditional nasalance analysis. Therefore, the present study was taken up with the aim of evaluating the correlation of one third octave spectral analysis and VLHR with perceived nasality in children with repaired cleft lip and palate.
Objectives of the study
a) To perceptually evaluate hypernasality in the speech of children with repaired cleft lip and palate (RCLP) using a standardized rating scale.
b) To investigate and compare the one third octave spectra analysis and VLHR in children with RCLP and typically developing, age- and gender-matched children.
c) To correlate measures of One Third Octave Spectra Analysis and VLHR with the perceived nasality.
Method
The present study considered 73 Kannada-speaking children in the age range of four years seven months to twelve years. Among these, forty-seven children had misarticulations with RCLP and twenty-six were typically developing children. The demographic details are indicated in Table 1.
ii. Inclusion criteria for Group II (typically developing children): children who passed informal screening for speech and hearing disorders, and children ruled out for disability by administering the World Health Organization (WHO) checklist [12].
Perceptual Analysis
A standardized perceptual rating scale developed by Henningsson, Kuehn, Sell, Sweeney, Trost-Cardamone, and Whitehill [13] was used to evaluate perceived hypernasality. The perceptual rating classifies the data on a 4-point rating scale ranging from 0 through 3, where 0 = within normal limits (WNL), 1 = mild, 2 = moderate, and 3 = severe, reflecting increasing severity of hypernasality. Three qualified, experienced speech-language pathologists served as judges.
The stimulus used for perceptual evaluation of nasality was audio and video recordings of each participant's spontaneous speech sample, recorded with a Sony Handycam (model no. DCR-SR88). The sample consisted of spontaneous speech (on self-introduction, school, leisure activities and picture description) for a duration of three to five minutes and repetition of oronasal and oral sentences in the Kannada language [14]. The judges rated the samples for the severity of perceived nasality based on the standardized perceptual rating scale by Henningsson et al. [13]. The participants rated by the judges as mild were considered Group Ia. The participants rated as moderate or severe were together considered Group Ib.
Acoustic Measures
Instructions and Recording: The participants were shown a demonstration and instructed to phonate the steady-state vowel /a/ three times at a comfortable pitch and loudness into an omnidirectional, distortion-free iBall microphone. The one third octave spectral measures and VLHR were obtained on the middle 500 millisecond section of the sustained phonation of the vowel /a/. PRAAT software (version 8.1) was used to select the steady-state portion of the vowel, which was saved for further analysis using MATLAB software (version 7.0).
One-third Octave Spectra Analysis: The speech stimuli were analyzed in 23 one-third octave bands (over a frequency range of 100-16,000 Hz) using a digital filter designed to match the ANSI standard (ANSI S1.11). One third octave spectra analysis was calculated for frequency bands between 100 and 16,000 Hz on all samples (/a:/, /i:/, /pIt/, /tIp/). The frequency bands considered for analysis were 396 Hz, 500 Hz, 630 Hz, 793 Hz, 1000 Hz, 1259 Hz, 1587 Hz, 2000 Hz, 2519 Hz, 3174 Hz, and 4000 Hz.
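A minimal sketch of the band-energy computation described above is given below, assuming a mono waveform x sampled at fs; the function name, the toy test signal and the base-2 band-edge convention are illustrative assumptions, not the study's MATLAB implementation.

```python
# Minimal sketch: one-third octave band levels from an FFT power spectrum.
import numpy as np

def third_octave_levels(x, fs, centers):
    spectrum = np.abs(np.fft.rfft(x)) ** 2             # power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    levels = []
    for fc in centers:
        lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)  # 1/3 octave band edges
        band = (freqs >= lo) & (freqs < hi)
        power = spectrum[band].sum() + 1e-12           # avoid log(0)
        levels.append(10 * np.log10(power))            # band level in dB
    return np.array(levels)

# Band centers analyzed in the study (Hz):
centers = [396, 500, 630, 793, 1000, 1259, 1587, 2000, 2519, 3174, 4000]
fs = 44100
t = np.arange(int(0.5 * fs)) / fs                      # a 500 ms steady segment
x = np.sin(2 * np.pi * 750 * t)                        # toy vowel-like test tone
print(third_octave_levels(x, fs, centers))
```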
Voice Low Tone to High Tone Ratio (VLHR):
The low frequency power (LPF) is defined as the summation of the power from 50 Hz to 600 Hz, and the high frequency power (HPF) as the summation of the power from 600 Hz to 8063 Hz. The voice low tone to high tone ratio was obtained as 10 × log10(LPF/HPF) [15].
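The VLHR definition above translates directly into code; the sketch below assumes the same kind of 500 ms steady vowel segment as in the previous example, and the two-component test signal is hypothetical.

```python
# Minimal sketch: VLHR from the power spectrum of a sustained vowel.
import numpy as np

def vlhr(x, fs, f_cut=600.0, f_lo=50.0, f_hi=8063.0):
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    lpf = spectrum[(freqs >= f_lo) & (freqs < f_cut)].sum()   # low-tone power
    hpf = spectrum[(freqs >= f_cut) & (freqs <= f_hi)].sum()  # high-tone power
    return 10 * np.log10(lpf / hpf)                           # VLHR in dB

fs = 44100
t = np.arange(int(0.5 * fs)) / fs                  # middle 500 ms of /a/
x = np.sin(2 * np.pi * 220 * t) + 0.2 * np.sin(2 * np.pi * 2200 * t)
print(vlhr(x, fs))    # positive dB value: low-frequency power dominates
```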
Statistical Analysis
The data obtained from all these measures were subjected to appropriate statistical analysis using the Statistical Package for the Social Sciences, version 17.0 (SPSS). The normality of the data across the groups was analyzed using the Kolmogorov-Smirnov (K-S) test. Multivariate analysis of variance (MANOVA) was administered to differentiate the three groups across all the objective measures (VLHR and one third octave spectra analysis). Post hoc multiple comparison was carried out using Duncan's test following MANOVA. Descriptive statistics were used to group the data based on the perceptual rating assigned. Cronbach's alpha coefficient and Spearman rank-order correlation were used to analyze reliability and correlation, respectively. Perceived hypernasality was rated on a four-point rating scale for spontaneous speech, oral and oronasal sentences for the forty-seven children with RCLP. The stimuli were rated by three judges, and a consensus agreement by any two out of three judges on a stimulus parameter was used to group the participants. Twenty-three were rated as mildly hypernasal, sixteen as moderately hypernasal, and eight as severely hypernasal. Table 2 depicts the distribution of the participants based on nasality across the stimuli.
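Two of the statistics named above, Cronbach's alpha and the Spearman rank-order correlation, are easy to reproduce; the sketch below uses a hypothetical ratings matrix (rows = speech samples, columns = judges) and hypothetical acoustic values, since the study's raw data are not shown.

```python
# Minimal sketch: Cronbach's alpha across judges and a Spearman correlation.
import numpy as np
from scipy import stats

def cronbach_alpha(ratings):
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                         # number of judges
    item_vars = ratings.var(axis=0, ddof=1)      # variance of each judge
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of summed ratings
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 0-3 hypernasality ratings from three judges:
ratings = np.array([[0, 1, 1], [2, 2, 3], [1, 1, 1],
                    [3, 2, 3], [0, 0, 1], [2, 3, 2]])
print(cronbach_alpha(ratings))

# Spearman correlation between perceived nasality and an acoustic measure:
nasality = ratings.mean(axis=1)
acoustic = np.array([-1.2, 3.5, 0.8, 4.1, -0.5, 2.9])  # hypothetical dB values
rho, p = stats.spearmanr(nasality, acoustic)
print(rho, p)
```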
Inter and intra judge reliability measures
The inter-judge reliability of the perceptual evaluation of hypernasality was assessed using Cronbach's alpha coefficient for the entire sample across the groups with RCLP. The coefficients were 0.72, 0.79 and 0.83 for spontaneous speech, oral sentences and oronasal sentences, respectively. The intra-judge reliability of the perceptual evaluation of hypernasality was assessed for 25% of the entire sample across the groups and ranged from 0.72 to 0.92 across the stimuli, as shown in Table 3. High reliability ratings were obtained for oral sentences, and similar alpha coefficients were obtained for oronasal sentences and spontaneous speech.
Discussion
In clinical investigations of hypernasality, perceptual evaluation is considered the gold standard alongside objective measures [3]. Based on perceptual judgment, the children were divided into normal, mild, and moderate to severe hypernasal groups. The intra- and inter-judge reliability measures denote the agreement among the ratings given by the various judges [16]. In the present study, inter-judge reliability of the perceptual rating was similar for oral (0.79) and oronasal (0.83) sentences, followed by spontaneous speech (0.72). The results indicated lower reliability for spontaneous speech than for oral and oronasal sentences. This can be attributed to high fluctuations in the acoustic properties of frequency and amplitude during spontaneous speech; the constant change in these properties makes it difficult to judge nasality. Similar reliability ratings across stimuli were reported by Vogel et al. [17], who indicated good inter-rater agreement between judges with an overall score of 0.78 for various oral and nasal passages. Differences in reliability measures with stimulus variations were also documented by Watterson et al. [18].
The intra-judge reliability for spontaneous speech, oral sentences, and oronasal sentences ranged from 0.78 to 0.92, 0.80 to 0.88, and 0.72 to 0.79, respectively. The variations in intra-judge reliability can be attributed to the difficulty of rating a standard speech sample for a given speaker: listeners assign different degrees of hypernasality to the various points of the rating scale, and since this is an arbitrary judgment, individual variations arise. The findings of Vogel et al. [17] indicated intra-rater reliability ranging from 0.66 to 0.91 for passages with varying proportions of nasal phonemes. The reduced reliability ratings can be attributed to the difficulty in judging hypernasality, as speech is a multidimensional task [18]. Another study, by Tsai [19], reported intra-rater reliability of two judges for spontaneous speech of 0.74 and 0.90, and inter-rater reliability of 0.91. The differences across the studies were attributed to methodological variations: the ratings of the study by Tsai [19] were based on two judges, and the rating scale used was a visual analogue scale ranging from 0 mm, indicating "no nasal resonance," to 100 mm, representing "the most nasal resonance."
The intra-judge reliability measures in the present study are similar to the findings of Kataoka et al. [20], who reported that intra-judge reliability of hypernasality ratings by experienced listeners and graduate students ranged from 0.77 to 0.88 and 0.70 to 0.89, respectively. These results are in close approximation with the present study, which can be attributed to the similarity in the procedure followed for perceptual evaluation using an equal-appearing interval rating scale. (Group Ia: mild hypernasal group; Group Ib: moderate to severe hypernasal group; Group II: typically developing children.)
A. Mean of one third octave spectra analysis of vowel /a/ across the groups: Overall, the groups exhibited increased energy concentration at 1000 Hz, 1259 Hz, and 1587 Hz compared with the other frequency regions computed, although, in general, minimal differences in energy concentration across the groups were observed for all frequencies. Spectral amplitude at mid and high frequencies was higher in TDC than in RCLP, except at 2000 Hz and 2519 Hz, as shown in Figure 1. MANOVA results indicated significant differences in one third octave spectral measures for the vowel /a/ at 1000 Hz, 1587 Hz, and 4000 Hz across the groups at the p < 0.05 level of significance. The post hoc analysis revealed significant differences in energy concentration at 1000 Hz and 1587 Hz for the vowel /a/ between the mild hypernasal group and TDC.
B. Correlating the one third octave spectra analysis of /a/ with perceived nasality:
The correlation coefficients were significant at p < 0.05 for the vowel /a/ at 3174 Hz (−0.23) and 4000 Hz (−0.30). These significant, negative coefficients indicate a modest association between perceived nasality and one third octave spectral energy. Table 4 depicts the mean and standard deviation of VLHR for the vowel /a/ across the groups. Increased VLHR measures were observed for children with RCLP (Ia & Ib) compared with TDC; the highest VLHR of /a/ was exhibited by the mild hypernasal group, followed by the moderate to severe hypernasal group and the control group. MANOVA was performed to test for statistically significant differences across the three groups over the dependent variables, and the results indicated no significant differences in VLHR measures across the groups (/a/: F(2,70) = 2.82, p > 0.05).
B. Correlating the VLHR measures with perceived nasality:
The relation between perceived nasality exhibited by children with RCLP and TDC and the VLHR measures for the vowel /a/ was evaluated using the Spearman correlation coefficient. The results indicated a correlation coefficient of 0.084 (p > 0.05); VLHR measures were not significantly correlated with perceived nasality, indicating a poor correlation between VLHR and perceived nasality.
Discussion
Spectral analysis has been used to explore the acoustic properties of speech in individuals with cleft lip and palate [21]. In the present study, acoustic analysis of speech using one-third octave spectra analysis and VLHR measures was investigated in children with RCLP and in a control group. The review of literature indicated that nasalization cannot be measured accurately using formant analysis alone, specifically in the presence of high fundamental frequency [20]. The shape of the entire spectral envelope is more important for vowel perception than the frequency and amplitude of the spectral peaks alone. Therefore, one third octave spectral analysis, which evaluates the overall spectral envelope, has a theoretical advantage in analyzing hypernasal vowels.
Another advantage of using one third octave spectra analysis is that the 1/3rd octave bandwidth matches the critical band analyzed by the ear for the perception of speech [10]. Hence, it can be postulated to correlate well with the perceptual analysis of nasalization. One third octave spectral energy across frequencies ranging from 100 Hz to 16,000 Hz was calculated in the present study. However, the literature reports [22] that significant changes in the spectral amplitude of speech with hypernasality are evident in the frequency bands from 396 Hz to 4000 Hz. Hence, in the present study we restricted the evaluation of spectral amplitudes to 396 Hz to 4000 Hz for the final analysis.
The results of the present study indicated divergent outcomes across the stimuli for all the groups. There was no specific trend from which to conclude on the effect of hypernasality on the spectral amplitude of the vowel /a/. Among the groups, the differences in spectral amplitude from TDC were relatively low in children with mild hypernasality compared with moderate to severe hypernasality. There is an increase in spectral amplitude at low and mid frequencies and a reduction at high frequencies with increasing perceived nasality. This can be attributed to the increase in the velopharyngeal gap, which can lead to an increased perception of nasality. The increase in the cross-sectional area of the velopharyngeal opening can lead to a shift in the frequency of the first formant and increased formant bandwidth. The energy concentrated at particular frequencies is indicated as formants. In the first formant region, a pole-zero pair is added, and the separation between the pole and zero increases with the velopharyngeal gap; an additional pole appears as a spectral prominence as the VP gap increases. The results are in agreement with the findings of Vogel et al. [17], who reported higher spectral amplitude at low and mid frequency bands from 476 Hz to 1200 Hz in hypernasal speakers. They also stated that significant differences in one third octave spectra analysis were found only between the severe hypernasal and control groups.
Additional spectral peaks around the first formant (F1) were noticed only in the moderate to severe hypernasal group, and the absence of these peaks indicates reduced hypernasality in the mild hypernasal group. These researchers reported that participants with hypernasal speech exhibited increased spectral amplitude between the first and second formants, around 1 kHz, and decreased amplitude between the second and third formants.
The spectral change over the duration of the vowel was considered a coexisting speech characteristic that influenced the degree of hypernasality perceived. Hence, another acoustic measure based on spectral energy, VLHR, was considered for investigation. The results of the present study also indicated an increased VLHR measure for the vowel /a/ in children with RCLP compared with the control group. The reduced VLHR in the control group can be attributed to increased spectral energy in the high frequency regions compared with hypernasal speakers, owing to the presence of antiformants toward the high frequency regions in hypernasal speech. Reduced spectral energy between F2 and F3 was also reported by Yoshida et al. [23] and Vogel et al. [17]. A study by Lee et al. [24] indicated decreased high frequency energy (anti-resonance) relative to low frequencies for nasal voices, differentiating them significantly from the acoustic characteristics of the speech of healthy controls. However, in the present study, the difference in spectral amplitude across the groups was not statistically significant.
The correlation analysis in the present study also indicated no significant relation between the VLHR measures and perceived nasality. The VLHR measures are based on the sum of the amplitudes in the spectrum. The spectral amplitudes can also be affected by variations in the frequency-domain characteristics of voice in nasalized speech, such as a reduction in the intensity of the first formant, the presence of extra resonances, and increased bandwidth of formants [25]. The formants can vary with the position of the articulators, particularly the tongue [26]. The results are in accordance with the findings of Vogel et al. [17], who also reported no significant differences in VLHR measures between children with hypernasality and typically developing children in the age range of 4 to 12 years, using a cut-off frequency of 600 Hz. On the contrary, a few studies measuring VLHR in the adult population with hyponasality [11] showed a significant difference between hyponasal and control groups. The differences among the studies can be attributed to methodological differences with respect to subject selection, cut-off frequency and the procedure for measuring VLHR [27].
Conclusion
The present study evaluated and correlated measures of hypernasality in children with RCLP and TDC based on a perceptual rating scale and objective measures (one third octave spectra analysis & VLHR). In one third octave spectra analysis, the children with hypernasality exhibited significantly less spectral energy at high frequencies than the control group. Increased VLHR was observed in the hypernasal groups compared with the control group; however, the differences across the groups were not significant. These are easy-to-use objective measures that can augment perceptual evaluation, along with other quantitative measures, for diagnosis and for evaluating the efficacy of various treatment techniques.
Interval From Simulation Imaging to Treatment Delivery in SABR of Lung Lesions: How Long is Too Long for the Lung?
Purpose The purpose of this study was to evaluate the effect of the delay between the planning computed tomography (CT) used as a basis for treatment planning and the start of treatment (delay planning treatment [DPT]) on local control (LC) for lung lesions treated by SABR. Methods and Materials We pooled 2 databases from 2 monocentric retrospective analyses previously published and added planning CT and positron emission tomography (PET)-CT dates. We analyzed LC outcomes based on DPT and reviewed all available confounding factors among demographic data and treatment parameters. Results A total of 210 patients with 257 lung lesions treated with SABR were evaluated. The median DPT was 14 days. Initial analysis revealed a discrepancy in LC as a function of DPT, and a cutoff delay of 24 days (21 days for PET-CT, almost systematically done 3 days after the planning CT) was determined according to the Youden method. A Cox model was applied to several predictors of local recurrence-free survival (LRFS). Univariate analysis showed LRFS decreasing significantly with DPT ≥24 days (P = .0063), gross tumor volume and clinical target volume (P = .0001 and P = .0022), and also with the presence of >1 lesion treated with the same planning CT (P = .024). LRFS increased significantly with higher biological effective dose (P < .0001). On multivariate analysis, LRFS remained significantly lower for lesions with DPT ≥24 days (hazard ratio, 2.113; 95% confidence interval, 1.097-4.795; P = .027). Conclusions A long DPT before SABR delivery for lung lesions appears to reduce local control. The timing from imaging acquisition to treatment delivery should be systematically reported and tested in future studies. Our experience suggests that the time from planning imaging to treatment should be <21 days.
Introduction
Time plays a crucial role in radiation oncology (RO), sometimes in unexpected ways (FLASH-radiation therapy [RT], 1 chrono-RT 2 ). In the treatment of cancer, by definition a progressive disease, avoidance of delays is essential. 3,4 This was recently challenged during the COVID-19 pandemic. 5 In SABR, it is obvious that the delay between the planning computed tomography (CT) used as a basis for treatment planning and the start of treatment (delay planning treatment [DPT]) must be as short as possible. This reduces changes in the lesion (size and shape) to be treated and/or the patient's anatomy, thus increasing the precision of the delivered treatment. DPT is also a required period for target volume and organs-at-risk identification, treatment plan preparation and pretreatment quality control. 6 Causes of a long DPT are numerous and not fully discussed in this article, such as the complexity of treatment plans, the increased demand for SABR, 7 treatment machine breakdown and the patient's intercurrent pathologies. At the level of the treatment team, both oversight and staffing problems can play a role.
Ongoing trials are investigating the efficacy of SABR for oligometastatic disease in up to 10 lesions. 8 If it is not possible to treat all lesions in the same treatment session, the choice of the best sequence (simultaneous, alternating, sequential) is still partly unknown. 9 In case of sequential treatment, the DPT for the last treated lesion could become too long.
But "how long is too long"? For brain metastases RT, a retrospective analysis addressed this question and suggested a maximal delay of 14 days between the MRI scan and the start of stereotactic radiation surgery (SRS). 10 Moreover, a prospective analysis of 69 lesions (including 15 resection cavities) found that in 46% of cases, an interval of <7 days between the planning MRI and a second MRI performed 24 hours before the treatment required a replanning. This percentage increased to 62% of cases with an interval between 8 and 14 days. 11 Although it is reasonable to suppose that replanning does not always mean better local control (LC), this increase in rates remains questioning.
To the best of our knowledge, there are no specific data on DPT in SABR for pulmonary lesions. Moreover, study protocols like the Radiation Therapy Oncology Group (RTOG) 0915 study for primary lung lesions 12 and the SABR-COMET studies for secondary lesions 8,13,14 do not report these delays. A recent American Society for Radiation Oncology white paper on the safety of SRS/SABR reiterates the need to specify these temporal criteria in trials and recommends not exceeding a DPT of 14 days for SRS. 15 To assess the effect of DPT on LC for lung lesions, we retrospectively analyzed patients treated by SABR with the CyberKnife (CK) system. Our hypothesis was that a long DPT would have a negative effect on LC independently of other variables such as volume or prescription dose.
Patient selection
In 2020, Berkovic et al published the results of a monocentric retrospective analysis of 104 patients and 132 metastatic lung lesions treated with SABR on CK in the setting of oligorecurrent disease between May 2010 and March 2016. 16 In 2017, Janvary et al published a retrospective analysis of 130 patients and 160 lung lesions (primary, recurrent, or metastatic) treated consecutively with SABR at the same center and on the same treatment machine between April 2010 and June 2012. 17 We pooled these 2 databases, removed duplicates by keeping the lesion with the longest follow-up, and added planning CT and positron emission tomography (PET)-CT dates. For the few identified conflicting data, a review of the institutional records of the patients was performed.
Two patients with lung metastases arising from adenoid cystic carcinoma of the salivary glands were removed because the slow progression of this disease could limit the effect of a large DPT on LC and complicate follow-up. Ultimately, 210 patients and 257 lung lesions were analyzed.
Statistical analyses
DPT duration was calculated from existing database items. Results were expressed as means with standard deviations or medians (Q1-Q3) for quantitative variables and as numbers and percentages for categorical variables. To determine the best cutoff value for DPT based on LC, we used the Youden method. Means between the 2 groups thus defined were compared with a Student t test and proportions with a χ² test. To normalize their distribution, some variables were log-transformed. Local recurrence-free survival (LRFS) was examined using Cox regression models. A multivariate model with stepwise selection was also applied. The hazard ratio (HR) and its 95% confidence interval were reported. LRFS was plotted using Kaplan-Meier curves. Results were considered significant at the 5% significance level (P < .05). All statistical analyses were carried out with SAS version 9.4 (SAS Institute, Cary, NC) and figures with R version 4.1.1.
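A sketch of the two key steps, the Youden-index cutoff and the Cox model, is given below in Python with synthetic data; the column names are hypothetical and the original analysis was run in SAS 9.4, so this illustrates the method rather than reproduces it.

```python
# Sketch: Youden cutoff and Cox regression on synthetic data.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "dpt_days": rng.integers(5, 45, 257),       # delay planning-treatment
    "time_months": rng.exponential(24, 257),    # follow-up time
    "recurred": rng.integers(0, 2, 257),        # local recurrence indicator
})

# Youden method: threshold maximizing sensitivity + specificity - 1.
fpr, tpr, thresholds = roc_curve(df["recurred"], df["dpt_days"])
cutoff = thresholds[np.argmax(tpr - fpr)]
df["dpt_long"] = (df["dpt_days"] >= cutoff).astype(int)

# Cox proportional hazards model for local recurrence-free survival.
cph = CoxPHFitter()
cph.fit(df[["time_months", "recurred", "dpt_long"]],
        duration_col="time_months", event_col="recurred")
cph.print_summary()   # hazard ratio and 95% CI for the dichotomized DPT
```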
Demographic data and treatment parameters
A total of 210 patients and 257 treated lesions were included in the analysis. Key demographic and treatment data are available in the source articles. Initial analysis revealed a discrepancy in LC as a function of DPT. According to the Youden method, a cutoff of 24 days was set. This cutoff shows very good specificity (88.1%) but low sensitivity (25.0%). Demographic and treatment parameters are reported in Table 1 in each arm: DPT <24 days (arm A) and DPT ≥24 days (arm B).
A total of 219 (85.2%) lesions were treated in arm A and 38 in arm B. There was no difference between the 2 groups in age, gross tumor volume (GTV) and planning target volume (PTV), biological effective dose (BED), number of fractions, or percentage of treatment of primary or secondary lesions. However, the tracking technique (spine tracking versus real-time tumor tracking) was used more frequently in the short-delay group (P = .042). Regarding SABR of metastatic lesions, there was no difference in the percentage of pulmonary, digestive, or other primary origin. There was no difference in the percentage of use of previous chemotherapy or radiation therapy. Treatment of multiple lesions with the same planning CT (≥2) was significantly more frequent in arm B (≥24 days; P = .0021). In both arms, the treated volumes were determined using the same margins: 3 mm from GTV to clinical target volume (CTV).
Delay planning treatment
The median time from planning CT to the first day of treatment was 14 days (Q1-Q3, 11-19 days). Figure 1 shows the frequency histogram of DPT expressed in days. Almost all patients had the planning PET-CT 3 days after the planning CT, and the histogram is simply shifted by 3 days. This excludes the delay "PET to treatment" as a confounding factor, and this delay is therefore not considered further. (Throughout, BED is computed with α/β = 10 Gy.)
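For reference, the biological effective dose used throughout follows the standard linear-quadratic expression

$$\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right), \qquad \alpha/\beta = 10\ \mathrm{Gy},$$

where $n$ is the number of fractions and $d$ the dose per fraction. As an illustrative example (not a schedule from this cohort), 3 fractions of 18 Gy give $\mathrm{BED} = 54 \times (1 + 18/10) = 151.2$ Gy.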
LC and LRFS by DPT
The median DPT was 13 days (11-18 days) for locally controlled lesions versus 14 days (12-23 days) for locally recurrent lesions. Using the Cox model, the risk of LR increased significantly with DPT, with an HR of 2.11 (P = .029). The Cox model was applied to several predictors of LRFS (Table 2). Univariate regression analysis was performed for different variables of interest, such as time course (DPT and cutoff delay), BED, GTV, PTV, and the presence of tumor real-time tracking or prior cytotoxic treatment (chemotherapy or radiation therapy).
Survival curves for the local recurrence event
Kaplan-Meier curves for local recurrence-free survival (Fig. 2) showed significantly greater local control for lesions with DPT <24 days (P = .0063).
Discussion
DPT is an important period in RO. Obviously, this delay should be as short as possible without compromising the quality of the process leading up to the treatment (a principle that could be called ASASA: as soon as safely achievable). To reduce this period, online adaptive radiation therapy is a promising concept, but technical and clinical challenges remain. 18 To date, little is known about the safe maximal time interval between the planning CT and the start of treatment in SABR of primary and secondary lung lesions. Some guidelines recommend a maximum DPT of 14 days. This cutoff is based on retrospective data from the treatment of brain metastases with SRS. 15 There are no clear recommendations for lung lesions, even in study protocols addressing this technique. 8,12-14 Our retrospective analysis confirms the effect of DPT on LC for primary and secondary lung lesions. This effect withstands multivariate analysis including BED and tumor/target volume, both known to affect LC. Other available variables were also tested but did not significantly affect LC, such as the tracking method or previous chemotherapy/radiation therapy. The presence of tracking allows us to assess whether the decrease in LC could be due to anatomic changes of the patient that make spinal tracking of CK less efficient. Previous cancer treatment could be a marker of radioresistance and/or rapid repopulation, which would explain lower LC. 19 Determining a cutoff DPT by prospective testing of this delay is undesirable for obvious ethical considerations and has no clinical basis. The only way to investigate DPT is therefore with retrospective data. In our analysis, a cutoff of 24 days was defined based on the Youden method. This cutoff shows good specificity but poor sensitivity, which is not surprising because a short delay does not guarantee LC. It allows us to define 2 arms and observe a significant decrease in LC for patients in arm B with DPT ≥24 days. This is of course a maximum time frame beyond which a new simulation should be considered. With the 3-day gap between the planning CT and PET-CT, we recommend a maximum of 21 days between imaging used for treatment planning and the start of treatment. The recommended 14 days seem to be a clinically relevant choice while remaining pragmatic. 13
Several hypotheses can explain the decrease in LC with DPT observed in this study. The main one concerns a geometric miss. Lesions can change both in volume and shape over time, explaining the decrease in LC. This problem could be exacerbated by the current "2.5D" image guidance during RT by the CK system, which makes volumetric assessment of the lesion difficult. A full soft-tissue 3-dimensional imaging system and a medical procedure to inspect these images at each fraction would not solve all problems (because microscopic disease is not considered) but could already detect (large) macroscopic changes. 20 The evolution of neoplastic lesions remains very heterogeneous. It is known that tumor volume/growth plays a role in the probability of tumor control. 21,22 This raises the question of the appropriate DPT depending on the type of lesion, histology, or previous treatments. For example, in primary lung tumors, Murai et al retrospectively analyzed the progression of stage I non-small cell lung cancers treated with SABR between diagnostic CT and planning CT. They showed a 2-fold longer doubling time for adenocarcinomas compared with squamous cell carcinomas. 23 Another cause of geometric miss could be an anatomic change around the lesion. Concerning this geometric miss, the systematic CTV margin of 3 mm used for all of our patients could have compensated for some modifications, thus underestimating the effect of the delay. In most of the other series assessing SABR for lung lesions, the principle of GTV = CTV is used. 8,12-14
Limitations
Several limitations exist in this study. First, it is an aggregation of retrospective studies with a sometimes limited follow-up period. This can lead to inexact results or lack of robustness. To evaluate this problem, a rapid update of our LC data based on available institutional imaging and pathology follow-up protocols was made. Ten additional LR in arm A (DPT <24 days) versus 7 in arm B (DPT ≥24 days) were found (unpublished data). Although this information is basically crude, it seems relevant to us given the relatively short follow-up time of the 2 source studies (median follow-up time of <2 years, 66 lesions <1 year). We note that 10 of the 17 identified recurrences occurred in the first year after SABR. This update increases the significance of all tests performed in this study. For example, the HR increases from 2.11 (P = .029) to 2.94 (P = .0003) with a median DPT of 13 days (11-18 days) and 14 days (12-24 days), depending on LC and LR, respectively. To test the robustness of the analysis, we performed an identical analysis with the extreme values of DPT (over 5 weeks) removed. In this scenario, the cutoff delay remains at 24 days and the statistical analysis at this cutoff value remains significant.
A second limitation of this study concerns numerous confounding factors. We have seen that the planning PET-CT was almost systematically done 3 days after the simulation CT, so this was not directly considered a factor. Another potential confounding factor could be that a longer DPT is associated with a more complicated treatment plan, which can be associated with a lower BED. A correlation test between the 2 variables showed a negative correlation but remained nonsignificant (P = .11). Furthermore, multivariate analysis accounting for BED and the cutoff delay remained significant for the latter.
Regarding the dose, volume and type of the lesions, the article by Janvary et al showed better LC for smaller tumor volume, higher BED and for primary tumors compared with metastases. 17 Berkovic et al showed better LC for smaller tumor volume, higher BED and for metastases of digestive origin compared with the "other" groups, 16 despite conflicting data in the literature. 24 In our study, these different factors were well distributed between both arms.
For metastatic lesions, previous systemic treatments with chemo-, immuno- or targeted therapies may act as radiomodulating agents and affect LC. Only information about previous chemotherapy was available, and it was well balanced between both arms.
Another possibility would be a greater radiation resistance of lesions with a larger DPT, either acquired during this period (very hypothetical) or related to the fact that more than one lesion is more often treated in the case of a large DPT. Irradiation of multiple lesions is certainly the most important confounding factor. It can be considered a cause of delay, as it was more frequent in arm B (DPT ≥24 days; P = .0021). However, a DPT ≥24 days remained significant even after adjusting for irradiation of multiple lesions. Identification of other causes of delay was not the aim of this article; some of these can also be considered confounders (eg, deterioration of patients with a change/disruption in breathing pattern).
Finally, 2 more arguments illustrate the complexity of the situation and the importance of DPT. These aspects are not discussed in this article but support our conclusions. First, cancer treatment care delay is a well-known problem. 4 Delay may have an effect on the distant progression of the disease, especially if RT requires the therapeutic window of systemic treatments. It may also necessitate restaging and a different therapeutic approach. Second, in addition to macroscopic geometric miss, tumor change during the delay may result in a lower and less-uniform dose outside the (true) GTV and underdosage of microscopic disease, with the risk of local and distant recurrence. 25,26
Conclusions
SABR of lung lesions is now part of routine clinical practice in many radiation therapy centers. The maximum DPT that avoids compromising LC is not known, and limited data are available. Our experience reflects the period of the introduction of SABR in our department, with some longer delays, and thus provides a unique opportunity to assess this issue. This monocentric retrospective study shows that a cutoff of 24 days defines 2 groups of patients with different outcomes in terms of LC. A new planning CT should be considered after a maximum period of 3 weeks (ideally 2 weeks) between the planning CT and the start of the treatment. Until adaptive online radiation therapy becomes fully integrated into daily practice, the DPT should be systematically reported and tested in future studies.
Insights into nature of magnetization plateaus of a nickel complex [Ni4(CO3)2(aetpy)8](ClO4)4 from a spin-1 Heisenberg diamond cluster
Magnetic and magnetocaloric properties of a spin-1 Heisenberg diamond cluster with two different coupling constants are investigated with the help of an exact diagonalization based on Kambe's method, which employs a local conservation of composite spins formed by spin-1 entities located in opposite corners of a diamond spin cluster. It is shown that the spin-1 Heisenberg diamond cluster exhibits several intriguing quantum ground states, which are manifested in low-temperature magnetization curves as intermediate plateaus at 1/4, 1/2 and 3/4 of the saturation magnetization. Besides, the spin-1 Heisenberg diamond cluster may also exhibit an enhanced magnetocaloric effect, which may be relevant for low-temperature refrigeration achieved through adiabatic demagnetization. It is evidenced that the spin-1 Heisenberg diamond cluster with the antiferromagnetic coupling constants J1/kB = 41.4 K and J2/kB = 9.2 K satisfactorily reproduces the low-temperature magnetization curve recorded for the tetranuclear nickel complex [Ni4(CO3)2(aetpy)8](ClO4)4 (aetpy = 2-aminoethyl-pyridine), including the size and position of the intermediate plateaus detected at 1/2 and 3/4 of the saturation magnetization. The microscopic nature of the experimentally observed fractional magnetization plateaus is clarified and interpreted in terms of a valence-bond crystal with either a single or double valence bond. It is suggested that this frustrated magnetic molecule can provide a prospective cryogenic coolant with the maximal isothermal entropy change −ΔS = 10.6 J/(K·kg) in the temperature range below 2.3 K.
I. INTRODUCTION
Molecular-based magnetic materials have attracted considerable research interest over the past few decades, because they provide prospective building blocks for the development of a new generation of nanoscale devices with a broad application potential 1-4. Small magnetic molecules composed of a few exchange-coupled spin centers might, for instance, serve for the rational design of high-density storage devices 5 and various spintronic devices 6-8. Another intriguing feature of a special class of molecular magnetic materials with extremely slow magnetic relaxation, commonly referred to as single-molecule magnets, is their possible implementation in developing novel platforms for quantum computation and quantum information processing 9-15.
The appearance of plateaus in low-temperature magnetization curves of molecular magnetic materials at rational values of the magnetization represents another fascinating topical issue of current research interest, which can be easily validated experimentally owing to the recent development of high-field facilities 16-23. The magnetization plateaus often bear evidence of unconventional quantum states of matter theoretically predicted by the respective quantum Heisenberg spin models (see Ref. 24 and references cited therein). It should be pointed out, however, that the underlying mechanism for the formation of an intermediate magnetization plateau does not necessarily need to be of a purely 'quantum' origin, but may sometimes have a 'classical' character. A 'classical' plateau is a simple adiabatic continuation of a commensurate classical spin state realized in the Ising limit, which is of course subject to a quantum reduction of the local magnetization caused by quantum fluctuations, while a purely 'quantum' plateau relates to a massive quantum spin state with an energy gap that does not have any classical counterpart 24-28.
Naturally, the most comprehensively understood nowadays are the rational magnetization plateaus of the simplest molecular materials, which consist of well isolated magnetic molecules involving just a few spin centers coupled through antiferromagnetic exchange interactions. High-field measurements performed at sufficiently low temperatures have, for instance, evidenced the presence of intermediate magnetization plateau(s) for the dinuclear nickel complex {Ni2} as an experimental realization of the spin-1 Heisenberg dimer 29-31, the dinuclear nickel-copper complex {NiCu} as an experimental realization of the mixed spin-(1,1/2) Heisenberg dimer 32, the trinuclear copper {Cu3} and nickel {Ni3} complexes as experimental realizations of the spin-1/2 and spin-1 Heisenberg triangles 33-35, the oligonuclear compound {Mo12Ni4} as an experimental realization of the spin-1 Heisenberg tetrahedron 36-39, the pentanuclear copper complex {Cu5} as an experimental realization of the spin-1/2 Heisenberg hourglass cluster 40,41, and the hexanuclear vanadium complex {V6}.
A full energy spectrum can be obtained from Eq. (3) after considering all available combinations of the quantum spin numbers $S_{12} = 0, 1, 2$ and $S_{34} = 0, 1, 2$ together with the composition rules for the total spin angular momentum, $S_T = |S_{12} - S_{34}|, |S_{12} - S_{34}| + 1, \ldots, S_{12} + S_{34}$, and its z-component, $S_T^z = -S_T, -S_T + 1, \ldots, S_T$, according to the Kambe coupling scheme 61,62. For completeness, all energy eigenvalues assigned to the allowed combinations of the quantum spin numbers $S_T$, $S_{12}$, $S_{34}$ and $S_T^z$ are listed in Tab. I. At this stage, it is quite straightforward to obtain from the full energy spectrum quoted in Tab. I an exact result for the partition function of the spin-1 Heisenberg diamond cluster, $Z = \mathrm{Tr}\, e^{-\beta \hat{H}} = \sum_{i=1}^{81} e^{-\beta E_i}$, with $\beta = 1/(k_B T)$ ($k_B$ is Boltzmann's constant and $T$ is the absolute temperature); the resulting closed-form expression, Eq. (4), is lengthy and is not reproduced here. The magnetization per spin can subsequently be obtained from the associated Gibbs free energy $G = -k_B T \ln Z$ by making use of the formula $m = -\frac{1}{4}\,\frac{\partial G}{\partial h} = \frac{Z_h}{4Z}$, where $Z_h \equiv \partial Z/\partial(\beta h)$.
TAB. I: Energy eigenvalues of the spin-1 Heisenberg diamond cluster for all allowed combinations of the quantum spin numbers S_T, S_12, S_34 and S_T^z.

The magnetic molar entropy of the spin-1 Heisenberg diamond cluster can be similarly obtained from the exact result (4) for the partition function according to the formula S_m = N_A k_B [ln Z + (T/Z) ∂Z/∂T] = R [ln Z + (T/Z) ∂Z/∂T], where N_A and R stand for Avogadro's constant and the universal gas constant, respectively. It should be mentioned that the final formula for the temperature derivative of the partition function is too lengthy to write down here explicitly.
III. THEORETICAL RESULTS
In this part, we will proceed to a comprehensive analysis of the most interesting results for the ground state, magnetization curves and magnetocaloric properties of the spin-1 Heisenberg diamond cluster. The ground-state phase diagram of the spin-1 Heisenberg diamond cluster is displayed in Fig. 2 in the J_2/|J_1| - h/|J_1| plane for two particular cases, which differ from one another in the antiferromagnetic (J_1 > 0) vs. ferromagnetic (J_1 < 0) character of the coupling constant along the shorter diagonal of the diamond spin cluster. One finds by inspection eight different ground states unambiguously given by the eigenvectors |S_T = S_T^z, S_12, S_34⟩, which are classified through a set of quantum spin numbers: the total spin and its z-component, which are equal (S_T = S_T^z) within all ground states, as well as the two composite spins S_12 and S_34 formed by the spin-1 entities from opposite corners of the diamond spin cluster. Within the framework of the Kambe coupling scheme [61,62], it is convenient to first express the relevant ground states as a linear combination over tensor products of eigenvectors of the two considered spin pairs, |S_T, S_12, S_34⟩ = Σ_i a_i |S_12, S_12^z⟩ ⊗ |S_34, S_34^z⟩, before writing them more explicitly as a linear combination over spin states of the usual Ising basis, |S_T, S_12, S_34⟩ = Σ_i b_i |S_1^z, S_2^z, S_3^z, S_4^z⟩. The exact formulas for the eigenvectors |S_12, S_12^z⟩ and |S_34, S_34^z⟩ of the spin-1 Heisenberg dimers are not quoted here explicitly, because they can be found in our preceding work [30].
FIG. 2:
The ground-state phase diagram of the spin-1 Heisenberg diamond cluster in the J_2/|J_1| - h/|J_1| plane for two particular cases: (a) the antiferromagnetic interaction J_1 > 0; (b) the ferromagnetic interaction J_1 < 0. The eigenvectors |S_T = S_T^z, S_12, S_34⟩ are specified according to the quantum spin numbers determining the total spin and its z-component (S_T = S_T^z), as well as the two composite spins S_12 and S_34 formed by spin-1 entities from opposite corners of the diamond spin cluster.
Typical isothermal magnetization curves of the spin-1 Heisenberg diamond cluster are plotted in Fig. 4 for the antiferromagnetic interaction J_1 > 0 and a few selected values of the interaction ratio J_2/J_1, in order to provide an independent check of all possible magnetization profiles and field-driven phase transitions. It should be emphasized that the magnetization curves calculated at the lowest temperature k_B T/J_1 = 0.01 are strongly reminiscent of zero-temperature magnetization curves with discontinuous magnetization jumps, which take place at the aforementioned critical magnetic fields in agreement with the ground-state phase diagram shown in Fig. 2(a). Note furthermore that rising temperature causes just a gradual smearing of the relevant magnetization curves. The first particular case, shown in Fig. 4(a) for the interaction ratio J_2/J_1 = -1.25 with the dominant ferromagnetic interaction along the sides of the diamond spin cluster, illustrates a smooth magnetization curve without any intermediate plateau. The second particular case, with the weaker ferromagnetic interaction J_2/J_1 = -0.75, shows an abrupt rise of the magnetization in the vicinity of zero magnetic field, which is subsequently followed by an intermediate 3/4-plateau ending just at the saturation field [see Fig. 4(b)]. It is noteworthy that the intermediate 3/4-plateau, as well as the steep rise of the magnetization close to the saturation field, is gradually smeared out upon increasing temperature. Magnetization curves with a steep rise of the magnetization followed by intermediate 1/2- and 3/4-plateaus are depicted in Fig. 4(c) for the specific value of the interaction ratio J_2/J_1 = 0.25. The magnetization curves of the spin-1 Heisenberg diamond cluster displayed in Fig. 4(d) for the higher value of the interaction ratio J_2/J_1 = 0.5 indicate the presence of intermediate 1/4-, 1/2- and 3/4-plateaus, which follow the initial abrupt rise of the magnetization observable near zero magnetic field. It should be stressed, moreover, that the narrowest 1/4-plateau already becomes indiscernible at the relatively low temperature k_B T/J_1 ≈ 0.1 due to its tiny energy gap. The magnetization curves for the last two values of the interaction ratio, J_2/J_1 = 0.75 and 1.25, which are plotted in Fig. 4(e) and (f), additionally involve a zero magnetization plateau; the mechanism for the formation of the magnetization plateaus is shared just for the zero plateau, while the microscopic nature of all other magnetization plateaus is completely different, as evidenced by the ground-state phase diagram shown in Fig. 2(a).

The isothermal entropy change of the spin-1 Heisenberg diamond cluster invoked by the change of magnetic field ∆h = h_i - h_f is plotted in Fig. 5 as a function of temperature for four different values of the interaction ratio J_2/J_1, whereas h_i ≠ 0 stands for the initial magnetic field and h_f = 0 is the final magnetic field during the isothermal demagnetization. Within the proposed notation, the conventional MCE occurs for positive values of the isothermal entropy change, -∆S_m = S_m(h_f = 0) - S_m(h_i ≠ 0) > 0, while the inverse MCE is manifested through its negative values, -∆S_m < 0. It should be pointed out, moreover, that the zero-temperature asymptotic value of the molar entropy change, -∆S_m = R ln Ω_0, can be simply related to the degeneracy Ω_0 of the zero-field ground state whenever the magnetic-field change does not coincide with any critical magnetic field, ∆h ≠ h_c,n.
In the reverse case, ∆h = h_c,n, the molar entropy change converges in the zero-temperature limit to the smaller asymptotic value -∆S_m = R(ln Ω_0 - ln 2), due to the two-fold degeneracy of two coexistent ground states at a critical magnetic field h_c,n. The temperature dependences of the molar entropy change of the spin-1 Heisenberg diamond cluster are shown in Fig. 5(a) and (b) for a few different values of the magnetic-field change and the fixed value of the interaction ratio J_2/J_1 = 0.25, which is consistent with the presence of the valence-bond-crystal ground state |2, 0, 2⟩ in the zero-field limit. It is worthwhile to remark that the singlet state of the near-distant spin pair emergent within the ground state |2, 0, 2⟩ effectively decouples all spin correlations of the two further-distant spins. Owing to this fact, the further-distant spins behave at zero magnetic field as free paramagnetic entities, and the respective degeneracy of the zero-field ground state is Ω_0 = 9. It can be seen from Fig. 5(a) and (b) that the molar entropy change indeed tends to the specific value -∆S_m = R ln 9 ≈ 18.3 J K⁻¹ mol⁻¹ for all magnetic-field changes except those equal to the critical magnetic fields ∆h/J_1 = 1.5 and 2.5. In the latter case, the molar entropy change acquires in the zero-temperature limit the smaller asymptotic value -∆S_m = R(ln 9 - ln 2) ≈ 12.5 J K⁻¹ mol⁻¹, in accordance with the previous argumentation [see the curves for ∆h/J_1 = 1.5 and 2.5 in Fig. 5(a) and (b)]. Although the isothermal entropy change generally diminishes upon increasing temperature, it is quite evident from Fig. 5(a) and (b) that the reverse may be true in a range of moderate temperatures whenever the magnetic-field change is chosen sufficiently close to one of the critical magnetic fields.
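These zero-temperature asymptotics are easy to check numerically with the exact-diagonalization sketch above. The snippet below, again a sketch under the same assumed Hamiltonian (reusing `hamiltonian` from the previous listing, with k_B = 1 and R = 8.314 J K⁻¹ mol⁻¹), evaluates -∆S_m = S_m(h = 0) - S_m(h = ∆h) for J_2/J_1 = 0.25 and recovers the limits R ln 9 and R(ln 9 - ln 2).

```python
R_GAS = 8.314  # J K^-1 mol^-1

def molar_entropy(J1, J2, h, T):
    """S_m = R [ln Z + (<E> - E_min)/T], computed from the shifted spectrum."""
    E = np.linalg.eigvalsh(hamiltonian(J1, J2, h))
    w = np.exp(-(E - E.min()) / T)
    Z = w.sum()
    return R_GAS * (np.log(Z) + ((w @ E) / Z - E.min()) / T)

def entropy_change(J1, J2, dh, T):
    # -Delta S_m for isothermal demagnetization from h_i = dh down to h_f = 0
    return molar_entropy(J1, J2, 0.0, T) - molar_entropy(J1, J2, dh, T)

T = 1e-3
for dh in (1.0, 1.5, 2.5, 4.0):
    print(dh, entropy_change(1.0, 0.25, dh, T))
# dh = 1.0, 4.0 -> R ln 9 ~ 18.27; dh = 1.5, 2.5 -> R(ln 9 - ln 2) ~ 12.51
```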
The isothermal entropy changes of the spin-1 Heisenberg diamond cluster are depicted in Fig. 5(c) and (d) for relatively small and moderate changes of the magnetic field, assuming the interaction ratio J_2/J_1 = 0.5, which supports another zero-field ground state, |1, 1, 2⟩. It should be pointed out that the conventional MCE with -∆S_m > 0 occurs for any magnetic-field change, quite similarly to the previous case. In spite of this qualitative similarity, the molar entropy change converges in the zero-temperature limit to completely different asymptotic values on account of the triply degenerate (Ω_0 = 3) ground state |1, 1, 2⟩ realized in the zero-field limit. As a matter of fact, it is obvious from Fig. 5(c) and (d) that the molar entropy change reaches either the asymptotic value -∆S_m = R ln 3 ≈ 9.1 J K⁻¹ mol⁻¹ or -∆S_m = R(ln 3 - ln 2) ≈ 3.4 J K⁻¹ mol⁻¹, depending on whether or not the magnetic-field change coincides with a critical magnetic field; the latter, smaller value of -∆S_m applies only if the magnetic-field change corresponds to one of the three critical magnetic fields ∆h/J_1 = 0.5, 2.0 or 3.0. Under these specific conditions, the isothermal entropy change starts from this lower asymptotic value, then increases with rising temperature to a local maximum, before finally tending to zero upon further increase of temperature. The most interesting temperature dependences of the isothermal entropy change are found when the magnetic-field change is selected slightly below or above the critical magnetic fields [e.g. ∆h/J_1 = 0.4 or 0.6 in Fig. 5(c)], because the molar entropy change then starts from its higher zero-temperature asymptotic limit, shows a rapid decline to a local minimum, rises continuously to a local maximum upon increasing temperature, and finally decays to zero in the high-temperature region. If the magnetic-field change is sufficiently far from the critical magnetic fields, one either finds a monotonic decline of the isothermal entropy change upon increasing temperature [see the curve for ∆h/J_1 = 0.2 in Fig. 5(c)] or recovers a nonmonotonic temperature dependence with a single round maximum emerging at some moderate temperature [see the curves for ∆h/J_1 = 1.0 and 1.5 in Fig. 5(c) or ∆h/J_1 = 4.0 in Fig. 5(d)].
Completely different magnetocaloric features of the spin-1 Heisenberg diamond cluster can be traced from the temperature variations of the isothermal entropy change shown in Fig. 5(e)-(h) for the two selected values of the interaction ratio J_2/J_1 = 0.75 and 1.25. The common feature of these two particular cases is that the zero-field ground state is the non-degenerate singlet state |0, 2, 2⟩, which is responsible for the existence of the zero magnetization plateau in the respective low-temperature magnetization curves [see Fig. 4(e) and (f)]. As a consequence, the molar entropy change asymptotically tends in the zero-temperature limit either to zero or to the specific value -∆S_m = -R ln 2 ≈ -5.8 J K⁻¹ mol⁻¹, depending on whether the magnetic-field change differs from or equals a critical magnetic field, respectively. It can be seen from Fig. 5(g) and (h) that the spin-1 Heisenberg diamond cluster with the interaction ratio J_2/J_1 = 1.25 exhibits the inverse MCE with -∆S_m < 0 for most of the magnetic-field changes in a relatively wide range of temperatures. The only exceptions to this rule are the isothermal entropy changes induced by a sufficiently large change of the magnetic field exceeding the saturation field [see the curve for ∆h/J_1 = 6.0 in Fig. 5(h)]. Contrary to this, the spin-1 Heisenberg diamond cluster with the interaction ratio J_2/J_1 = 0.75 shows an outstanding crossover between the inverse and conventional MCE. While the inverse MCE with -∆S_m < 0 prevails at lower temperatures and magnetic-field changes, the conventional MCE with -∆S_m > 0 dominates at higher temperatures and magnetic-field changes [see Fig. 5(e)-(f)].
Last but not least, let us examine the adiabatic change of temperature as another basic magnetocaloric property of the spin-1 Heisenberg diamond cluster. For this purpose, density plots of the molar entropy are displayed in Fig. 6(a)-(d) in the magnetic field versus temperature plane for the four selected values of the interaction ratio J_2/J_1 previously used to demonstrate the diversity of the magnetization profiles. It should be emphasized that the black contour lines shown in Fig. 6(a)-(d) correspond to isentropy lines, from which one can easily deduce the adiabatic changes of temperature achieved upon lowering the external magnetic field. It is quite evident from Fig. 6(a)-(d) that the most notable changes of temperature occur in the vicinity of the critical magnetic fields, whereas a sudden drop (rise) in temperature occurs during the adiabatic demagnetization slightly above (below) a critical magnetic field. Hence, it follows that an abrupt magnetization jump manifests itself during the adiabatic demagnetization as a critical fan spread over the respective critical magnetic field. Two critical fans can accordingly be observed in Fig. 6(a), three critical fans are visible in Fig. 6(b), and four critical fans appear in Fig. 6(c) and (d). It can be seen from Fig. 6(a)-(d) that most isentropes converge to some nonzero temperature as the external magnetic field gradually vanishes. More specifically, all isentropes of the spin-1 Heisenberg diamond cluster with the interaction ratio J_2/J_1 = 0.75 or 1.25 acquire nonzero temperature as the external magnetic field goes to zero [see Fig. 6(c)-(d)]. This observation can be related to the presence of the zero-field singlet ground state |0, 2, 2⟩, which is responsible for the zero magnetization plateau. On the other hand, the spin-1 Heisenberg diamond cluster with the interaction ratio J_2/J_1 = 0.25 or 0.5 may exhibit, during the adiabatic demagnetization, a sizable drop of temperature down to ultra-low temperatures due to the absence of the zero magnetization plateau [27]. To achieve this intriguing magnetocaloric feature, the molar entropy should be fixed to a value smaller than the entropy corresponding to the degeneracy of the respective zero-field ground state, i.e. S_m < R ln 9 ≈ 18.3 J K⁻¹ mol⁻¹ for the zero-field ground state |2, 0, 2⟩ emergent for J_2/J_1 = 0.25, or S_m < R ln 3 ≈ 9.1 J K⁻¹ mol⁻¹ for the zero-field ground state |1, 1, 2⟩ emergent for J_2/J_1 = 0.5, respectively. These findings could be of particular importance if the molecular compound {Ni4} were used for refrigeration at ultra-low temperatures.
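Adiabatic demagnetization curves T(h) at fixed entropy can likewise be traced numerically by inverting S_m(T, h) = S_target at each field value. The sketch below reuses `molar_entropy` from the previous listing and SciPy's standard brentq root finder; the field grid deliberately avoids the exact critical fields h/J_1 = 1.5 and 2.5 of the J_2/J_1 = 0.25 case, where the residual R ln 2 entropy at T → 0 would prevent the bracketing. All parameter choices here are illustrative assumptions.

```python
from scipy.optimize import brentq

def isentrope(J1, J2, S_target, fields, T_lo=1e-4, T_hi=20.0):
    """Temperature along a curve of constant molar entropy S_target."""
    return [brentq(lambda T: molar_entropy(J1, J2, h, T) - S_target,
                   T_lo, T_hi) for h in fields]

# Entropy fixed below R ln 9, as required for cooling without a zero plateau;
# h = 0 itself is excluded because there S_m(T -> 0) = R ln 9 > S_target.
fields = np.linspace(3.0, 0.2, 15)
for h, T in zip(fields, isentrope(1.0, 0.25, 0.5 * R_GAS, fields)):
    print(f"h = {h:4.2f}  T = {T:.4f}")
```

The printed temperatures dip sharply as the field approaches the critical values from above, reproducing the 'critical fan' behavior described in the text.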
IV. THEORETICAL MODELING OF TETRANUCLEAR NICKEL COMPLEX {NI4}
In this part, we will interpret the available experimental data for the magnetization and susceptibility of the tetranuclear nickel complex {Ni4} [59,60], which can be theoretically modeled by the spin-1 Heisenberg diamond cluster given by the Hamiltonian (1). It follows from Fig. 7 that the magnetic core of the tetranuclear coordination compound {Ni4} constitutes a 'butterfly tetrameric' unit composed of four exchange-coupled Ni²⁺ ions, which is formally identical with the magnetic structure of the spin-1 Heisenberg diamond cluster schematically illustrated in Fig. 1. High-field magnetization data for the nickel complex {Ni4}, recorded in pulsed magnetic fields up to approximately 68 T at the sufficiently low temperature of 1.3 K, are presented in Fig. 8(a) together with the respective theoretical fit based on the spin-1 Heisenberg diamond cluster. It is evident from Fig. 8(a) that the measured magnetization data bear evidence of two wide intermediate plateaus, roughly at 1.11 and 1.65 μ_B per Ni²⁺ ion, which are consistent with 1/2- and 3/4-plateaus when the total magnetization is scaled with respect to its saturation value and the appropriate value of the gyromagnetic factor g = 2.2 of Ni²⁺ ions is considered. The abrupt magnetization jumps detected at the critical magnetic fields B_c,1 ≈ 40.5 T and B_c,2 ≈ 68.5 T clearly delimit the width of these intermediate magnetization plateaus. The distinct magnetization profile with the sole presence of the intermediate 1/2- and 3/4-plateaus enables a simple estimation of the relevant coupling constants. First, it has been argued by the ground-state analysis that the intermediate 1/2- and 3/4-plateaus emerge in a zero-temperature magnetization curve as the only magnetization plateaus just if the interaction ratio falls into the range J_2/J_1 ∈ (-1/2, 1/3). Second, one may take advantage of the fact that the width of the 3/4-plateau, ∆B_{3/4} = B_c,2 - B_c,1, is independent of the interaction ratio J_2/J_1, in contrast with the width of the 1/2-plateau, ∆B_{1/2} = B_c,1. Hence, the relative width of the two magnetization plateaus observed in experiment, δ_r = ∆B_{3/4} : ∆B_{1/2} = 28 T : 40.5 T ≈ 0.69, can be straightforwardly exploited for an unambiguous determination of the relative strength of the coupling constants: J_2/J_1 = (1 - δ_r)/(2δ_r) ≈ 0.22 (16). Once determined, the absolute values of the coupling constants J_1 and J_2 can be easily calculated, for instance from the first critical field B_c,1 = 40.5 T, when taking into account knowledge of the interaction ratio (16): J_1/k_B = gμ_B B_c,1/[k_B(1 + 2J_2/J_1)] ≈ 41.4 K and J_2/k_B ≈ 9.2 K (17). In accordance with this argumentation, the spin-1 Heisenberg diamond cluster with the coupling constants J_1/k_B = 41.4 K, J_2/k_B = 9.2 K and the gyromagnetic factor g = 2.2 indeed satisfactorily reproduces the high-field magnetization data of the butterfly-tetramer compound {Ni4}, as convincingly evidenced by the respective theoretical fit shown in Fig. 8(a); a further fine-tuning of the coupling constants J_1 and J_2 does not significantly improve the theoretical fit of these experimental data. It has been found in Ref. 60 that a significant improvement of the theoretical fit can be achieved only when considering a weak ferromagnetic exchange coupling J_3/k_B = -0.66 K between the further-distant spins S_3 and S_4, which allows a steeper rise of the magnetization in the low-field range. A consideration of the exchange coupling between the further-distant spins S_3 and S_4 is, however, beyond the scope of the present article.
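As a quick numerical cross-check of this estimate, the snippet below evaluates the two relations under the assumption, consistent with the Kambe eigenvalues used earlier, that the critical fields obey gμ_B B_c,1 = J_1 + 2J_2 and gμ_B B_c,2 = 2(J_1 + J_2); that assumption reproduces both the J_2/J_1-independence of ∆B_{3/4} and the quoted best-fit values. The constant μ_B/k_B ≈ 0.6717 K/T is a physical constant; the function name is ours.

```python
MU_B_OVER_KB = 0.6717  # Bohr magneton over Boltzmann constant, in K/T

def couplings_from_plateaus(Bc1, Bc2, g):
    # Assumed critical fields: g*muB*Bc1 = J1 + 2*J2, g*muB*Bc2 = 2*(J1 + J2)
    delta_r = (Bc2 - Bc1) / Bc1            # relative plateau width, Eq. (16)
    r = (1.0 - delta_r) / (2.0 * delta_r)  # J2/J1
    J1 = g * MU_B_OVER_KB * Bc1 / (1.0 + 2.0 * r)
    return J1, r * J1                      # J1/kB, J2/kB in kelvin

print(couplings_from_plateaus(40.5, 68.5, 2.2))  # ~ (41.4 K, 9.2 K)
```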
Next, we will employ the coupling constants (17) ascribed to the coordination compound {Ni4} for a theoretical interpretation of the temperature dependence of the susceptibility-times-temperature (χT) product. To this end, the available experimental data for the χT product of the tetranuclear nickel complex {Ni4} are confronted in Fig. 9(a) with the respective theoretical prediction based on the spin-1 Heisenberg diamond cluster, assuming the model parameters (17) previously extracted from the fitting of the high-field magnetization data. Although the theoretical curve qualitatively captures all essential features of the temperature variations of the χT product, including a local minimum experimentally observed around 14 K, good quantitative accordance between the experimental and theoretical data is found just in a relatively narrow range of temperatures, T ∈ (25, 80) K, while outside of this temperature range the theoretical data generally underestimate the experimental ones. We have therefore adopted an optimization technique based on a hill-climbing procedure in order to find the best fitting set for the χT data. This procedure provided, for the tetranuclear nickel compound {Ni4} described by the spin-1 Heisenberg diamond cluster, another fitting set of model parameters, J_1/k_B = 54.3 K, J_2/k_B = 13.9 K and g = 2.31, which not only qualitatively but also quantitatively captures the experimental data over the full range of temperatures, as exemplified in Fig. 9(b). While the rise of the gyromagnetic factor by a few percent (ca. 5%) could be attributed to the substantial temperature difference between the magnetization and susceptibility measurements, the relatively large discrepancy in the assessment of the two coupling constants clearly indicates the oversimplified nature of the spin-1 Heisenberg diamond-cluster model given by the Hamiltonian (1). It is quite reasonable to conjecture from the nearly isotropic character of the magnetization curves measured along two orthogonal crystallographic axes [60] that the axial and/or rhombic zero-field-splitting parameters acting on the Ni²⁺ ions are presumably negligible; hence, the discrepancies between the magnetization and susceptibility data could be resolved by taking into consideration the biquadratic interaction and/or the pair exchange interaction between the further-distant spins S_3 and S_4. Last but not least, the best fitting set (17), extracted for the spin-1 Heisenberg diamond-cluster model from the high-field magnetization curve of the tetranuclear nickel complex {Ni4}, will be used to make a theoretical prediction of its basic magnetocaloric properties, not reported experimentally hitherto. More specifically, we will investigate in detail temperature variations of the isothermal magnetic entropy change as well as field-induced changes of temperature during the adiabatic demagnetization. It is evident from Fig. 10(a) that the isothermal mass entropy change of the nickel compound {Ni4} gradually diminishes from its maximum value -∆S_M ≈ 10.6 J K⁻¹ kg⁻¹ upon increasing temperature whenever the magnetic-field change is sufficiently small, ∆B < 15 T. On the assumption that the magnetic-field change is set to ∆B = 7 T, the molecular compound {Ni4} provides an efficient refrigerant below 2.3 K with the enhanced MCE -∆S_M > 10 J K⁻¹ kg⁻¹. It should be stressed that a subtle rise of the isothermal entropy change -∆S_M can be detected for higher magnetic-field changes [e.g. see the curve for ∆B = 20 T in Fig. 10(a)], which is, however, of very limited applicability for cooling technologies.
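The zero-field χT product itself follows from the fluctuation formula χ = (N_A g² μ_B² / k_B T) ⟨(S_T^z)²⟩. The sketch below evaluates it with the exact spectrum, reusing `hamiltonian` and `Sz_tot` from the listing above; the prefactor N_A μ_B²/k_B ≈ 0.3751 emu K mol⁻¹ is a standard constant, and everything else (Hamiltonian form, k_B = 1 units with the J's expressed in kelvin) carries over the assumptions stated earlier.

```python
def chiT(J1_K, J2_K, T_K, g):
    """Zero-field chi*T in emu K/mol from the fluctuation-dissipation formula."""
    E, V = np.linalg.eigh(hamiltonian(J1_K, J2_K, 0.0))   # energies in kelvin
    w = np.exp(-(E - E.min()) / T_K)
    stz2 = np.real(np.einsum('in,ij,jn->n', V.conj(), Sz_tot @ Sz_tot, V))
    return 0.3751 * g**2 * (w @ stz2) / w.sum()

for T in (5.0, 14.0, 50.0, 300.0):
    print(T, chiT(41.4, 9.2, T, 2.2))   # compare with Fig. 9(a)
```

At zero field the thermal average ⟨S_T^z⟩ vanishes by symmetry, which is why only the second moment enters the formula.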
On the other hand, the density plot of the magnetic mass entropy in the magnetic field versus temperature plane is displayed in Fig. 10(b) with the aim of elucidating the parameter space suitable for cooling purposes. The relevant contour lines of constant magnetic entropy bring insight into the magnetic-field-driven changes of temperature during the process of adiabatic demagnetization. A considerable drop and rise of temperature apparently occurs along the isentropes near the critical magnetic fields, which correspond to the magnetic-field-driven magnetization jumps. If the magnetic entropy is, moreover, set sufficiently close to the particular value S_M ≈ 10.6 J K⁻¹ kg⁻¹, the adiabatic demagnetization should cause a sizable temperature drop of the molecular complex {Ni4}, with up to -∆T ≈ 10 K achieved for the magnetic-field change ∆B = 7 T.
V. CONCLUSIONS
In the present article we have investigated in detail the magnetic and magnetocaloric properties of the spin-1 Heisenberg diamond cluster with two different coupling constants through an exact diagonalization based on the Kambe method, which takes advantage of a local conservation of the composite spins formed by the spin-1 entities located in opposite corners of the diamond spin cluster. It has been verified that the spin-1 Heisenberg diamond cluster exhibits several intriguing quantum ground states, which come to light in low-temperature magnetization curves as intermediate 1/4-, 1/2- or 3/4-plateaus depending on the specific choice of the interaction ratio and the magnetic field. We have demonstrated a substantial diversity of the magnetization curves, which may exhibit different magnetization profiles with either a single 3/4-plateau, a sequence of two consecutive 1/2- and 3/4-plateaus, three consecutive 1/4-, 1/2- and 3/4-plateaus, four consecutive 0-, 1/4-, 1/2- and 3/4-plateaus, or no plateau at all. In addition, the spin-1 Heisenberg diamond cluster may also exhibit an enhanced MCE, which may be relevant for low-temperature refrigeration achieved through adiabatic demagnetization, on the assumption that the relative strength of the coupling constants J_2/J_1 ∈ (-1, 2/3) is consistent with the absence of the zero magnetization plateau.
It has been evidenced that the spin-1 Heisenberg diamond cluster with the antiferromagnetic coupling constants J_1/k_B = 41.4 K, J_2/k_B = 9.2 K and the gyromagnetic factor g = 2.2 satisfactorily captures the low-temperature magnetization curves recorded for the tetranuclear nickel complex {Ni4}, including the size and position of the intermediate 1/2- and 3/4-plateaus [60]. Moreover, it turns out that the fractional magnetization plateaus observed experimentally bear evidence of two remarkable valence-bond-crystal ground states with either a single or a double valence bond between the near-distant spin-1 Ni²⁺ ions. It has also been suggested that the molecular compound {Ni4} may provide a prospective cryogenic coolant with the maximal isothermal entropy change -∆S_M = 10.6 J K⁻¹ kg⁻¹ suitable for low-temperature refrigeration below 2.3 K.
"Physics"
] |
A non-zircon Hf isotope record in Archean black shales from the Pilbara craton confirms changing crustal dynamics ca. 3 Ga ago
Plate tectonics and associated subduction are unique to the Earth. Studies of Archean rocks show significant changes in composition and structural style around 3.0 to 2.5 Ga that are related to changing tectonic regime, possibly associated with the onset of subduction. Whole rock Hf isotope systematics of black shales from the Australian Pilbara craton, selected to exclude detrital zircon components, are employed to evaluate the evolution of the Archean crust. This approach avoids limitations of Hf-in-zircon analyses, which only provide input from rocks of sufficient Zr-concentration, and therefore usually represent domains that already underwent a degree of differentiation. In this study, we demonstrate the applicability of this method through analysis of shales that range in age from 3.5 to 2.8 Ga, and serve as representatives of their crustal sources through time. Their Hf isotopic compositions show a trend from strongly positive εHfinitial values for the oldest samples, to strongly negative values for the younger samples, indicating a shift from juvenile to differentiated material. These results confirm a significant change in the character of the source region of the black shales by 3 Ga, consistent with models invoking a change in global dynamics from crustal growth towards crustal reworking around this time.
The onset of plate tectonics, associated crustal evolution, and related changes in crustal chemistry are strongly debated e.g. [1-3]. Estimates for the initiation of plate tectonics range over 3 billion years, from the Hadean to the Neoproterozoic [4]. This uncertainty reflects the fragmentary and incomplete nature of the rock record, as well as differences in the criteria and datasets used to characterize plate-tectonic activity e.g. [5,6]. Most studies have been based on comparison of rock associations from modern plate-tectonic environments with those from ancient successions, notably through geochemical and isotopic data, highlighting similarities and/or differences. Recent geochemical studies of Archean crustal domains have shown that their bulk composition records a change from a predominantly mafic tholeiitic to a more evolved calc-alkaline character between 3 Ga and 2.5 Ga e.g. [7,8]. This shift in crustal chemistry has been linked to changes in global geodynamics, most likely associated with the onset of subduction [8,9]. In this paper, we document the Hf isotopic characteristics of black shales from the Pilbara craton of Western Australia, establishing the applicability of this approach in recording changes in crustal source character. These rocks display a change from a juvenile to an evolved crustal source by ca. 3 Ga. In conjunction with field and geochemical data from other sites, we suggest this reflects the development of sufficient crustal rigidity to enable widespread subaerial exposure of the source region, and is consistent with models invoking the initiation of subduction of lithospheric plates at this time.
Approach
A key problem in assessing crustal evolution is the sparse rock record of the Archean. Of today's accessible crust, only ca. 7% is older than 2.5 Ga [6,10], and this material is often heavily deformed and chemically altered. A way to circumvent this issue is the analysis of ancient sediments derived from these early crustal assemblages. These rocks preserve a representative chemical composition of their source regions, even when the source itself is no longer preserved e.g. [11].
Hafnium isotopes are an excellent tracer of crust-mantle evolution and changes in bulk crustal chemistry, because of their time-integrated parent-daughter (¹⁷⁶Lu/¹⁷⁷Hf) ratio evolution, which reflects partial mantle-melting processes as well as intra-crustal reworking e.g. [12]. Thus, Hf isotopes from mineral archives, such as zircon, are a popular means of providing a window into Earth's early crustal evolution. Their robustness against weathering and re-melting, together with single-grain dating, host-melt records of O isotope signatures, and their ability to 'freeze' the Hf isotopic composition of their source rock, owing to their extremely low Lu and high Hf concentrations, make them ideal reservoirs for studying past crustal evolution e.g. [13-15].
The Hf isotopic composition of detrital zircons is thus an extremely powerful tool for investigating sediment provenance, source rocks and the evolution of lost crustal domains [16,17]. Zircons, however, form predominantly in evolved melts. More primitive, mafic melts that dominate juvenile crustal reservoirs often do not reach Zr saturation [18], yet constitute a substantial part, if not the majority, of the Archean crustal rock record. Furthermore, short-lived crustal domains may not be accounted for in the zircon record [19]. Hence, although zircon forms an invaluable crustal archive, its requirement for a specific host-magma composition introduces a potential bias and is best complemented by a mafic counterpart. Unfortunately, no such mineral archive with low Lu/Hf and high U/Pb exists for mafic rocks.
To circumvent this issue, we investigated a series of Archean black shales with known ages between 3.46 and 2.74 Ga, overlapping the inferred time of changes in global geodynamics. Black shales are fine-grained sedimentary rocks accumulating in low-energy environments that are representative of the greater crustal provenances from which they were derived. Black shales are composed of authigenic and detrital components. Whilst signatures of aquatically mobile elements stored in the former component have been used to track changing seawater chemistry under changing redox conditions, e.g. [20], their detrital component has been much less the subject of scientific investigation.
Geology
The Pilbara Craton of Western Australia hosts a series of Archean greenstone terranes that are composed of volcano-sedimentary sequences, including komatiites, tholeiitic basalt-rhyolite series, volcanoclastic sedimentary rocks, banded iron formations and black shale horizons [21]. These sequences are intruded by igneous rocks of the tonalite-trondhjemite-granodiorite (TTG) series and post-collisional granites. We analysed 19 samples recovered from drill cores from four different black shale horizons, covering a time span of ca. 800 Ma, between 3.5 and 2.7 Ga. Drilling was performed during the course of the Archean Biosphere Drilling Project (ABDP), and cores are stored at the Geological Survey of Western Australia in Perth. Detailed sample descriptions for each drill core are provided by [20], who analysed their elemental and Mo-Cr isotope composition as part of a study into paleo-oxygenation of the Archean atmosphere.
The oldest unit sampled is the 3.47 Ga Duffer Formation, Warrawoona Group, which consists of volcanic flows, pillow basalts, and sedimentary rocks, as well as a ~200 m thick black shale unit [22].
Analytical Methods
Hafnium isotope analyses were performed at the Research School of Earth Sciences, Australian National University, Australia. Approximately 100 mg of de-carbonated sample material was spiked with a ¹⁷⁶Lu-¹⁷⁸Hf enriched mixed isotope tracer and dissolved in a HNO₃/HF mixture in Teflon® vials. Samples were subjected to both 'soft' and 'hard' dissolution to evaluate the possible effect of detrital zircons on Hf values. Soft dissolution employs a chemical procedure that ensures that no zircon is dissolved and follows the method and rationale of [23], whereas hard dissolution will dissolve any detrital zircon. Previous Mo-Cr isotope analyses of these rocks revealed a substantial detrital component [20], which could include detrital zircon. Since the aim of our study is to obtain a representative record of bulk crustal evolution, it is important to exclude effects of detrital zircons, as these will bias results towards their specific crustal end-member compositions. To evaluate the potential contribution of detrital zircon, and then to circumvent its effects, we applied a soft-dissolution technique to the bulk sediments. This technique does not attack or dissolve zircon crystals and avoids the high pressure-high temperature dissolution step that is commonly applied to zircons. The same method has successfully been applied to partial garnet dissolutions [23] and to mafic rocks from a layered intrusion [24]. Whilst high-pressure dissolution included zircon-derived Hf in the analyses, the 'table top' dissolution technique did not. For soft dissolution, vials with spiked samples were placed at 120 °C on a hotplate for 48 hours in an HF-HNO₃ mixture. Residues were centrifuged prior to further handling to ensure removal of any detrital phase, in particular zircon, that could remain in the sample. For standard high pressure-high temperature dissolution, sample powders were placed at 200 °C in Teflon® vials inside metal-jacketed autoclaves for 48 hours. After dissolution and evaporation to dryness, all analysed samples were subsequently re-dissolved and dried down three times in concentrated nitric acid, and finally equilibrated with hydrochloric acid. Hafnium and Lu were separated from the rock matrix using LN-Spec® chromatography [25]. Isotope ratios were measured on a ThermoFisher Scientific® NeptunePlus ICP-MS, using a dry plasma with a Cetac® Aridus II desolvating system, and are reported relative to the JMC-475 ¹⁷⁶Hf/¹⁷⁷Hf standard value. Comparison of hard vs. soft dissolution for the same samples indicates that zircons have contributed to the bulk Hf budget, albeit only in small proportions (see Table S1). The ε_Hf(t) values of the hard dissolutions were calculated in the same way as those of the soft dissolutions, and resulted in slightly lower values for the high-pressure batches; this is interpreted here as the addition of non-radiogenic Hf from zircons to the Hf in the rock matrix.
Results
All black shale samples show very low Lu and Hf concentrations (<1 ppm and 0.2-2.6 ppm, respectively), which is typical for sedimentary rocks containing a substantial chemical, authigenic component [27]. The incompatible nature of the rare earth elements (REE, including Lu) and the high field strength elements (HFSE, including Hf) during partial melting, their associated concentration in crustal rocks, and their low solubility in seawater together indicate that both Lu and Hf are fully represented by the detrital component in the shales. The Hf isotopic compositions of weathered material can be significantly different from those of their continental source rocks [27]. This is the result of either preferential weathering of high-Lu/Hf phases, e.g. phosphates, or aeolian sorting of low-Lu/Hf zircons, with an associated bias of their Hf isotope compositions away from representative crust. Weathering or aeolian sorting, or a combination of both, can contribute to the bulk chemical budget.
To assess the respective effects, comparison with modern sediments is appropriate. In modern sediments, progressive maturation of a sediment (that is, an intensification of this process) is reflected in higher ¹⁷⁶Lu/¹⁷⁷Hf [28]. However, the Lu/Hf ratios in the black shales of this study are low, and continental sources with a high phosphate input are improbable components prior to 2.5 Ga [29]. The so-called 'zircon effect' in sediments, with inherited low ε_Hf at the time of deposition [13], is eliminated by the selective dissolution ('soft dissolution') applied in this study. It is thus concluded that only non-zircon detrital components contribute to the batches analysed in this study. A second effect that may compromise Lu-Hf studies in shales is isotope disturbance through weathering and post-depositional alteration. Studies of African sediments have shown that Al/K (as an index of weathering) and εHf are strongly correlated, trending towards more radiogenic Hf values with higher intensity of chemical weathering [30]. No such co-variation is observed in the black shales of this study, which have both low Al/K (2.4 to 5.8) and low ε_Hf(t) (Fig. 1), leading to the conclusion that weathering had no influence on the isotope systematics.
Discussion
The epsilon notation for Hf isotope compositions allows a direct comparison of samples with different ages. With the assumption that the earliest primitive mantle was (near-)chondritic e.g. [31], crustal Hf isotopes evolve towards negative values, whereas residual mantle with higher Lu/Hf evolves towards positive values. Indeed, all black shales in this study with ages <3 Ga show negative values, indicating evolved crustal domains in their provenance. The effect of Hf hosted in zircon is minimal in these samples, as evidenced by the comparison of soft vs. hard dissolution. It is, however, noteworthy that this shift of Hf isotopic compositions towards unradiogenic Hf, if present, would be even larger if zircon contributed more to the bulk sediment. Crustal formation model ages of these samples, calculated using an average ¹⁷⁶Lu/¹⁷⁷Hf = 0.0093 for evolved rocks, all cluster around a peak at ca. 3.25 Ga. This indicates a common provenance of the <3 Ga sediments, i.e., they have been derived from similar crustal domains, or even a single terrane, and/or from different ones that formed contemporaneously.
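For readers who wish to reproduce such numbers, the sketch below shows how ε_Hf(t) and a two-stage crustal model age are conventionally computed. The decay constant and the CHUR and depleted-mantle parameters are assumed literature values (λ¹⁷⁶Lu = 1.867 × 10⁻¹¹ yr⁻¹; CHUR ¹⁷⁶Hf/¹⁷⁷Hf = 0.282785 with ¹⁷⁶Lu/¹⁷⁷Hf = 0.0336; DM ¹⁷⁶Hf/¹⁷⁷Hf = 0.28325 with ¹⁷⁶Lu/¹⁷⁷Hf = 0.0384), not values quoted in this paper, and the sample numbers are purely illustrative.

```python
import numpy as np
from scipy.optimize import brentq

LAMBDA_LU = 1.867e-11            # 176Lu decay constant, 1/yr (assumed)
CHUR_HF, CHUR_LUHF = 0.282785, 0.0336
DM_HF, DM_LUHF = 0.28325, 0.0384
CRUST_LUHF = 0.0093              # average evolved-crust 176Lu/177Hf (as in text)

def growth(t_yr):
    return np.exp(LAMBDA_LU * t_yr) - 1.0

def epsilon_hf(hf_meas, luhf_meas, t_yr):
    """Initial epsilon-Hf at age t, from measured present-day ratios."""
    sample_t = hf_meas - luhf_meas * growth(t_yr)
    chur_t = CHUR_HF - CHUR_LUHF * growth(t_yr)
    return (sample_t / chur_t - 1.0) * 1e4

def t_dm_two_stage(hf_meas, luhf_meas, t_dep_yr):
    """Two-stage Hf model age: evolve the source with crustal Lu/Hf back
    from the deposition age until it intersects the depleted-mantle curve."""
    hf_dep = hf_meas - luhf_meas * growth(t_dep_yr)
    f = lambda T: (hf_dep - CRUST_LUHF * (np.exp(LAMBDA_LU * T)
                                          - np.exp(LAMBDA_LU * t_dep_yr))) \
                  - (DM_HF - DM_LUHF * growth(T))
    return brentq(f, t_dep_yr, 4.5e9)

# Hypothetical shale measurement, for illustration only:
print(epsilon_hf(0.28155, 0.015, 2.8e9))            # strongly negative
print(t_dm_two_stage(0.28155, 0.015, 2.8e9) / 1e9)  # model age in Ga
```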
Samples >3 Ga do not coincide with their younger counterparts, and are systematically more radiogenic. In an ε_Hf(t) versus time diagram (Fig. 2), they plot above the assumed 'depleted mantle' array, which reflects the linear evolution of the present-day depleted mantle through time. The approach of describing the evolution of the whole of the depleted mantle by one single straight line is debatable, because it assumes an Earth-wide, uniform reservoir with a constant process of depletion over the past 4.4 Ga e.g. [32], but to allow for comparison with other data, this approach is adopted here. The super-depleted, positive values indicate that the 3.5 Ga old sedimentary rocks have a provenance that is controlled by a domain of substantially more depleted, juvenile material.
Very few mid-Archean Hf isotope compositions have been reported; komatiites from Barberton, Abitibi and the Pilbara, as well as rare boninitic rocks from Isua, are the only rocks of comparable age that show a similar spread and extremely positive Hf isotope compositions [12,31]. Literature data on different komatiites show εHf_initial very similar to that of the Duffer Formation. It is therefore most plausible that komatiites contributed in substantial proportions to the detrital component in these black shales, an inference supported by higher Cr concentrations [20]. This is important, as it is the first independent evidence for the validity of extreme Hf isotope compositions in the early Earth other than the komatiites themselves. Figure 2 shows that komatiites plot very close to, or above, the depleted-mantle curve, whereas zircons plot below it. This is because zircons need more Zr-rich, evolved magmas to crystallize.
It is noted that, despite being derived from one or more sources that formed at, or slightly after, the time of the Duffer Formation, the shales studied here with depositional ages <3 Ga have been sourced from evolved, reworked crust. This crust had little, if any, juvenile input. Since shales sample an area potentially larger than single river systems, this may have been so, on average, for an entire Archean terrane.
Implications and Conclusions
The oldest black shales sampled (Duffer Formation) were derived from mafic, most likely komatiitic rocks, as indicated by varying Lu/Hf and super-chondritic initial ε_Hf(t) (Fig. 2). In this era of Earth's history, formation of new crust was a dominant process; at least ~65% of the present-day volume was present, as calculated by [7], and was dominated by komatiitic assemblages. This crust seemingly lacked exposure of evolved rocks that, upon weathering, would have fed the sedimentary sequences of the black shales. In contrast, the <3.0 Ga black shale samples display a change in the character of the crustal source terrane as preserved in Hf isotope signatures, reflecting the change in time-integrated Lu/Hf in crustal domains. Pilbara black shales <3 Ga indeed record host material with low ε_Hf(t) and crustal, time-integrated Lu/Hf. Using their Hf model ages as an indicator for the first formation of their parental domain yields a time of approximately 3.25 Ga (Fig. 3). Even though this time predates the proposed global changes in average crustal chemistry, indicated at around 3 Ga, it possibly marks a time of enhanced cratonic formation that initiated this change. Integrated structural and geochemical studies of the Pilbara craton have noted a change from within-plate-like magmas and dome-and-basin features related to vertical tectonics prior to 3.2 Ga to magmatic rocks with subduction-related signatures and thrust-dominated structures at <3.1 Ga [33,34]. Other independent data sets have also noted a transition in the character of the Pilbara crust at around this time, including the change in felsic magmatic activity from TTG to K-granite and peraluminous granite [35]. On a global scale, changes in the average Rb/Sr of new crust [36], and in Ni/Co and Cr/Zn of terrigenous sedimentary and igneous rocks [37], indicate a change from more mafic to more felsic crustal compositions and increases in the thickness and volume of continental crust in the period between 3.2 and 2.5 Ga. These changes are accompanied by the first records of subaerial large igneous provinces [38], deviations of ⁸⁷Sr/⁸⁶Sr isotope ratios in seawater away from background mantle values after 3 Ga [39], and evidence for a global increase in the sedimentary contribution to the magmatic record through higher δ¹⁸O since the late Archean [40]. These observations suggest that significant volumes of continental crust had emerged [41-45] and were available for surficial weathering.
Our data on the Hf isotopic composition of black shales from the Pilbara craton indicate a change from the formation of juvenile crust to more evolved crustal sources and associated intra-crustal reworking from that time onwards. The timing of this change is proposed to have initiated at around 3.2 Ga, based on the peak in Hf model ages of the <3 Ga old shale data.
Our approach using 'zircon-free dissolutions' of black shales provides evidence from a crustal source without any influence of biased material, such as zircons with a low Lu/Hf. We are therefore able to provide information on pristine crustal material within the limitations of the sampling area. In combination with other global data sets, we consider this change from formation of juvenile crust to crustal reworking to reflect the subaerial emergence of significant volumes of continental crust in the Pilbara region. This change in the character of the Pilbara crust is consistent with a change in global geodynamics from a stagnant-lid to a plate-tectonic regime e.g. [6,31,33,46] and took place in response to increasing lithospheric rigidity through mantle cooling.
"Geology"
] |
Use of 16S rDNA Sequencing to Determine Procaryotic Diversity of a Remote Aviation Fuel-Polluted Lentic Ecosystem in Ibeno, Nigeria
Ibeno, the operational base of Mobil Producing Nigeria Unlimited, a subsidiary of ExxonMobil, Nigeria remains one of the most impacted communities by oil and gas activities in the Niger Delta region of Nigeria. Lotic and lentic systems in the region which residents rely on, receive petroleum hydrocarbon inputs almost daily due to oil spills and oily wastes discharges from operators and bunkering activities. This research was carried out to determine the prokaryotic diversity in a remote aviation fuel-polluted lentic ecosystem after 16 years of pollution using metagenomic approaches. DNA extraction from the water samples was carried out using MoBio DNA extraction Kits following the manufacturer’s instructions. Extracted DNA fragments were quantified using picogreen and by recording their UV absorption spectra using NanoDrop spectrophotometer. 16S rDNA sequencing was carried out on a Miseq Illumina sequencing platform and Quantitative Insight Into Microbial Ecology (QIIME) bioinformatics pipeline. Analyses revealed the dominance of bacterial and archaeal communities in both polluted and unpolluted water samples. The polluted sample had 93.83% bacterial and 3.43% archaeal population while the control sample revealed 58.05% bacterial and 39.69% archaeal population. Dominant bacterial phyla from the polluted samples were Proteobacteria, Firmicutes, Actinobacteria, Cyanobacteria, and Chloroflexi while dominant phyla in the unpolluted samples were Proteobacteria, Firmicutes and Actinobacteria. Dominant archaeal phyla from both polluted and unpolluted waters were Euryarchaeota and Crenarchaeota. The use of 16S rDNA metagenomic approach revealed a wide variety of bacterial and archaeal diversity from both polluted and control sites, thus revealing the true ecological status of both sites.
Introduction
Since the inception of oil and gas exploration and production (O&G E&P) activities in Nigeria, and in spite of the increasing revenue from these resources, the communities from which they flow continue to experience deprivation and environmental degradation due to daily inputs of petroleum hydrocarbon spills and oily waste discharges [1]. Ibeno is one of the thirty-one (31) Local Government Areas (LGAs) in Akwa Ibom State, Nigeria. It is the location of massive oil deposits, which have been extracted for decades by Mobil Producing Nigeria Unlimited (MPNU), a subsidiary of ExxonMobil Corporation, and some marginal oilfield operators such as Frontier Oil Ltd and Network Exploration and Production Nigeria Ltd [2,3]. Over the years, the rivers, streams, marine waters and forests, which happen to be the major income sources for the majority of the rural dwellers in the region, have become highly contaminated due to the O&G E&P activities [4].
The presence of petroleum hydrocarbons is considered one of the major factors that influence microbial diversity and succession in polluted water bodies [5]. Diverse groups of microorganisms are naturally capable of hydrocarbon degradation, mostly using hydrocarbons as food, owing to the ubiquitous distribution of hydrocarbons in the environment from both natural and anthropogenic inputs [6,7]. Numerous genera of bacteria, e.g. Staphylococcus, Pseudomonas, Bacillus, Proteus, Micrococcus, Klebsiella and Enterobacter [8]; fungi, e.g. Fusarium [9]; and yeasts such as Rhodotorula have been isolated from contaminated aquatic ecosystems, and their metabolic activities are strongly considered responsible for the removal of hydrocarbons from the environment [6].
The gene that encodes the small-subunit ribosomal RNA is ubiquitous in prokaryotes [10] and serves in the classification of bacteria and archaea owing to its high degree of conservation and its fundamental function in living organisms [11]. It is important to note that several RNA species are required for proper ribosome function; ribosomal RNA is not translated into protein but is itself the active component. Thus we refer to the 'rRNA gene' or 'rDNA' to designate the DNA in the genome that produces the ribosomal RNA. This study was designed to assess the prokaryotic diversity of a remote aviation fuel-contaminated lentic ecosystem, 16 years after the pollution event, alongside a control lentic ecosystem with no history of aviation fuel pollution, using 16S rRNA gene amplification and sequencing.
Site description and sample collection
Integrated sampling was carried out at an aviation fuel-polluted lentic ecosystem located at latitude 04° 32.647' N and longitude 007° 59.951' E, and at a control site located at latitude 04° 58.519' N and longitude 007° 57.908' E. Water samples were collected at different points in one-litre pre-washed plastic containers and taken to the laboratory in an ice-packed cooler. Samples from each individual site were composited and used for the analyses.
Total community DNA was extracted from the water samples using the MoBio DNA Extraction Kit. The eluted DNA was centrifuged at room temperature at 10,000 × g for 30 seconds. The supernatant was discarded, leaving the DNA. The DNA was preserved for further analyses at -20°C to -80°C. The concentration of the DNA was determined by ultraviolet absorbance spectrophotometry. DNA amplification involves the duplication of the DNA molecules, with each strand serving as a template for the duplication. Each strand of the DNA molecule serving as a template was amplified by Polymerase Chain Reaction with the aid of 16S rRNA primers in a 50 μl reaction mixture with the following programme: denaturation at 94°C for 3 min, and 30 cycles of 94°C for 20 sec, annealing at 53°C for 30 sec, and extension at 68°C for 5 min, with a final extension at 68°C for 10 min. The PCR products were verified by agarose gel electrophoresis. PCR products were sequenced using the Miseq Illumina platform. The 16S rRNA sequences were analyzed using the Quantitative Insight into Microbial Ecology (QIIME) version 1.8.0 pipeline (Figure 1).
Results and Discussion
Numerous sequences of bacteria and archaea were detected in samples from both water bodies using 16S rDNA sequencing. The method revealed comparable results (Figure 2), indicating a higher bacterial population in the polluted water than in the control water.
Bacteria showed a high occurrence at the two sites, with percentage compositions of 93.83% in the polluted sample and 58.05% in the control sample.
Phylum-level affiliations of the sequences recovered from the two samples revealed distinct differences in community composition. Sequences from 26 and 20 phyla were retrieved from the polluted and control sites, respectively. 'Other' represents the sum total of all phyla with a percentage read count of less than one percent. As presented in Table 1, the top/dominant phyla in the bacterial community detected in the polluted water were Unknown, Proteobacteria, Firmicutes, Actinobacteria, Cyanobacteria and Chloroflexi, representing 37.52%, 33.86%, 7.31%, 6.19%, 3.65% and 2.84% of classified sequences, respectively. The top/dominant phyla in the control water sample were Unknown, Proteobacteria, Firmicutes and Actinobacteria, representing 21.50%, 39.69%, 27.87% and 2.61% of all classified sequences from the sample, respectively. The dominant phyla in the archaeal community retrieved from both waters were Euryarchaeota and Crenarchaeota, representing 1.33% and 0.19% in the polluted sample and 0.24% and 0.23% in the control sample.
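Percentage compositions of this kind are straightforward to derive from a taxon read-count table. The sketch below assumes a hypothetical tab-separated file `phylum_counts.tsv` (phyla as rows, samples as columns), such as one exported from a QIIME taxa-summary step; the file name and the 1% threshold handling are illustrative, not part of the original workflow.

```python
import pandas as pd

# Hypothetical phylum-level read-count table: rows = phyla, columns = samples.
counts = pd.read_csv("phylum_counts.tsv", sep="\t", index_col=0)

percent = 100 * counts / counts.sum()

# Collapse phyla that never reach 1% of reads into an 'Other' row,
# mirroring the convention used in Table 1.
minor = percent[(percent < 1).all(axis=1)]
major = percent.drop(minor.index)
major.loc["Other"] = minor.sum()

print(major.sort_values(by=major.columns[0], ascending=False).round(2))
```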
Sequences belonging to 40 and 34 classes of prokaryotes were retrieved from the polluted and control water samples, respectively. The top 12 classes are presented in Table 2. Over thirty-eight percent (38.08%) of sequences retrieved from the polluted water sample and 21.56% of sequences from the control water sample were affiliated with the class 'Unknown'. Also, 39.55% of sequences from the control site were affiliated with the class 'Not assigned', the highest proportion among the classes. While sequences similar to Betaproteobacteria (20.13%), Alphaproteobacteria (8.10%) and Actinobacteria (6.17%) dominated in the contaminated water, Betaproteobacteria (20.75%), Gammaproteobacteria (4.29%) and Actinobacteria (2.57%) showed the highest occurrence in the control sample.
Presented in Table 3 are the sequences retrieved from the polluted water sample that matched bacterial and archaeal diversity belonging to the orders Unknown, Burkholderiales, Actinomycetales and Rhizobiales, while those in the control sample matched 'Not assigned', Unknown and Burkholderiales, among others. (In Tables 2 and 3, 'Unknown' denotes classes or orders with a percentage read count of >1% that were not identified in the database, while 'Other' is the sum total of all classes or orders with percentage read counts of <1%.) At the family level, the dominant affiliations included a family comprising 16.55% of reads in the contaminated water, and Coniocybaceae (39.51%) and Burkholderiaceae (9.36%) in the control water.
This study revealed a higher bacterial diversity in the aviation fuel-polluted water than in the unpolluted water. This may be attributed to the presence of petroleum hydrocarbons and their effects on the diversity and population of prokaryotes, especially the bacterial group, in the freshwater system, an observation earlier reported by Atlas and Bartha [12]. Bacterial and archaeal species with gene sequences affiliated to those present at the two study sites, together with their accession numbers, are presented in Table 5. The two sites share some species, as observed for other taxa, and these are indicated with a '+' sign.
Conclusion
The 16S rDNA analysis of the prokaryotic diversity of the remote aviation fuel-polluted and unpolluted lentic ecosystems revealed an enormous composition of bacteria and archaea in both water bodies. The polluted water had the greater overall prokaryotic composition, and the bacterial community showed the higher diversity in both waters. The archaeal population of the polluted water was remarkably low compared with the high composition observed in the unpolluted control sample.
According to Ntushelo [13], approaches to identifying and studying bacterial diversity have often relied on the traditional methods of plating bacteria on agar. These approaches are still relevant for culturable bacteria but fall short of detecting fastidious and unculturable bacteria. Molecular-based techniques like targeted sequencing of the 16S rRNA gene from gross DNA samples have facilitated surveys of bacterial diversity. The sequencing and cloning of individual sequences is, however, tedious and cannot provide a comprehensive survey of a bacterial community. The 16S rRNA gene can be amplified from pure bacterial colonies or directly from a crude sample. Amplified from a crude sample, the 16S rRNA gene can be massively sequenced using high-throughput sequencing instruments. Direct amplification of the 16S rRNA gene and its massive sequencing has corrected the under-representation of bacteria in many bacterial communities. Analysis of bacterial communities is now made easier by the ample data generated from various bacterial community survey projects, like this study of hydrocarbon-polluted and unpolluted lentic ecosystems [14,15].
"Environmental Science",
"Biology"
] |
A Growth Behavior of Szegö Type Operators
We define new integral operators on the Hardy space similar to the Szegö projection. We show that these operators map from H^p to H^2 for some 1 ≤ p ≤ 2, where the range of p depends on a growth condition. To prove this, we generalize the Hausdorff-Young Theorem to the multi-dimensional case.
Introduction
Let C^n denote the Euclidean space of complex dimension n. The inner product on C^n is given by ⟨z, w⟩ := z_1 w̄_1 + · · · + z_n w̄_n, where z = (z_1, . . . , z_n) and w = (w_1, . . . , w_n), and the associated norm is |z| := √⟨z, z⟩. The unit ball in C^n is the set B_n := {z ∈ C^n : |z| < 1}, and its boundary is the unit sphere S_n := {z ∈ C^n : |z| = 1}.
In the case n = 1, we write D in place of B_1. Let σ_n be the normalized surface measure on S_n.
For 0 < p < ∞, the Hardy space H^p(B_n) is the space of all holomorphic functions f on B_n for which the "norm" ‖f‖_{H^p} := sup_{0<r<1} ( ∫_{S_n} |f(rζ)|^p dσ_n(ζ) )^{1/p} is finite. For f ∈ H^p(B_n), it is known that f has a radial limit f* almost everywhere on S_n. Here, the radial limit f* of f is defined by f*(ζ) := lim_{r→1⁻} f(rζ), provided that the limit exists for ζ ∈ S_n. Moreover, the mapping f ↦ f* is an isometry of H^p(B_n) onto a closed subspace of L^p(S_n, dσ_n). Since H^2(B_n) can be identified with a closed subspace of L^2(S_n, dσ_n), there exists an orthogonal projection from L^2(S_n, dσ_n) onto H^2(B_n). By using a reproducing kernel function, which is called the Szegö kernel, we can also recover a function f from its radial limit function f*. More precisely, f(z) = ∫_{S_n} f*(ζ)/(1 − ⟨z, ζ⟩)^n dσ_n(ζ) for z ∈ B_n. We usually call this integral operator the Szegö projection. It is well known that for 1 < p < ∞ the Szegö projection maps L^p(S_n, dσ_n) boundedly onto H^p(B_n). For more details, we refer to the classical textbooks [1,2].
In this paper we consider a class of integral operators $T_{m,N}$, defined by (1.1), for $m = 1, 2, \ldots, n$ and a positive integer $N$. Compared with the Szegö projection, the growth of the denominator factor of the kernel is milder. Thus these operators are bounded on $H^2(B_n)$. Interestingly, these operators map from $H^1(B_n)$ to $H^2(B_n)$ for any positive integer $N$ when $1 \le m < n/2$. More precisely, we have the following result.
For $n/2 \le m < n$, the operator $T_{m,N}$ maps from $H^p(B_n)$ to $H^2(B_n)$, but the range of $p$ depends on $m$, which determines the growth condition of the kernel function. Explicitly, we have the following theorem.
For $z \in \mathbb{C}^n$ and a multi-index $\alpha = (\alpha_1, \ldots, \alpha_n)$, the monomial is defined as $z^\alpha := z_1^{\alpha_1} \cdots z_n^{\alpha_n}$. At first, we show that the Szegö type operators $T_{m,N}$ defined in (1.1) are actually coefficient multipliers.
Since the monomials are orthogonal in $L^2(S_n, d\sigma_n)$, expanding the kernel inside the integral shows that $T_{m,N}$ acts on each monomial coefficient separately. To prove the main theorems, we need the Hausdorff-Young Theorem for the multi-dimensional Hardy space. For a holomorphic function $f$ in the unit disk, we have the Taylor series expansion $f(z) = \sum_{k=0}^{\infty} a_k z^k$. For the Hardy space defined on the unit disk, the relationship between the functions in $H^p(D)$ and the growth of their coefficients is given by the Hausdorff-Young Theorem; see [3, p. 76, Theorem A].

Theorem 2.2 (Hausdorff-Young Theorem for $H^p(D)$). For $1 \le p \le \infty$, let $q$ be the conjugate exponent, with $\frac{1}{p} + \frac{1}{q} = 1$. If $1 \le p \le 2$ and $f \in H^p(D)$, then $\left(\sum_{k\ge0} |a_k|^q\right)^{1/q} \le \|f\|_{H^p}$; if $2 \le p \le \infty$ and $\sum_{k\ge0} |a_k|^q < \infty$, then $f \in H^p(D)$ with $\|f\|_{H^p} \le \left(\sum_{k\ge0} |a_k|^q\right)^{1/q}$.
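For intuition, here is a small numerical sanity check of the classical inequality (a sketch in Python; the test polynomial and the exponent pair are arbitrary choices of ours):

import numpy as np

# Test polynomial f(z) = 1 - z/2 + z^2/4 + z^3/3 (coefficients chosen arbitrarily).
a = np.array([1.0, -0.5, 0.25, 1.0 / 3.0])
p, q = 4.0 / 3.0, 4.0                      # conjugate pair: 1/p + 1/q = 1

theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
boundary = np.polyval(a[::-1], np.exp(1j * theta))       # f on the unit circle

hp_norm = np.mean(np.abs(boundary) ** p) ** (1.0 / p)    # ||f||_{H^p} for a polynomial
lq_norm = np.sum(np.abs(a) ** q) ** (1.0 / q)            # l^q norm of the coefficients

print(f"l^q norm of coefficients: {lq_norm:.4f}")
print(f"H^p norm of f:            {hp_norm:.4f}")
assert lq_norm <= hp_norm + 1e-9                         # Hausdorff-Young inequality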
Before proceeding, we introduce some notation. Let $\mathbb{N}_0^n$ be the product set of nonnegative integers. Define a weight function $w_n$ on $\mathbb{N}_0^n$ by $w_n(\alpha) := \int_{S_n} |\zeta^\alpha|^2\, d\sigma_n(\zeta)$. Using the weight $w_n$, we define a norm $\|\cdot\|_{p,t}$ for functions on $\mathbb{N}_0^n$, and we let $\ell^{p,t}$ be the collection of all functions $c$ defined on $\mathbb{N}_0^n$ with $\|c\|_{p,t} < \infty$.
For a holomorphic function $f$ on $B_n$ whose Taylor series is given by $f(z) = \sum_{\alpha} a_\alpha z^\alpha$, we have the following multi-dimensional analogue.

Proposition 2.3 (Hausdorff-Young Theorem for $H^p(B_n)$). For $1 \le p < \infty$, let $q$ be the conjugate exponent, with $\frac{1}{p} + \frac{1}{q} = 1$.

Proof. For a multi-index $\alpha$, we note that
$$\int_{S_n} |\zeta^\alpha|^2\, d\sigma_n(\zeta) = w_n(\alpha). \tag{2.2}$$
From the orthogonality of the monomials on $S_n$ we compute the relevant integrals, and thus we obtain the inequality, which proves part (1) of the Proposition. Consequently, $\|f - f_k\|_{H^2}$ goes to zero as $k$ increases. Hence $f_k$ converges to $f$ pointwise, and by applying Fatou's lemma we finish the proof.
Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper. | 1,205.4 | 2020-09-08T00:00:00.000 | [
"Mathematics"
] |
Achromatic arboricity on complete graphs
In this paper we study the {\it {achromatic arboricity}} of the complete graph. This parameter arises from the arboricity of a graph as the achromatic index arises from the chromatic index. The achromatic arboricity of a graph $G$, denoted by $A_{\alpha}(G)$, is the maximum number of colors that can be used to color the edges of $G$ such that every color class induces a forest but the union of any two color classes contains a cycle. In particular, if $G$ is a complete graph we prove that \[\frac{1}{4}n^{\frac{3}{2}}-\Theta(n) \leq A_{\alpha}(G)\leq \frac{1}{\sqrt{2}}n^{\frac{3}{2}}-\Theta(n).\]
Introduction
Let G be a finite simple graph. A k-coloring of G is a surjective function ς that assigns a number from the set {1, 2, . . ., k} to each vertex of G such that any two adjacent vertices have different colors. A k-coloring ς is called complete if for each pair of different colors i, j ∈ {1, 2, . . ., k} there exists an edge xy ∈ E(G) such that ς(x) = i and ς(y) = j.
While the chromatic number χ(G) of G is defined as the smallest number k for which there exists a k-coloring of G, the achromatic number α(G) of G is defined as the largest number k for which there exists a complete k-coloring of G (see [10]). Note that any χ(G)-coloring of G is also complete. Therefore, for any graph G, χ(G) ≤ α(G).
In [8] the authors introduce the parameter called the a-vertex arboricity of a graph G, denoted by ava(G), defined as the largest number of colors that can be used to color the vertices of G such that every color class induces a forest but merging any two color classes yields a cycle. This parameter arises from the vertex arboricity, denoted by va(G), which is defined as the minimum number of induced forests which cover all the vertices (see [7]); clearly, since a minimum decomposition into forests is complete, we have that va(G) ≤ ava(G).
Inspired by these parameters and by our previous work related to complete edge colorings, more specifically the achromatic (proper colorings), pseudoachromatic (non-proper colorings) and pseudoconnected (connected and non-proper colorings) indices of complete graphs [1,2,3,4], we define the achromatic arboricity of a graph G, denoted by A_α(G), as the largest number of colors that can be used to color the edges of G such that each color class is acyclic and any pair of color classes induces a subgraph with at least one cycle.
Clearly, this parameter arises from the well-known arboricity of a graph G, defined by Nash-Williams in 1961 [12,13] and denoted by A(G), which is the minimum number of acyclic subgraphs into which E(G) can be partitioned. Note that in a minimum partition the union of any two parts induces a subgraph with at least one cycle. In consequence, we have that A(G) ≤ A_α(G). In this paper we give a lower and an upper bound for the achromatic arboricity with a small gap between them; more precisely, we prove that $\frac{1}{4}n^{3/2}-\Theta(n) \leq A_{\alpha}(K_n) \leq \frac{1}{\sqrt{2}}n^{3/2}-\Theta(n)$. This paper is organized as follows. In Section 2 we give a general upper bound. In Section 3 we give a lower bound using the properties and structure of finite projective planes. In Section 4 we prove our main theorem as a consequence of the previous results. Finally, in Section 5 we give the exact values of the achromatic arboricity of K_n for 2 ≤ n ≤ 7.
2 The upper bound for the achromatic arboricity of K_n
In this section we prove an upper bound for A_α(K_n). The technique has been used previously by different authors in the papers cited in the introduction.
Proof. Let ς : E(K_n) → [k] be an acyclic k-edge-coloring of K_n such that the union of any two color classes induces at least one cycle, and let x be the cardinality of the smallest color class of ς. Without loss of generality, suppose x = |ς^{-1}(k)|. Since ς defines a partition of the edges of K_n, it follows that k ≤ f_n(x) := n(n − 1)/(2x).
Then, since ς is acyclic, we can suppose that ς^{-1}(k) induces a matching. There are 2x(n − 2x) edges incident to a vertex of ς^{-1}(k) exactly once; we denote this set of edges by X. Since every two color classes of ς have at least two incidences, each color class incident to ς^{-1}(k) contains at least two edges that have a vertex in common with some edge in ς^{-1}(k). Hence, the number of color classes incident to ς^{-1}(k) containing some edge in X is at most x(n − 2x), and the number of color classes incident to ς^{-1}(k) containing no edge in X is at most x(x − 1), since two edges are required to obtain a cycle. Hence, there are at most g_n(x) − 1 color classes incident with some edge in ς^{-1}(k), where g_n(x) := x(n − 2x) + x(x − 1) + 1 = nx − x² − x + 1. In consequence, we have k ≤ g_n(x), and we conclude that A_α(K_n) ≤ max_x min(f_n(x), g_n(x)). The function f_n is a hyperbola and the function g_n is a parabola, see Figure 1. Then we have the following lemma.
Figure 1. The functions g_n and f_n for a fixed value of n.
Theorem 1. Let n ≥ 5 be an integer. Then the achromatic arboricity of the complete graph of order n is bounded above by $A_\alpha(K_n) \le \frac{1}{\sqrt{2}}n^{3/2} - \Theta(n)$. Proof. By Lemma 2, g_n(x_0) = nx_0 − x_0² − x_0 + 1 with x_0 = √(n/2) + ε, for a small ε > 0. We obtain g_n(x_0) = (1/√2)n^{3/2} − Θ(n), and then A_α(K_n) ≤ g_n(x_0), and the result follows.
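The bounding argument above is easy to evaluate numerically. The following sketch (ours, using f_n and g_n exactly as defined in the proof) reproduces the upper bounds A_α(K_5) ≤ 5, A_α(K_6) ≤ 7 and A_α(K_7) ≤ 9 quoted in Section 5:

def f(n, x):
    # k <= n(n-1)/(2x): the k color classes partition the n(n-1)/2 edges,
    # and the smallest class has x edges.
    return n * (n - 1) / (2 * x)

def g(n, x):
    # k <= g_n(x) = nx - x^2 - x + 1: classes incident to the smallest class.
    return n * x - x * x - x + 1

def upper_bound(n):
    # The true x is unknown, so we take the worst case over all possible x.
    return int(max(min(f(n, x), g(n, x)) for x in range(1, n * (n - 1) // 2 + 1)))

for n in (5, 6, 7, 13):
    print(n, upper_bound(n))   # prints 5, 7, 9 for n = 5, 6, 7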
3 A lower bound for the achromatic arboricity of K_n for some values of n
In this section we provide a lower bound for the achromatic arboricity of K_n for some values of n greater than 13. We use the well-known technique of identifying the structure of a finite projective plane with the complete graph in order to use its properties.
First of all, we recall some definitions and properties of projective planes that we will use along the proof of the lower bound.
A projective plane is a set of n points and a set of n lines, with the following properties.
1. For any two distinct points there is exactly one line incident to both.
2. For any two distinct lines there is exactly one point incident to both.
3. There exist four points such that no line is incident to three of them or more.
A projective plane has n = q² + q + 1 points, for a suitable number q, and n lines. Each line has q + 1 points and each point belongs to q + 1 lines; we say that q is the order of the plane, and a projective plane of order q is denoted by Π_q.
Let IP be the set of points of Π_q and let L = {l_1, . . ., l_n} be the set of lines of Π_q. We identify the points of Π_q with the set of vertices of the complete graph K_n. Then, the set of points of each line of Π_q induces a subgraph K_{q+1} in K_n. Given a line l_i ∈ L, let l_i = (V(l_i), E(l_i)) be the subgraph of K_n induced by the set of q + 1 points of l_i. By the properties of the projective plane, for each pair i, j ∈ [n], |V(l_i) ∩ V(l_j)| = 1 and {E(l_1), . . ., E(l_n)} is a partition of the edges of K_n. Therefore, saying that a graph G isomorphic to K_n is a representation of the projective plane Π_q means that V(G) is identified with the points of Π_q and that there is a set of subgraphs (lines) {l_1, . . ., l_n} of G such that, for each line l_i of Π_q, l_i is the subgraph induced by the set of points of l_i.
Let us recall that any complete graph of even order q + 1 admits a factorization into (q+1)/2 Hamiltonian paths, see [6]; a constructive sketch is given below. This factorization of the complete graph K_{q+1} can be used as an edge-coloring for the lines of K_n.
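The following Python sketch (ours) implements the classical zigzag construction of such a factorization; the assertions verify edge-disjointness and full coverage for small even orders:

def hamiltonian_path_factorization(m):
    # Zigzag construction: K_m (m even) decomposes into m/2 Hamiltonian paths,
    # obtained by rotating the base path 0, 1, m-1, 2, m-2, ... modulo m.
    assert m % 2 == 0
    zigzag = [0]
    for k in range(1, m // 2 + 1):
        zigzag.append(k)
        if k < m // 2:
            zigzag.append(m - k)
    return [[(v + i) % m for v in zigzag] for i in range(m // 2)]

def check(m):
    edges = set()
    for path in hamiltonian_path_factorization(m):
        for u, v in zip(path, path[1:]):
            e = frozenset((u, v))
            assert e not in edges          # paths are pairwise edge-disjoint
            edges.add(e)
    assert len(edges) == m * (m - 1) // 2  # every edge of K_m is used
    return True

print(all(check(q + 1) for q in (3, 5, 7, 11, 13)))   # q odd prime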
In particular, if q is a prime power there exists a Π_q that arises from the finite field Z_q for q prime, and from GF(q), the Galois field of order q, when q is a prime power. It is called the algebraic projective plane, and it is denoted by PG(2, q) (see [11]). Since the proof of Theorem 3 only requires projective planes of prime order, we will use the algebraic projective plane PG(2, q) for q a prime number. Now we give a useful description of PG(2, q): Let P and L be an incident point and line, which we will call the infinity point and line, respectively. Let {P_0, P_1, . . ., P_{q−1}} be the set of points each different from P and incident to L, and let {L_0, L_1, . . ., L_{q−1}} be the set of lines each different from L and incident to P. Moreover, let {(i, 0), (i, 1), . . ., (i, q − 1)} be the set of points, different from P, incident to L_i; and let {[i, 0], [i, 1], . . ., [i, q − 1]} be the set of lines, different from L, incident to P_i.
The remaining lines are denoted as follows. The line [a, b] is incident to all the points (x, y) that satisfy y = ax + b using the arithmetic of Z_q. Theorem 2. If q is an odd prime number and n = q² + q + 1, then $A_\alpha(K_n) \ge \frac{q+1}{4}(n+1)$. Proof. Let q be an odd prime number and let G (isomorphic to K_n) be a representation of the algebraic projective plane PG(2, q).
First, we give a partition of the lines of the projective plane, taking a single line and (q²+q)/2 pairs of lines, each pair having an intersection point that is different for each pair; that is, triplets (p, m, l) such that p = m ∩ l. Second, we give an acyclic edge-coloring of G that uses this partition of the lines and attains the given lower bound, concluding the proof as follows.
The sets of triplets are defined in (5) and (6). Hence, we have (q²+q)/2 triplets that cover a set of 2 + 2(q − 1) + (q − 1)² + (q − 1) = q² + q lines. Then, all the lines of PG(2, q) are covered except for the line L_0.
The left side of Figure 2 shows a decomposition of the lines of PG(2, 3), while the right side shows K_13.
To begin with, we color the complete subgraph K_{q+1} associated with the line L by Hamiltonian paths, therefore using (q+1)/2 colors. The remaining lines are colored by pairs, according to the triplets (p, l, m).
For each triplet (p, l, m), we color the complete subgraph K_{q+1} associated with the line l by Hamiltonian paths, and we copy the coloring to the complete subgraph associated with the line m.
Therefore, we use $\frac{q+1}{2}\left(\frac{n-1}{2}+1\right) = \frac{q+1}{4}(n+1)$ colors. The coloring is acyclic because the color classes of the line L are paths, and each color class of a triplet is the identification of two paths by a vertex. Now, if two color classes are in the edges of a single line, clearly they contain a cycle, since they are the union of two Hamiltonian paths. If a color class is in the edges of L and another color class is in the edges of the lines of a triplet (p, l, m), the triangle formed by l, m and L induces a cycle.
Finally, if two color classes are in the edges of the lines of the triplets (p, l, m) and (p′, l′, m′), since p ≠ p′, the triangle formed by l, m and l′ induces a cycle, and then the union of any two color classes induces at least one cycle.
The right side of Figure 2 shows the part of K_13 where the line L_0 and the two lines of the triplet (P_1, [1, 0], [1, 1]) are colored.
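Before moving on, the coordinate description of PG(2, q) given above can be sanity-checked mechanically. The following sketch (ours) builds the points and lines for a prime q and verifies the incidence axioms:

from itertools import combinations

def pg2(q):
    # Points and lines of PG(2, q), q prime, in the coordinates used above:
    # affine points (x, y), slope points P_0..P_{q-1} on L, and the point P.
    affine = [("pt", x, y) for x in range(q) for y in range(q)]
    slopes = [("P", i) for i in range(q)]
    P = ("P", "inf")
    points = affine + slopes + [P]
    lines = [frozenset([("pt", x, (a * x + b) % q) for x in range(q)] + [("P", a)])
             for a in range(q) for b in range(q)]          # lines [a, b]
    lines += [frozenset([("pt", i, y) for y in range(q)] + [P])
              for i in range(q)]                           # vertical lines L_i
    lines.append(frozenset(slopes + [P]))                  # the infinity line L
    return points, lines

q = 3
points, lines = pg2(q)
assert len(points) == len(lines) == q * q + q + 1
assert all(len(l) == q + 1 for l in lines)
for p1, p2 in combinations(points, 2):
    assert sum(1 for l in lines if p1 in l and p2 in l) == 1
print("incidence axioms of PG(2, 3) verified")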
Main Result
Before proving Theorem 3 we state the following lemma. Lemma 3. For integers m ≤ n, A_α(K_m) ≤ A_α(K_n).
Proof. Given a coloring ς of H which performs A_α(H), we extend it to an acyclic coloring of G such that any two color classes contain a cycle, in a greedy way; that is, the edges of E(G)\E(H) are listed in some specified order, and we assign to the edge under consideration the smallest available color preserving the properties of the coloring. Now we have our main result. Theorem 3. Let n ≥ 13 be an integer. Then the achromatic arboricity of the complete graph of order n is bounded by $\frac{1}{4}n^{3/2}-\Theta(n) \le A_\alpha(K_n) \le \frac{1}{\sqrt{2}}n^{3/2}-\Theta(n)$. Proof. The upper bound is given in Theorem 1. To prove the lower bound, we use a strengthened version of Bertrand's Postulate, which follows from the Prime Number Theorem, see [5,9]: for ε > 0, there exists an N such that for all real x ≥ N there exists a prime q between x and (1+ε)x. Let ε > 0 be given, and suppose n > (N + 1)²(1 + ε)². Let x = √n/(1 + ε) − 1, so x ≥ N. We now select a prime q with x ≤ q ≤ (1 + ε)x. Then q² + q + 1 ≤ (x + 1)²(1 + ε)² = n. Since projective planes of all prime orders exist, it follows from Theorem 2 and Lemma 3 that A_α(K_n) ≥ A_α(K_{q²+q+1}) ≥ (q+1)(q²+q+2)/4. Since ε was arbitrarily small, the result follows.
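As an illustration of how the lower bound behaves, here is a sketch of ours combining Theorem 2 with the monotonicity lemma and a naive primality test:

def is_prime(q):
    return q > 1 and all(q % d for d in range(2, int(q ** 0.5) + 1))

def lower_bound(n):
    # Largest odd prime q with q^2 + q + 1 <= n, then Theorem 2 on K_{q^2+q+1}
    # and Lemma 3: A_alpha(K_n) >= (q+1)(q^2+q+2)/4.
    q = max(p for p in range(3, n) if is_prime(p) and p * p + p + 1 <= n)
    return (q + 1) * (q * q + q + 2) // 4

for n in (13, 31, 100, 1000):
    print(n, lower_bound(n), round(0.25 * n ** 1.5))   # bound vs n^(3/2)/4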
5 The achromatic arboricity of K n for small values of n.
In this section we study the achromatic arboricity of K_n for small values of n. Table 1 shows the exact values for 2 ≤ n ≤ 7, Figure 3 displays colorings that attain the lower bounds for 2 ≤ n ≤ 7, and Table 2 shows upper and lower bounds for 8 ≤ n ≤ 12.
It is easy to see that the upper bounds for n = 2, 3 equal the values of Table 1. For the case n = 4, if we suppose A_α(K_4) ≥ 4, then there are at least two color classes of size one, whose union does not contain a cycle, a contradiction; hence A_α(K_4) = 3. This case is also small enough to verify exhaustively, as sketched below.
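The exhaustive check mentioned above can be written in a few lines (ours); it confirms that an acyclic 3-coloring of K_4 with pairwise cycles exists while a 4-coloring does not:

from itertools import combinations, product

def acyclic(edges):
    # Union-find cycle check: a set of edges is a forest iff no edge joins
    # two vertices already in the same component.
    parent = {}
    def find(v):
        while parent.setdefault(v, v) != v:
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def complete_acyclic_coloring_exists(n, k):
    edges = list(combinations(range(n), 2))
    for coloring in product(range(k), repeat=len(edges)):
        if len(set(coloring)) < k:
            continue                        # must use all k colors
        classes = [[e for e, c in zip(edges, coloring) if c == i] for i in range(k)]
        if all(acyclic(cl) for cl in classes) and \
           all(not acyclic(a + b) for a, b in combinations(classes, 2)):
            return True
    return False

print(complete_acyclic_coloring_exists(4, 3))   # True:  A_alpha(K_4) >= 3
print(complete_acyclic_coloring_exists(4, 4))   # False: A_alpha(K_4) < 4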
For the case of n = 5, Lemma 1 says that A_α(K_5) ≤ 5 and that the smallest color class has two edges. Suppose A_α(K_5) = 5; then each color class has exactly two edges. If the edges of a color class induce a P_3, say abc, the edge ac is in a cycle with the vertices a, b and c, so the other six edges incident to the P_3 necessarily generate three different color classes (each of these color classes has to be connected). Hence, the edge ac is in a color class which is a matching, say ac and de. Take the color class containing the edge bd. On one hand, this color class is the set of two edges bd and bc (or bd and ab), making a cycle with abd. On the other hand, this color class is the set of two edges bd and be, making a cycle with ac and be, a contradiction. Therefore, each color class is a matching of size two. Then the union of two color classes is a C_4, and the union of three color classes is necessarily a K_4. The remaining edges generate a K_{1,4}, which does not contain matchings of size two. In consequence, A_α(K_5) = 4.
For the case of n = 6, Lemma 1 says that A_α(K_6) ≤ 7 and that the smallest color class has two edges. If we suppose A_α(K_6) = 7, then there are six color classes of size two and one color class of size three. Since a color class of size two whose edges induce a P_3 would be incident to at most five more color classes, each color class of size two has to be a matching. Then the union of two color classes of size two is a C_4, and the union of three color classes of size two is necessarily a K_4, a contradiction, because there are at least six color classes of size two. Therefore, A_α(K_6) = 6.
For the case of n = 7, Lemma 1 says that A_α(K_7) ≤ 9 and that the smallest color class has two edges. If we suppose A_α(K_7) = 9, there are at least six color classes of size two; however, a P_3 is incident to at most six color classes, hence the color classes of size two are matchings, but there are at most three of them (on a K_4), a contradiction. If we suppose A_α(K_7) = 8, there are three color classes of size two, which are matchings in a K_4 subgraph, and there are five color classes of size three. On one hand, each color class of size three must be incident to three vertices of the K_4. On the other hand, there are at most four color classes of size three incident to three vertices of the K_4, a contradiction, and then A_α(K_7) = 7.
To end, we calculate the upper bounds given by Lemma 1 for A_α(K_n) for 8 ≤ n ≤ 12; the lower bounds attain their values via greedy colorings. | 4,296.6 | 2021-03-22T00:00:00.000 | [
"Mathematics"
] |
Normal-sized basal ganglia perivascular space related to motor phenotype in Parkinson freezers
Changes in basal ganglia (BG) perivascular spaces (PVSs) are related to motor and cognitive behaviors in Parkinson’s disease (PD). However, the correlation between the initial motor phenotype and PVSs distribution/burden in PD freezing of gait (FOG) remains unclear. In addition, the normal-sized PVSs (nPVSs) have not been well-studied. With high-resolution 7T-MRI, we studied nPVSs burden in BG, thalamus, midbrain and centrum semiovale. The numbers and volume of nPVSs were assessed in 10 healthy controls, 10 PD patients without FOG, 20 with FOG [10 tremor dominant (TD), 10 non-TD subtype]. Correlation analyses were further performed in relation to clinical parameters. In this proof of concept study, we found that the nPVS burden of bilateral and right BG were significantly higher in freezers. A negative correlation existed between the tremor score and BG-nPVSs count. A positive correlation existed between the levodopa equivalent daily dose and BG-nPVSs count. The nPVS burden correlated with the progression to FOG in PD, but the distribution and burden of nPVS differ in TD vs. non-TD subtypes. High resolution 7T-MRI is a sensitive and reliable tool to evaluate BG-nPVS, and may be a useful imaging marker for predicting gait impairment that may evolve into FOG in PD.
INTRODUCTION
Freezing of gait (FOG) is a common symptom in the advanced stages of Parkinson's disease (PD). FOG increases the risk of falls and fall-related injuries with devastating impact on the quality of life of individuals with PD, often triggering a downward spiral of frailty and leading to depression, social isolation, activity avoidance, and fear of falling [1][2][3]. While classically occurring in advanced PD, FOG and falls can be seen in earlier stages, particularly in individuals who suffer from the postural instability gait difficulty (PIGD) subtype, when compared to the tremor-dominant (TD) subtype [4][5][6].
The mechanism of FOG in PD has been intensively studied. The "interference model" describes a functional interruption between cortical structures and brainstem regions involved in gait control, possibly contributing to FOG [7,8]. Similarly, the "decoupling model of FOG" suggests that a breakdown in the coupling between posture preparation by the supplemental motor area and step initiation by the motor cortex may be responsible for the "start hesitation" in FOG [9]. It has been suggested that FOG may be due to a failure to generate adequate amplitudes of the intended movement [10]. The anatomical basis might be a failure of structural and functional integrity in the locomotion control system. For example, widespread white matter damage involving sensorimotor-related and extramotor pathways has been reported in PD-FOG patients. Individuals with diffuse small vessel disease can frequently manifest Parkinsonian symptoms, while neuroimaging demonstrates diffuse white matter hyperintensities (WMH). In addition, more severe WMH was found in the PIGD subtype of PD [11][12][13][14]. Left temporal WMH is related to falls in idiopathic PD [15]. Taken together, the white matter integrity and the subcortical network [involving regions such as the basal ganglia (BG), the thalamus and the mesencephalic locomotion center] are essential to maintain gait and balance. When damaged, FOG and balance impairment can occur.
Since ePVSs are correlated with PD motor and cognitive impairment, one can postulate that the distribution and volume of the normal-sized PVSs (nPVSs) may have certain clinical significance in PD. Previous studies have mainly focused on ePVSs due to limits in imaging resolution. nPVSs are typically invisible due to their small size, in the range of 0.13-0.96 mm [39]. Seven Tesla (7T) MRI, with increased spatial resolution and signal-to-noise ratio, increases the detection of nPVSs [40,41]. The 7T sequences have been optimized to provide detailed assessment of the distributions of nPVSs in the white matter and subcortical nuclei [42].
In this proof of concept study, with 7T MRI, we investigated the clinical and neuroimaging significance of nPVS in important locomotion centers, including the BG, thalamus, midbrain, and CSO in PD freezers with different motor phenotypes. We hypothesized that the count and volume of nPVSs in BG may be different compared to those of age-matched healthy controls (HCs). The nPVSs burden of BG could potentially serve as a biomarker for PD gait impairment, and may further be a factor in distinguishing the motor subtypes in PD patients.
Demographic and clinical characteristics
The demographic and clinical characteristics of the HCs, PD patients without FOG [FOG(-)], PD patients with FOG of the tremor dominant subtype [FOG(TD)], and those with FOG but of non-TD type [FOG(TD-)] are shown in Table 1. There were no significant differences found in age, sex ratio, vascular risk factors, WMH burden or education level among the four groups. A majority of participants in the two FOG groups had a moderate to severe degree of FOG (Table 1). Among the three PD groups, the tremor score was significantly higher in the FOG(TD) group. The axial motor score, akinetic score, levodopa equivalent daily dose (LEDD), Hamilton Depression Scale (HAMD) and Hamilton Anxiety Scale (HAMA) scores were higher in the freezers.
Analysis of the nPVSs in basal ganglia
With 7T MRI, the resolution of the images was high enough to allow analysis of the nPVS burden (Figure 1). nPVS number and volume calculations for the PD subgroups and the HC group were performed (Table 2). Test-retest reliability using the two-way mixed model for absolute agreement over a one-month interval reached 0.79 and 0.80 for nPVS number and volume in the BG region, 0.72 and 0.74 in the thalamic region, 0.89 and 0.93 in the CSO region, and 0.77 and 0.83 in the midbrain, respectively.
The nPVS numbers of the right and bilateral BG were significantly higher in the FOG(TD-) group than in the rest of the groups using one-way ANOVA (Table 2 and Figure 2). The volume of the nPVSs of the FOG(TD-) group was significantly higher than that of the other groups when compared unilaterally, bilaterally or choosing a single slice with the highest count (Table 2). No comparable difference was seen in the thalamus or midbrain using the scale system previously described (Figure 2) [43].
Correlation between BG-nPVS burden with clinical features and WMH burden
In PD freezers, a significant negative correlation existed between the tremor score and the BG-nPVS count (r = -0.49, p = 0.04, Figure 3A), and a positive correlation was found between the LEDD and the nPVS count of the BG (r = 0.47, p = 0.04, Figure 3B). An overall positive correlation between WMH burden and BG-nPVS (r = 0.37, p = 0.02, Figure 3C) was found for all 40 participants. There were no correlations between nPVS burden and the UPDRS-III or other clinical parameters. There were no correlations between BG-nPVS volume and clinical parameters. There was no difference in the nPVS count and burden in the other areas assessed, nor was any clinical correlation detected.
DISCUSSION
In this proof of concept study, we investigated the utility of ultra-high field 7T MRI to assess nPVS burden and determine whether nPVS counts and volume could serve as imaging tools to distinguish motor phenotypes in PD freezers. First, we established that 7T MRI could be a reliable tool in assessing nPVS. The significance of normal sized nPVS in BG has not been well studied partially due to the challenges associated with nPVS quantitation using lower resolution MRI scanners. Conversely, using a 7T MRI scanner with the higher field strength makes it possible to quantitate nPVSs.
PVSs are microscopic but become visible on MRI when enlarged, using the widely available 1.5 and 3T scanners. PVSs are commonly seen in healthy adults, in the BG and CSO in up to 60% of individuals [44]. There is clinical relevance to PVSs. PVSs related to small vessel diseases are contributing factors to stroke and dementia [45,46]. It has also been proposed that ePVSs are relevant to the development of neurodegenerative disease [47]. In PD patients, periventricular WMH, brain atrophy, and BG-ePVSs have been noted to impact motor and cognitive functions [16,26]. A previous study has shown that vascular factors might be involved in the pathophysiology of the PIGD motor phenotype [48]. Postural and gait control involves the integration of sensorimotor, BG, thalamus and cerebellum circuitries [49]. A recent study exploring the association between small-vessel diseases and motor symptoms of PD showed different clinical associations: a close association between ePVSs in the BG and the tremor score, as well as between deep WMH and the axial motor score, was seen [50]. However, this study did not explore correlations with FOG.
The current study demonstrated a link between motor phenotypes and BG-nPVS burden. We first showed that nPVS burden in the BG was significantly higher in PD patients with FOG than those without FOG and the control group. The nPVS burden was significantly higher in right BG and bilateral BG among the PD freezers. Lateralization of the structural and functional connectivities in the human brain was reported in multiple studies of FOG, and it was noted that FOG was strongly related to structural deficits in the right hemisphere's locomotor network [51][52][53][54]. Right hemisphere PD pathology has been associated with more impairments in multiple cognitive domains, including verbal recall, semantic verbal fluency, visuospatial analysis, and attention span [55]; it is also related to slower gait [56] and poorer axial mobility [57]. Functional connectivity was reduced within the executive-attention network in FOG patients within the right middle frontal gyrus [58]. In our study, it is hard to conclude whether the lateralization is significant due to the small sample size.
We observed a less severe nPVS burden with the initial motor phenotype being TD subtype than the non-TD subtypes in PD freezers. The negative correlation between the tremor score and the nPVS number of BG may partially explain why the TD subtype carries a better prognosis. Response to levodopa therapy differs in PD subtypes, and it is known that axial symptoms, i.e. gait and balance tend to be less responsive to dopaminergic agents [59,60]. The higher LEDD dose in the freezers and the positive relationship between LEDD and BG-nPVS number are consistent with the previous observations that poorer levodopa response occurs when higher damage to the neurocircuitry is evident in the PIGD subtype.
We have shown a positive correlation between WMH burden and BG-nPVS. Given the known correlation between WMH and gait deficit in PD [11][12][13][14], and the evolving evidence on BG-ePVSs and motor symptoms [36,37] and cognitive dysfunction [27,38] in PD, our study suggests that increased nPVSs in the BG region may act as a biomarker of gait decline if this finding holds in a larger study. Whether such changes relate to disruptions of the neural circuitry for gait control warrants further investigation with structural and functional connectivity studies. There was no association between CSO nPVS burden and PD motor symptoms, which is consistent with previous studies in which the severity of axial motor impairments was not associated with the intensity of the periventricular WMH, suggesting certain functional distinctions between BG PVSs and CSO PVSs [61,62]. Although not well studied, nPVS distribution and burden may also reflect degenerative processes similar to those of ePVSs. Advances in recent imaging technologies make it possible to assess such microstructural changes in vivo, especially with high-field MRI scanners. Such assessment in relation to clinical parameters can potentially serve as a biomarker to monitor disease progression and more precisely differentiate disease phenotypes.
The strengths of our study include the application of a novel tool to assess a potential imaging marker for PD. Although the literature on PVSs in PD is growing, and there is more evidence showing the link between higher BG PVS burden and future cognitive decline [38] and motor manifestations [36], using high resolution 7T MRI to compare the distribution and volume of nPVSs in the BG, and identifying how these parameters correlate with motor phenotype in PD, is novel. We established a method and identified the role of nPVSs in a specific group of PD patients, with a focus on the most disabling motor symptom, FOG. With technology advancing rapidly, building on knowledge and expertise with better imaging tools will aid further development in the field. We speculate that research work with 7T MRI scanners will bring new insights and soon add new knowledge to clinical practice. This proof of concept study encourages further investigation in future large-scale studies when 7T MRI scanners are more readily available. There are some limitations. This is a single-centered proof of concept study with a relatively small sample size. Further, this study focuses on FOG, since it is one of the most disabling symptoms in PD and its mechanism is not fully clear. Due to these factors, we cannot extrapolate the findings to all PD patients or explore sex differences. Future large prospective studies will provide more insight to further investigate the utility of 7T MRI in evaluating nPVSs as an imaging biomarker for disease phenotyping and trajectory.
CONCLUSIONS
We proposed a method using a high resolution 7T MRI to evaluate nPVS in BG to provide a potential imaging marker for predicting gait impairment in PD. The current study demonstrates that the nPVS burden correlates with the progression to FOG in PD patients, but the distribution and burden of nPVS may differ in people with or without tremor as initial motor presentation. High resolution 7T MRI is a sensitive and reliable tool to evaluate BG-nPVS, and may be a useful imaging marker for predicting gait impairment that may evolve into FOG in PD.
Study participants
Twenty PD patients with FOG [10 FOG(TD), 10 FOG(TD-)], 10 PD(FOG-), and 10 age- and sex-matched HCs were recruited from the Department of Neurology of Sir Run Run Shaw Hospital. PD was diagnosed according to established clinical criteria [63], and FOG was defined as a score of one or more on item 3 of the New FOG questionnaire (NFOG-Q) [64] or by history and examination by two experienced movement disorders neurologists. All participants were examined by experienced neurologists with a full neurological examination. Patients with gait issues secondary to visual impairments, sensory ataxia, and orthopedic issues were excluded. We also excluded patients with atypical Parkinsonism. All participants with moderate to significant small vessel disease were excluded, and HCs reported no history of neurological or psychiatric disorders. Clinical assessment included the Unified Parkinson's Disease Rating Scale (UPDRS) for PD motor symptoms and the NFOG-Q for FOG severity, respectively. Cognitive function and mental health were evaluated using the Mini Mental State Examination (MMSE), HAMD and HAMA. LEDD was calculated [65]. Other inclusion criteria of the study included disease duration ≥ 5 years and Hoehn-Yahr stage < 4. Patients with significant cognitive deficits preventing them from signing consent, and those with motor symptoms secondary to other etiologies, were excluded. Based on the initial motor phenotypes, PD-FOG patients were divided into two groups, FOG(TD) and FOG(TD-) (PIGD and indeterminate) [4]. The PVS assessment is illustrated in Figure 1. For the BG, thalamus, and CSO, nPVSs were assessed on the slice with the highest number unilaterally for the left or right side, followed by the sum of both sides. We then assessed a single slice with the highest total nPVS count. For the midbrain, given that it is a small structure, nPVSs were counted within all slices showing the midbrain. A 4-point visual rating scale (0 = no PVSs, 1 = PVSs < 10, 2 = 11-20 PVSs, 3 = 21-40 PVSs, 4 = PVSs > 40) was used to grade the severity of PVSs [66]. PVS severity was then assessed using a semiquantitative scale (none/mild = 0/1, moderate = 2, frequent/severe = 3/4) [43]. All patients were included for test-retest reliability testing. The WMH burden for all participants was assessed by using a semiquantitative rating scale [67].
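For clarity, the two rating scales quoted above translate directly into a small helper (a sketch of ours; note that the published 4-point cut-offs leave a count of exactly 10 ambiguous, which we bucket with grade 2 here):

def pvs_visual_rating(count):
    # 4-point visual rating scale from the text: 0 = none, 1 = < 10,
    # 2 = 11-20, 3 = 21-40, 4 = > 40 PVSs.
    if count == 0:
        return 0
    if count < 10:
        return 1
    if count <= 20:
        return 2
    if count <= 40:
        return 3
    return 4

def pvs_severity(rating):
    # Semiquantitative collapse: none/mild = 0/1, moderate = 2, frequent/severe = 3/4.
    labels = {0: "none/mild", 1: "none/mild", 2: "moderate",
              3: "frequent/severe", 4: "frequent/severe"}
    return labels[rating]

for c in (0, 7, 15, 33, 50):
    r = pvs_visual_rating(c)
    print(c, r, pvs_severity(r))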
Statistical analysis
Statistical analysis was performed with SPSS Statistics (Version 22, IBM Corporation, Armonk, NY, USA). Categorical variables were analyzed with Fisher's exact test. Continuous variables were analyzed with one-way ANOVA. Correlation analyses between nPVS burdens (nPVS number and volume) and clinical features, namely MMSE, HAMA, HAMD, UPDRS-III and LEDD, were conducted using Spearman correlation analysis. In addition, we also analyzed the correlation between the nPVS number of the BG and WMH burden. P < 0.05 was considered statistically significant.
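The same tests are straightforward to reproduce with SciPy; below is a minimal sketch of ours using synthetic data standing in for the actual measurements:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-ins for the real measurements (illustration only).
tremor_score = rng.normal(10, 3, 20)
bg_npvs_count = 30 - 1.2 * tremor_score + rng.normal(0, 4, 20)

rho, p = stats.spearmanr(tremor_score, bg_npvs_count)   # Spearman correlation
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

groups = [rng.normal(mu, 5, 10) for mu in (20, 21, 29, 30)]
F, p = stats.f_oneway(*groups)                          # one-way ANOVA
print(f"ANOVA F = {F:.2f}, p = {p:.4f}")

odds, p = stats.fisher_exact([[6, 4], [3, 7]])          # 2x2 categorical table
print(f"Fisher's exact p = {p:.3f}")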
With SPSS, intra-class correlation coefficients (ICC) were calculated. The ICC analysis assessed the test-retest reliability via the two-way mixed model for absolute agreement. ICC values between 0.60 and 0.74 were defined as good, and values above 0.75 as excellent.
AUTHOR CONTRIBUTIONS
Wen Lv and Yumei Yue contributed to the organization and execution of the project, the design and execution of the statistical analysis, and writing the first draft of the manuscript. Ting Shen contributed to data collection and data analysis. Xingyue Hue contributed to data analysis. Lili Chen, Fei Xie, Wenying Zhang, Baorong Zhang and Yaxing Gui contributed to data collection. Hsin-Yi Lai and Fang Ba contributed to the conception, organization and execution of the project, as well as review and critique of the manuscript and the data analysis.
"Biology",
"Psychology"
] |
Mol2Net: Machine Learning and Atom-Based Quadratic Indices for Proteasome Inhibition Prediction
The atom-based quadratic indices are used in this work together with machine learning techniques that include support vector machine, artificial neural network, random forest and k-nearest neighbor. This methodology is used for the development of two quantitative structure-activity relationship (QSAR) studies for the prediction of proteasome inhibition. A first set consisting of active and non-active classes was predicted with model performances above 85% and 80% in the training and validation series, respectively. These results provide new approaches for proteasome inhibitor identification in virtual screening procedures.
Introduction
The ubiquitin-proteasome pathway (UPP) is responsible for the selective degradation of the majority of the intracellular proteins in eukaryotic cells and regulates nearly all cellular processes [1]. Dysfunction of the ubiquitination machinery or of the proteolytic activity of the proteasome is associated with many human diseases [2]. Proteasome inhibitors have been developed and are effective for some disorders, but they sometimes show detrimental effects and resistance. Therefore, efforts are currently directed to the development of new therapeutics with adequate potency and safety properties that target enzyme components of the UPP [3,4].
Ligand-based molecular design and QSAR approaches are promising fields with several applications in drug development, which use a battery of novel molecular descriptors and different classification algorithms for in silico virtual drug screening studies [5,6]. In the present research, we use and compare a set of different machine learning (ML) techniques using the 2D atom-based quadratic indices as attributes, with the objective of performing the QSAR modeling of two datasets. The first dataset allows molecules with proteasome inhibitory activity to be separated from inactive ones, and the second provides the numerical prediction of the EC50.
Results and Discussion
In the case of our classification study, we reduced the inactive subset by removing all the cases that fall outside of the applicability domain of our model. Therefore, the dataset retains 705 chemicals, 258 of them active and the remaining 447 inactive. This 705-compound dataset used for the classification study generates 529 compounds in the training set (TS) and 176 compounds in the prediction set (PS). Based on the aspects mentioned above, a first step of non-supervised feature-reduction filtering was done, using Shannon's entropy as a measure and keeping ca. 30% of the features (4143). In a second step, a supervised feature-reduction filtering was done. In this stage, the process was carried out for the class problem; the features were reduced by 70%, keeping a total of 1248 for the class data. These feature selection processes were carried out with the IMMAN software, an in-house program. Later, on the two-class data the best-subset search was done, resulting in 43 selected variables. Then wrapper methods associated with the ML techniques were applied to reduce the data sets, giving different data subset combinations. Finally, all these subsets were used to generate diverse ML-QSAR models, keeping those with the best results for each algorithm. The results for each ML technique used to develop classification QSAR models to predict proteasome inhibitors are shown in Fig. 1.
As can be observed in Fig. 1, for the TS the fitted models using the RF and MLP techniques showed the best accuracies (Ac = 90.17% and Ac = 89.22%), with Matthews correlation coefficient (MCC) values of 0.79 and 0.77, respectively. In the case of the PS, the performance of these two QSAR models was 86.36% (MCC = 0.70) and 83.52% (MCC = 0.64), respectively. Moreover, low values of the false positive rates can be observed, which ensures good performance when running virtual high-throughput screenings, diminishing the wrong evaluation of predicted positive cases. In the same Fig. 1 it can also be noted that RF outperforms the other models in most of the quality parameters. Besides, the rest of the models also depicted adequate performances, with accuracy values above 85% in the case of the TS and 80% for the PS.
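For readers who want to reproduce the shape of this workflow, here is a hedged sketch with scikit-learn; the sample counts, train/prediction split and number of selected variables mirror the text, but the features are synthetic stand-ins for the atom-based quadratic indices and everything else is our assumption:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, matthews_corrcoef
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 705-compound set (258 active / 447 inactive)
# with 43 selected variables; the real descriptors are not reproduced here.
X, y = make_classification(n_samples=705, n_features=43,
                           weights=[447 / 705], random_state=0)
X_tr, X_ps, y_tr, y_ps = train_test_split(X, y, test_size=176,
                                          stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
for name, Xs, ys in (("TS", X_tr, y_tr), ("PS", X_ps, y_ps)):
    pred = clf.predict(Xs)
    print(f"{name}: Ac = {accuracy_score(ys, pred):.2%}, "
          f"MCC = {matthews_corrcoef(ys, pred):.2f}")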
Materials and Methods
In this study the molecular descriptors, atom-based quadratic indices, were calculated using the TOMOCOMD software version 1.0 [7]. We also applied the different feature selection methods implemented in the IMMAN software [8]. Moreover, the attribute selection method based on Best-Subset Search (BSS) of LDA discriminant analysis was used [9]. Later, the wrapper and ranker methods of the Waikato Environment for Knowledge Analysis (WEKA) [10] were considered. As a final stage, parameter tuning optimization for each ML technique was performed to find the best ML-QSAR models.
A dataset derived from a luminescent cell-based dose-titration retest counterscreen assay to identify inhibitors of the proteasome pathway was selected from PubChem BioAssay (AID 2486), where the names, structures, compound identifiers (CID), and activities can be found. First, a curation process of the database was carried out, removing salts and inorganic compounds. The main difficulty of the ML approaches is to select attributes from a large list of candidates to describe the data. This is because the complete set of molecular descriptors is not needed for the description of proteasome inhibition. In this sense, the addition of non-relevant attributes can add noise to the ML systems [10]. Therefore, feature selection approaches are very suitable to deal with this kind of problem. In this work, different schemes of attribute selection, including filter and wrapper approaches implemented in WEKA [10], are examined to select the best attribute subset for each ML technique. Some details, advantages and drawbacks of the two approaches can be reviewed in many works dealing with this subject [11][12][13].
Machine learning methods show impressive performance in a wide diversity of studies involving automated text classification and drug design [14][15][16]. Based on this, the machine learning approaches selected were support vector machine, artificial neural network and k-nearest neighbor, all included in the list of the top ten algorithms used in data mining [17]. Besides, the random forest technique was included because it is a fast and robust approach with recent successful applications to many problems [18][19][20]. For each ML method applied in this study, various schemes of attribute selection were examined, and for each selected subset, various models were developed and checked.
Conclusions
In this work, a QSAR study on a diverse and enlarged proteasome inhibitor database collected from PubChem BioAssay is shown for the first time. The random forest algorithm demonstrated to be the best technique for modeling the proteasome inhibitory activity, with high accuracy values in the training and test sets. The low false positive rates observed validate the presented ML-QSAR workflow for separating active proteasome inhibitor compounds from inactive ones.
Figure 1. Performance of the ML-based QSAR classifiers. | 1,496.6 | 2015-12-04T00:00:00.000 | [
"Chemistry",
"Computer Science",
"Medicine"
] |
Emerging spin–phonon coupling through cross-talk of two magnetic sublattices
Many material properties such as superconductivity, magnetoresistance or magnetoelectricity emerge from the non-linear interactions of spins and lattice/phonons. Hence, an in-depth understanding of spin–phonon coupling is at the heart of these properties. While most examples deal with one magnetic lattice only, the simultaneous presence of multiple magnetic orderings yields potentially unknown properties. We demonstrate a strong spin–phonon coupling in SmFeO3 that emerges from the interaction of both iron and samarium spins. We probe this coupling as a remarkably large shift of phonon frequencies and the appearance of new phonons. The spin–phonon coupling is absent for the magnetic ordering of iron alone but emerges with the additional ordering of the samarium spins. Intriguingly, this ordering is not spontaneous but induced by the iron magnetism. Our findings show an emergent phenomenon arising from the non-linear interaction of multiple orders, which need not occur spontaneously. This allows for a conceptually different approach in the search for yet unknown properties.
The richness of physical phenomena in correlated oxides is rooted in the interaction and competition of coexisting properties and instabilities. A fundamental facet in materials with magnetic ions is the coupling of magnetic spins, the crystal lattice and lattice vibrations. From this interaction fascinating phenomena emerge, such as superconductivity 1, multiferroicity 2,3, the giant thermal Hall effect 4 or ferroelectric phase transitions 5. The presence of two magnetic ions in different sublattices makes the interaction particularly complex, but likewise intriguing. For instance, the cross-talk of transition-metal and rare-earth ions (R 3+) in complex oxides leads to phenomena such as spin reorientations, magnetic compensation 6,7, solitonic lattices 8 or multiferroicity 9,10, including domain inversion 11 and the interconversion of domains and domain walls 12,13. At first sight, these phenomena seem to be of magnetic nature only. However, upon closer inspection, the coupling to the crystal lattice and related lattice vibrations turns out to be vital. For instance, multiferroicity entails an ionic displacement, and tilts of the oxygen octahedra can give rise to a net-magnetization by a canting of the spins and thereby steer the rare-earth magnetism induced by the transition-metal ion 14. Hence, the combination of the primary magnetic and structural orders is considerably more than the sum of its parts, manifesting in the emergence of enhanced or additional properties.
Despite the important interactions of magnetism and structure, spin-phonon coupling arising from the cross-talk of two magnetic ion subsystems remains largely unexplored. The observation and understanding of such cross-talk are at the heart of the present work. We show how the interaction of two magnetic sublattices in SmFeO 3 leads to the rise of an extraordinarily strong coupling between spins, lattice and lattice vibrations. First, for the high-temperature regime, we reveal a softening of the elastic moduli during the spin reorientation. Second, below room temperature, an unprecedentedly strong spin-phonon coupling arises from the non-spontaneous ordering of the Sm 3+ spins thanks to the exchange field of the iron magnetism. Here we find strong indications that this "spin-spin-phonon" coupling gives rise to a phase change in SmFeO 3. Hence, the interaction of both magnetic sublattices drives a strongly non-linear material response, entirely absent for the individual magnetic sublattices.
Results and discussion
SmFeO 3 crystallizes in a perovskite-type structure with the space group Pnma 7. The primary structural distortions from the ideal perovskite structure are tilts of the oxygen octahedra, a−b+a− in Glazer's notation 15 (Fig. 1a). At T N = 680 K, the iron spins order antiferromagnetically along the c-axis (G z -type) 7. A spin canting induces a weak-ferromagnetic moment along the b-axis (F y) and an A-type component along the a-axis (expressed in Bertaut's notation 16) (Fig. 1b). Between 450 K and 480 K, the iron-spin lattice experiences a spin reorientation to a C x G y F z -type ordering, triggered by the anisotropy change of the samarium moments 7. Subsequently, the samarium spins align in the exchange field of iron. This becomes clear from magnetic measurements as a decline of the overall magnetization below 140 K 17,18, with a full compensation at 3.9 K 19. Staub et al. showed that the Fe 3+ magnetism can even induce an antiferromagnetic order on the rare-earth sublattice below the spin reorientation 20. Warshi et al. showed that this low-temperature ordering involves the formation of a cluster glass rather than a discrete transition 21. The high temperatures of the magnetic ordering T N and the spin reorientation 7 allow us to disentangle high- and low-temperature phenomena. This makes SmFeO 3 a model material, unlike other rare-earth transition-metal oxides, in which most magnetic interactions occur far below room temperature.
To assess the interaction of spins and phonons, we performed Raman scattering complemented with resonant ultrasound spectroscopy (RUS) from 800 K down to 4 K. Both are excellent probes for detecting and tracing even subtle structural and magnetic changes 22,23. Raman spectroscopy probes directly the optical phonons. Thanks to a recent work 24, we can assign all 24 Raman-active phonons (Γ = 7A g + 5B 1g + 7B 2g + 5B 3g) 25 to their specific vibrational patterns (see Fig. 2a and Supplementary Note 1). RUS provides a highly sensitive probe of static and dynamical lattice distortions that accompany relaxational processes and phase transitions 26,27. In an RUS experiment, variations of elastic moduli scale with the square of the frequencies of individual resonances, which are dominated by shearing motions, and acoustic loss is expressed in terms of the inverse mechanical quality factor, Q −1. Combining both techniques provides access to the form and strength of both spin-phonon and spin-lattice coupling.
Fig. 1 a Crystal structure 55. Oxygen, iron and rare-earth ions are given in red, green and turquoise, respectively; FeO 6 octahedra in pale blue. Solid and dashed lines describe the orthorhombic unit cell and the pseudo-cubic setting, respectively. b Evolution of the magnetization (data taken from Ref. 19): at the Néel temperature T N, the Fe 3+ spins order in an A x F y G z -type fashion. At the spin reorientation, the magnetic order changes to C x G y F z. The spin canting leads to a net-magnetization. Both Fe 3+ spin structures are sketched in pseudo-cubic settings. At low temperatures, the iron magnetism induces the magnetic Sm 3+ sublattice with a net-magnetic moment (turquoise arrows) aligning antiparallel to the iron moments, which leads to a magnetic compensation point T comp at 3.9 K.
Néel temperature (T N = 680 K). To understand the spin-phonon coupling, we start by investigating how the ordering of the iron spins at T N = 680 K affects the vibrational system. When crossing T N, both Raman and RUS data show no discernible discontinuity, or change in gradient, in either the wavenumber or the full width at half maximum (FWHM), or in the shear elastic moduli or the acoustic loss, respectively. This is illustrated by two representative Raman bands (Fig. 2b, c and Supplementary Notes 1, 2) and an RUS acoustic resonance (Supplementary Fig. 4b). This indicates that spin-lattice coupling for the Fe 3+ magnetic order alone is very weak or absent, consistent with measurements of the lattice parameters 28. The absence of spin-phonon coupling is surprising in the light of classical systems such as orthochromites and orthomanganites 29,30, where spin-phonon coupling occurs at T N. This observation can be understood by the lack of a Jahn-Teller distortion in orthoferrites, which, in these other cases, provides a stronger and more direct link between the electronic subsystem and the crystal structure. We conclude that the magnetic order of iron alone has no detectable impact on the (an)elastic properties or the vibrational system.
Spin reorientation (T SR = 450 to 480 K). With decreasing temperature, the magnetic Sm 3+ anisotropy changes. This change in anisotropy induces a rotation of the iron spin system 7 (Fig. 1b), marking the incipient cross-talk between the iron and the samarium magnetism. Unlike at T N, the RUS data show elastic softening by up to a few percent and a closely correlated increase in acoustic loss between 460 and 480 K, before f 2 reverts to the same trend below 460 K as observed above 480 K (Fig. 2e-g, Supplementary Figs. 4-6). Observing the same temperature evolution of resonance frequencies before and after the onset of magnetic order means that the magnetic order parameter can only be very weakly coupled to strain, or that magnetoelastic coupling is completely absent, consistent with the literature and our findings at T N 28,31. Further, this lack of coupling of the magnetic order parameters with macroscopic strains, together with the inverse correlation between the variations of f 2 and Q −1, indicates that the softening in the transition interval is due to anelastic relaxation. The mechanism responsible for the concomitant changes of elastic moduli and acoustic losses is not known, other than that it involves anelastic relaxations of magnetoelastic origin. Strain relaxation of the structure in response to a dynamic stress is required, and one possibility is the existence of local regions with monoclinic distortions. However, although a monoclinic phase would occur in a continuous spin rotation 32, a coherent monoclinic distortion has not been detected by X-ray diffraction in orthoferrites 32,33. Evidence from NMR spectroscopy that there could be local monoclinic symmetry in these two phases 34,35 has also been disputed 33 (for further information see Supplementary Note 4).
In contrast to the strong RUS anomaly, the frequencies of the optical Raman phonons show no observable change through the spin reorientation (Fig. 2b, c). This is consistent with the absence of changes in strain and with the timescale of the relaxational effects, on the order of ~10^-6 s (ultrasound frequency), which would not be detected on the phonon timescale of ~10^-12 s. The FWHM (Supplementary Note 2) is characterized by a stagnation around the spin reorientation, and, in turn, the phonon lifetime does not lengthen any further. This indicates a reduced phonon correlation length arising from a non-collective rotation of the iron spins and the competition of both magnetic phases (for further discussion see Supplementary Note 2). There is no evidence for a collective rotation and a resulting symmetry breaking on the phonon length scale, i.e., the intermediate length scale between the strictly local scale of NMR and the macroscopic length scale of X-ray diffraction.
To conclude the high-temperature analysis, we find that the elastic moduli of SmFeO 3 soften significantly during the spin reorientation by a magnetoelastic relaxation mechanism, present only in the reorientation state and driven by the Fe 3+ -Sm 3+ interplay. The magnitude of any macroscopic strains coupled with the magnetic order parameters must be very small, consistent with the weak coupling inferred above.
Fig. 2 High-temperature evolution of lattice vibrations and elastic properties. a A g -mode Raman spectrum of SmFeO 3 at room temperature in z(yy)z̅ configuration using Porto's notation 37 (symbols to the left and right of the parentheses denote the propagation direction, while symbols inside the parentheses indicate the polarization of incident and scattered light, respectively). b, c Evolution of the frequency of the A g (3) and A g (4) Raman modes from 300 to 850 K, across the spin reorientation T SR and Néel T N temperatures. For the temperature evolution of the complete Raman spectra, the phonon frequencies and FWHM, please refer to Supplementary Notes 1 and 2. d Temperature evolution of a representative resonance peak in RUS spectra from room temperature to 700 K, thus above the magnetic ordering (this behavior is alike for all ultrasound resonance bands; see Supplementary Figs. 5 and 6). e Magnification of the RUS spectra across the spin reorientation interval, revealing a significant deviation in elastic moduli. f Evolution of the square of the frequency, f 2, which scales with some combination of single-crystal elastic moduli, showing a softening through the reorientation transition.
Induced Sm 3+ ordering (T < 300 K). Having observed emergent magnetoelastic properties as the samarium acts on the magnetic order of iron, we now turn to the reverse case with iron acting on samarium. Upon cooling below the spin reorientation, the iron magnetism induces a non-spontaneous ordering of the Sm 3+ spins. Our Raman spectra reveal two types of spectral anomalies in this regime. First, the frequency evolution of several vibrational bands dramatically deviates with temperature from the typical anharmonic behavior (Fig. 3). Second, we observe the emergence of new Raman features (Fig. 4). In comparison with the Raman results, RUS is virtually featureless, with only subtle anomalies at 250 K that could be consistent with magnetoelastic coupling (see Supplementary Note 8).
The observed deviations of the Raman frequencies from a typical thermal behavior are unprecedentedly strong in perovskites. While spin-phonon coupling commonly leads to Raman shifts of a few wavenumbers at most 29,30,36, we observe deviations of over 10 cm −1 and up to 8%. To understand the underlying mechanism of the deviations, we take a closer look at the vibrational patterns of the affected Raman modes. The low-frequency B 3g (1) and A g (2) modes are pure samarium vibration modes along the y- and z-axes, respectively, while B 1g (2), A g (3) and B 2g (3) are rotation modes of the octahedra, which include samarium displacements. The anomalies of the pure samarium vibrations (B 3g (1) and A g (2)) are a clear sign of a modification of the Sm 3+ sublattice emerging from the non-spontaneous alignment of the samarium spins. Changes of the tilt vibrations, B 1g (2), A g (3) and B 2g (3), are, at first sight, less intuitive to relate to the Sm 3+ spin ordering. However, in the same way as the rotations of the octahedra affect the interaction of neighboring iron spins, FeO 6 rotations alter the Fe-O-Sm coupling path. The orientation of the samarium-spin sublattice is steered by the trilinear coupling of Fe 3+ spins, Sm 3+ spins and the octahedral tilt system, as proposed theoretically by Zhao and co-workers 14.
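For reference (our addition, describing the standard analysis framework rather than a statement from this paper), the anharmonic baseline against which such deviations are usually measured is the Balkanski form, with the spin-phonon contribution entering through the spin-spin correlation function:
\[
  \omega_{\mathrm{anh}}(T) = \omega_0 + C\left(1 + \frac{2}{e^{\hbar\omega_0/2k_B T} - 1}\right),
  \qquad
  \omega(T) \simeq \omega_{\mathrm{anh}}(T) + \lambda\,\langle \mathbf{S}_i \cdot \mathbf{S}_j \rangle ,
\]
where C is typically negative (ordinary thermal softening), so that the deviation Δω(T) = ω(T) − ω_anh(T) provides a direct measure of the spin-phonon coupling constant λ.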
Hence, the effective field acting on the Sm³⁺ spins is of magnetostructural origin and links the samarium magnetism to the rotations of the octahedra. The FWHM, which are commonly more susceptible to coupling phenomena, show anomalies for all phonon modes below 350 cm⁻¹, i.e. for the modes that include Sm³⁺ motions. Only motions in the x-direction are not affected, which epitomises the naturally anisotropic character of the Sm³⁺-Fe³⁺ spin interaction. For a further discussion see Supplementary Note 2. Overall, the interaction between Sm³⁺ and Fe³⁺ gives rise to the emergence of spin-phonon coupling, which we probe as a strong anomaly of the samarium and octahedron vibrations.
New Raman features are highlighted in Fig. 4 as red-shaded areas. These features are not observed at ambient conditions but emerge gradually with decreasing temperature at 112 and 224 cm⁻¹ in z(xy)z̄ as well as at 130 and 287 cm⁻¹ in y(xz)ȳ configuration, using Porto's notation [37]. To better illustrate the emergence, we show in … To understand the consequences that come with the emergence of new Raman features, we need to identify their nature. Firstly, the new bands cannot be previously masked vibration modes, since all vibrational bands of Pnma symmetry in SmFeO₃ have been identified [24]. We need to consider the following typical origins for the new features: (i) Sm³⁺ crystal-field excitations: because of the non-centrosymmetric position of the Sm³⁺ ions, the ground-state energy of samarium is split. The interaction between rare earth and iron can impact these low energy levels, as earlier investigations of ReFeO₃ show [38-43]. However, such bands have only been observed by submillimeter spectroscopy, and low-frequency Raman modes are limited to the well-known magnon excitations of the Fe³⁺ spins [7]. Overall, electronic Sm³⁺ excitations are expected at frequencies below 100 cm⁻¹, much lower than the position of our new Raman-active features. Therefore, we exclude low-energy Sm³⁺ transitions as the origin of the new features.
(ii) Two-magnon bands result from the scattering of two magnetic excitations, not only at the Γ-point but also at the zone boundary. Therefore, they can be found at higher frequencies. Resulting from a second-order process, two-magnon bands are known to show broader and more asymmetric shapes than first-order processes; see, for example, TbMnO₃ upon the transition into the incommensurate multiferroic phase [44]. However, since the single-magnon bands do not show an anomalous behavior below room temperature, a magnonic nature of the new bands is unlikely (see Supplementary Note 7). (iii) New phonons: the shapes of the emerging bands at 112, 224 and 287 cm⁻¹ strongly resemble the neighboring phonon modes. In addition, these features emerge simultaneously with the anomalies of the vibrational modes. Excluding the previous options, we therefore assign these bands as new phonon bands.
To put these findings into context, we compare them with well-known systems that experience a magnetically induced phase transition. In collinearly antiferromagnetic RMn₂O₅ the symmetry breaking and induced ferroelectricity are barely detectable in the phonon spectrum (deviations < 0.5 cm⁻¹) and no new Raman bands are observed [45-48]. Likewise, phonon deviations in spin-spiral systems are below the resolution limit, e.g. in MnWO₄ [49]. In TbMnO₃, the symmetry breaking can only be hypothesized from shifts of the vibration frequencies smaller than 1.5 cm⁻¹, and new, extremely broad features are assigned to two-magnon excitations [50]. Overall, the anomalies in SmFeO₃ driven by the interplay of iron and samarium exceed these examples by one order of magnitude. Furthermore, it is instructive to compare SmFeO₃ to its ferroelectric sibling GdFeO₃. Unlike SmFeO₃, where the Sm³⁺ sublattice order is induced, the ferroelectric phase transition in GdFeO₃ results from the independent and spontaneous ordering of both the Gd³⁺ and the Fe³⁺ sublattices. The changes to the Raman spectra, however, show identical characteristics. With the ordering of the magnetic Gd³⁺ spins, the Gd³⁺ and octahedral tilt vibrations show the same anomalous deviations as in SmFeO₃, together with the emergence of new bands [51]. These similarities reinforce the assignment of the new features as phonons. Yet, the physical origins of the phenomena in GdFeO₃ and SmFeO₃ are strikingly different, resulting from classical spontaneous Gd³⁺-Gd³⁺ ordering, as opposed to non-spontaneous, iron-induced Sm³⁺ ordering. Therefore, in GdFeO₃ the anomalies are limited to the ferroelectric phase below 2.5 K [9], while in SmFeO₃ they occur at two orders of magnitude higher temperatures.
Furthermore, the emergence of new phonon bands itself, in direct analogy to GdFeO₃, provides evidence for a change of phase in SmFeO₃ through a symmetry lowering. Raman spectroscopy does not allow for an identification of the symmetry. From the evolution of the phonon modes and the lattice constants (see Supplementary Note 9), we can estimate, however, that the material remains orthorhombic, with the possible point groups 222 or mm2. This consequence of the Fe³⁺-Sm³⁺ interplay is astonishing and goes beyond the mere emergence of strong spin-phonon coupling.
In conclusion, we scrutinized the spin-phonon coupling in SmFeO₃, a model material for the interaction of two magnetic ions, throughout all magnetic phases. Any coupling between the magnetic order parameters of iron alone and strain is weak, such that there are no obvious anomalies in the evolution of the elastic moduli through the Néel critical point, on either side of the spin reorientation transition, or associated with the magnetic cluster-glass formation. This reflects, primarily, the fact that Fe³⁺ is not Jahn-Teller active. On the other hand, there is a significant anelastic effect through the temperature interval of the spin reorientation transition, which is ascribed to relaxational magnetoelastic effects of locally strained regions that might possibly be monoclinic. Once the Sm³⁺ and Fe³⁺ spins start interacting, however, a strong spin-phonon coupling emerges. This coupling manifests in the anomalous evolution of vibrational bands and in the emergence of new Raman-active modes. It is activated by the non-spontaneous, though intrinsic, ordering of the Sm³⁺ spins in the exchange field of the magnetic Fe³⁺ sublattice. We observe strong indications, identical to the ferroelectric phase transition in GdFeO₃, that this non-spontaneous ordering induces a phase change in SmFeO₃. In addition, our findings support the theoretical prediction of the trilinear coupling between the Fe³⁺ and Sm³⁺ spins and the FeO₆ tilt vibrations [14]. While this seminal theoretical work focuses on the influence of the tilts on the magnetism, we demonstrate here the reverse: the effect of the magnetism on the structural vibrations.
We have shown how the non-linear interplay of two magnetic orders can trigger significant variations of lattice motions. We expect that the presented effects are not limited to SmFeO₃, but likely exist in a vast variety of systems such as rare-earth manganites, ferrites or chromites, where magnetic transition-metal sublattices impose a magnetic ordering on rare-earth sublattices. We have shown that such non-linear coupling of magnetic orders can give rise to enhancement phenomena and even to new phases, exceeding by far the sum of the initial properties. We expect that such phenomena are not limited to magnetic orders but may play a role for a large number of interacting orders. Importantly, these effects emerge just below room temperature rather than being limited to cryogenic temperatures, which makes them attractive for potential applications. This work motivates a targeted search for hidden non-linear material responses, in experiment and theory, to achieve a conclusive picture of the microscopic interaction mechanisms at play and of their potential use in technological applications.
Methods
Sample preparation. SmFeO₃ single-crystal samples were grown in a four-mirror optical floating-zone furnace (FZ-T-10000-H-VI-P-SH, Crystal Systems Corp.) as described elsewhere [19]. Crystals of all three orthorhombic orientations were prepared, lapped to a thickness of 80 μm and polished optically flat.
Raman spectroscopy. Raman spectroscopy measurements were performed with an inVia Renishaw Reflex Raman microscope in micro-Raman mode with a 633 nm He-Ne laser. We avoided sample heating by limiting the laser power. Frequencies and FWHM of the phonon modes were obtained by fitting the Raman spectra with Lorentzian functions. During the Raman scattering experiments, the temperature of the crystals was controlled using a Linkam THMS600 stage and an Oxford Instruments Microstat for cryogenic temperatures.
Resonant ultrasound spectroscopy. The technique of resonant ultrasound spectroscopy (RUS) involves the measurement of acoustic resonances of mm-sized samples between two piezoelectric transducers and has been described in detail by Migliori and Sarrao [26]. The first transducer excites mechanical vibrations, typically in the frequency range 0.01-1 MHz, and the second detects resonances at frequencies which depend on the size, shape and density of the sample and on the values of its elastic moduli.
Individual peaks in the primary spectra are fitted to determine their frequency, f, and width at half maximum height, Δf. Each resonance is typically dominated by shearing motions, and the square of the resonance frequency scales with different combinations of the (predominantly shear) elastic moduli. Acoustic loss is expressed in terms of the inverse mechanical quality factor, Q⁻¹, which is taken to be Δf/f. Two different instruments were used for measurements above and below room temperature. In the low-temperature instrument, the sample sits directly between the transducers and the holder is lowered into a helium-flow cryostat [52]. A few mbar of helium are added to the sample chamber to assist thermal equilibration between the sample and the cryostat. In the high-temperature instrument, the sample sits between the tips of alumina buffer rods which are inserted into a horizontal resistance furnace, with the transducers attached to the ends of the rods outside the furnace [53].
An irregular fragment with dimensions of ~1 mm³ and a mass of 0.0158 g was selected for study on the basis that it did not show any externally visible cracks. Spectra containing 65,000 data points were collected in automated cooling/heating sequences using a settle time of 20 min at each set point to allow for thermal equilibration. Liquid nitrogen was used for cooling down to ~110 K. For the high-temperature measurements, the sample was held in an argon atmosphere. Selected peaks in the primary spectra were fitted with an asymmetric Lorentzian function to extract values of f and Δf as a function of temperature, using the software package Igor (Wavemetrics).
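As an illustration of this peak analysis, the following minimal Python sketch fits a single resonance with a plain (symmetric) Lorentzian and converts the result to the inverse quality factor Q⁻¹ = Δf/f. It is a simplified stand-in for the asymmetric-Lorentzian fits performed in Igor; the function and variable names are our own.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(freq, amp, f0, fwhm, offset):
    """Symmetric Lorentzian line shape centred at f0 with full width fwhm."""
    return amp * (fwhm / 2)**2 / ((freq - f0)**2 + (fwhm / 2)**2) + offset

def fit_resonance(freq, signal):
    """Return resonance frequency f, width Δf and Q⁻¹ = Δf/f for one peak."""
    p0 = [signal.max() - signal.min(),        # amplitude guess
          freq[np.argmax(signal)],            # centre-frequency guess
          0.01 * (freq[-1] - freq[0]),        # width guess
          signal.min()]                       # baseline guess
    (amp, f0, fwhm, offset), _ = curve_fit(lorentzian, freq, signal, p0=p0)
    return f0, fwhm, fwhm / f0
```

The squared fitted frequencies, f², can then be tracked against temperature, since they scale with combinations of the elastic moduli as described above.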
Data availability
The Raman spectroscopy and resonant ultrasound spectroscopy data generated in this study have been deposited in the Research Collection database of ETH Zurich under accession code https://doi.org/10.3929/ethz-b-000512405 [54].
"Physics"
] |
The ternary phase diagram of nitrogen-doped lutetium hydrides cannot explain the claimed high-Tc superconductivity
This paper presents the results of an extensive structural search of ternary solids containing lutetium, nitrogen and hydrogen. Based on thousands of thermodynamically stable structures, the convex hull of the formation enthalpies is constructed. To obtain the correct energetic ordering, the highly accurate RSCAN DFT functional is used in high-quality all-electron calculations, eliminating possible pseudopotential errors. In this way, a novel lutetium hydride structure (HLu2) is found that lies on the convex hull. An electron-phonon analysis, however, shows that it is not a candidate structure for near-ambient superconductivity. Apart from this structure, which appears to have been missed in previous searches, possibly due to different DFT methodologies, our results agree closely with previously published structure-search efforts. This shows that the field of crystal structure prediction has matured to a state where independent methodologies produce consistent and reproducible results, underlining the trustworthiness of modern crystal structure predictions. Hence it is quite unlikely that a structure exists that would, within standard BCS theory, give rise to the superconducting properties claimed to have been observed by Dasenbrock-Gammon et al (2023 Nature 615 244). This solidifies the evidence that structures with high-Tc conventional superconductivity, which could give rise to the experimental claims, do not exist in this material.
Introduction
In their recent publication, Dasenbrock-Gammon et al [1] claim to have experimentally observed superconductivity in bulk nitrogen-doped lutetium hydride (Lu-N-H) at a Tc of 294 K and at a pressure of 1 GPa. Since no detailed analysis is given of the structure that is claimed to be superconductive at near-ambient conditions, an explanation of the mechanism that could lead to the observed superconductivity is missing. The mystery of the exact composition and structure of the putative superconductor has raised great interest in Lu-N-H structures throughout the entire materials science and solid-state physics community.
The reaction of the community to the news of another room-temperature superconductor from Dias and coworkers was prompt. Already a few days later, Shan et al [2] published their experimental study of pressure-induced color changes in LuH2. The observed color changes in the samples are similar to the ones presented in [1], but resistivity measurements showed no signs of superconductivity above 1.5 K. One of the first theoretical studies on the Lu-N-H system was conducted by Liu et al [3]. Their work also focused on lutetium hydrides. In order to investigate the convex hull of Lu-H, the evolutionary structure prediction algorithm from the USPEX [4] package was used. Liu et al found LuH2 to be the most stable lutetium hydride, and they conclude that LuH2 is the parent structure when lutetium hydrides are doped with nitrogen. Dangić et al [5] also studied lutetium hydrides, investigating Raman spectra, phonon band structures and optical properties. They find that LuH2 is the only structure that can explain the color change seen in many experiments and conclude that it is the phase synthesized in most experiments. Both Huo et al [6] and Xie et al [7] performed a ternary Lu-N-H structure search in which only binary structures were found on the convex hull. A subsequent electron-phonon analysis in [6] shows that no high-Tc structures were found by Huo et al. An overview of the Lu-N-H convex hull can be found in the recent work of Ferreira et al [8], where they present the results of a detailed structure search at ambient pressure. In the study of Ferreira et al, the configurational space of the ternary Lu-N-H system was investigated thoroughly using the USPEX [4] evolutionary search method and the AIRSS [9] random structure search method. In the evolutionary search with USPEX, Ferreira et al calculated energies and forces at the density functional theory (DFT) level, and in the random structure search with AIRSS, ephemeral data-derived potentials [10] were used. An electron-phonon analysis of the best candidate structures for room-temperature superconductivity from Ferreira et al disagrees with the observation of near-ambient superconductivity made by Dasenbrock-Gammon et al [1]. Based on their results, Ferreira et al conclude that the observations made by Dasenbrock-Gammon et al [1] cannot be explained by the electron-phonon mechanism that describes conventional superconductivity.
Given that these studies [3, 8, 11] have investigated the configurational and compositional space of Lu-N-H thoroughly, the excitement about the Lu-N-H superconductor has been dampened considerably, as the observations made by Dasenbrock-Gammon et al [1] could not be explained using current state-of-the-art theoretical materials science methods.
There are basically three possible explanations for this disagreement between theory and experiment: • Dasenbrock-Gammon et al [1] observed unconventional superconductivity.
• There is an error in the experimental setup of Dasenbrock-Gammon et al [1].
• The correct structure was not found in all theoretical structure searches.
In this paper we present the results of an independent structure search in the ternary Lu-N-H phase diagram, further ruling out the last possibility that an important structure was overlooked. All presented final results were obtained with the regularized SCAN (RSCAN) functional [12], which is widely considered to be the most accurate functional for cohesive energies. Well-tested pseudopotentials for this functional are, however, scarce. To eliminate any pseudopotential errors, we have therefore performed highly accurate all-electron calculations. Our results are therefore expected to be more accurate than previous results. The same approach has recently been used in a large-scale structure search [13] for the putative carbonaceous sulfur hydride superconductor [14].
Our results solidify the conclusions from the previous studies [3,8,11] that no conventionally superconducting structure can be found in the ternary Lu-N-H phase diagram.
Structure search with minima hopping
The phase diagram of ternary Lu-N-H structures was explored using the minima-hopping method [15-20]. Minima hopping is a method that reliably finds the global minimum of a potential energy surface using a combination of variable-cell-shape molecular dynamics [21] along soft modes of the potential energy surface and variable-cell-shape geometry optimization [22]. Since it is not required to generate a thermodynamic distribution, it can escape from any funnel by crossing high energy barriers. Because of that, minima hopping will always find the global minimum, given a sufficiently long simulation. Other methods, such as evolutionary search algorithms, introduce moves to generate new structures that can be insufficient to escape from a deep funnel.
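The feedback logic of the method can be illustrated with a toy, runnable Python sketch on a one-dimensional model potential (this is our own schematic illustration, not the DFT setup used in this work): local relaxations alternate with random kicks whose size grows whenever a known minimum is revisited, so the walker eventually escapes every funnel.

```python
import numpy as np
from scipy.optimize import minimize

def potential(x):
    """Toy 1-D energy landscape with many local minima."""
    x = np.atleast_1d(x)[0]
    return 0.1 * x**2 + np.sin(3.0 * x)

def minima_hopping(x0=0.0, n_target=8, e_kin=0.5, e_diff=0.2,
                   beta=1.1, alpha=1.1, seed=0):
    rng = np.random.default_rng(seed)
    current = minimize(potential, x0).x[0]
    found = {round(current, 3)}                      # history of visited minima
    while len(found) < n_target:
        kick = rng.normal(scale=np.sqrt(e_kin))      # stand-in for soft-mode MD
        cand = minimize(potential, current + kick).x[0]
        if round(cand, 3) in found:
            e_kin *= beta                            # revisited: escape harder
            continue
        found.add(round(cand, 3))
        e_kin /= beta
        if potential(cand) - potential(current) < e_diff:
            current = cand                           # accept the move
            e_diff /= alpha                          # tighten acceptance
        else:
            e_diff *= alpha                          # reject: loosen acceptance
    return sorted(found)
```

The essential design point is the self-adjusting feedback on e_kin and e_diff, which lets the search cross arbitrarily high barriers without requiring a thermodynamic sampling distribution.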
In the minima-hopping runs, energies, forces and the stress tensor were calculated at the DFT level with the standard PBE functional [23] and the SIRIUS library [24, 25], a GPU-accelerated and MPI-parallelized plane-wave code. Ultrasoft pseudopotentials [26] were used to eliminate the core electrons. A plane-wave cutoff of 1400 eV was used and a tight 4 × 4 × 4 Monkhorst-Pack [27] k-point grid was chosen.
In total, 108 different stoichiometries were sampled at a pressure of 1 GPa. To ensure convergence of the minima-hopping method, the search was only stopped after 25,000 distinct local minima had been found. On average, 230 different minima were found for every stoichiometry.
All electron calculations
In order to increase the accuracy of the DFT calculations from section 2.1, the 20 lowest minima within an energy range of 50 meV per atom of the ground state of each stoichiometry were recalculated with a more precise DFT method. With these criteria, 1600 structures were selected for further processing. The error introduced by the pseudopotentials was eliminated by performing an all-electron calculation, and the error from the PBE functional was reduced by using the accurate RSCAN exchange-correlation functional [12, 28, 29].
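A hedged sketch of this pre-selection step (the dictionary layout and names are assumptions made for illustration; energies are in eV per atom):

```python
def select_candidates(minima_per_stoich, window=0.050, max_keep=20):
    """Keep at most `max_keep` lowest minima lying within `window` eV/atom
    of each stoichiometry's ground state."""
    selected = {}
    for stoich, energies in minima_per_stoich.items():
        lowest = min(energies)
        keep = sorted(e for e in energies if e - lowest <= window)[:max_keep]
        selected[stoich] = keep
    return selected

# e.g. select_candidates({"H5Lu4N2": [-1.20, -1.18, -1.10, -0.90]})
#      -> {"H5Lu4N2": [-1.20, -1.18]}   (energies illustrative only)
```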
FHI-aims [30-32] was used for a geometry optimization of the 1200 systems with the previously mentioned settings, a 5 × 5 × 5 Γ-centered k-point grid and the tier-2 basis set. The resulting energies are our most accurate ones. They reduce errors in the energetic ordering of the 1200 lowest systems and therefore the chance of finding the wrong ground state. All energies used in the convex hull plots of this paper were obtained using this procedure.
Comparison between the high-performance and high-accuracy DFT calculations
The energies of the most promising structures found with the minima-hopping method were recalculated with all-electron DFT simulations using the RSCAN functional. The difference in formation enthalpy between the plane-wave calculations with the PBE functional and the all-electron calculations with the RSCAN functional is displayed in figure 2. For most stoichiometries, the error is between 50 meV per atom and 100 meV per atom. The energy difference between the pseudopotential and the all-electron calculations is rather large. Nevertheless, errors of this magnitude are not uncommon in DFT calculations. The good correlation in the formation enthalpies of the nitrogen hydrides indicates that lutetium is responsible for a large part of the energy error.
Even though the energetic error of the pseudopotential calculations is rather large, the energetic ordering of the structures on the convex hull is surprisingly good, and the ground-state structures were predicted correctly by the pseudopotential calculations.
Lutetium hydrides
In order to get an initial impression of the Lu-N-H system, a structure search for binary lutetium hydrides was first conducted, and the formation enthalpies were likewise verified using highly accurate all-electron calculations. The convex hull of the binary system is displayed in figure 3. H3Lu, H2Lu and HLu2 all lie on the convex hull of formation enthalpies, which makes them thermodynamically stable. HLu is only 16 meV per atom above the convex hull, which is within the typical uncertainty of DFT calculations. Therefore, HLu may also be thermodynamically stable. The high-accuracy DFT calculations presented in section 3.1 show that HLu2 is on the convex hull. The structures and the electronic densities of states of H3Lu, H2Lu and HLu2 are displayed in figures 4(a)-(c) and (e). Because neither Liu et al [3] nor Ferreira et al [8] found an HLu2 structure on the convex hull, we decided to calculate its Tc using Quantum ESPRESSO [33] and the Allen-Dynes equation [34]. The calculations were done using the same procedure as in [13]. To calculate the Tc of HLu2, a plane-wave cutoff of 1370 eV, a 16 × 16 × 16 k-grid (a spacing of ~0.03 Å⁻¹) and a 4 × 4 × 4 q-grid (a spacing of ~0.13 Å⁻¹) were used. The resulting Tc is 0.5 K, obtained using the value 0.1 for µ*. In order to provide more information about the HLu2 structure, its electronic density of states and phonon dispersion are shown in figures 5 and 6.
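For reference, one common form of the Allen-Dynes (modified McMillan) estimate used here can be written as a one-line function; lambda_ep and omega_log come from the electron-phonon calculation, mu_star = 0.1 as above, and the example numbers below are placeholders rather than the HLu2 values:

```python
import math

def allen_dynes_tc(omega_log_K, lambda_ep, mu_star=0.1):
    """Allen-Dynes Tc in kelvin; meaningful only when the electron-phonon
    coupling satisfies lambda_ep > mu_star * (1 + 0.62 * lambda_ep)."""
    exponent = (-1.04 * (1.0 + lambda_ep)
                / (lambda_ep - mu_star * (1.0 + 0.62 * lambda_ep)))
    return omega_log_K / 1.2 * math.exp(exponent)

# e.g. a weak coupler with omega_log = 200 K and lambda = 0.4:
# allen_dynes_tc(200.0, 0.4) -> ~0.8 K, i.e. far from room temperature
```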
Ternary lutetium nitrogen hydrides
The convex hull of formation enthalpies of the Lu-N-H system is displayed in figure 1. There is only one ternary stoichiometry on the convex hull: H5Lu4N2. Additional electron-phonon calculations were done for H2Lu5N, H3Lu4N, H3Lu4N2, H4Lu4N2, H3Lu9N and H5Lu9N, because their convex hull distance is smaller than 50 meV per atom and they exhibit a moderately pronounced singularity at the Fermi level. The highest Tc calculated for these structures is 5 K, for H3Lu4N.
Our results indicate that nitrogen-doped LuHx crystals are thermodynamically unstable, since they all lie above the convex hull of formation enthalpies. Beyond that, no ternary Lu-N-H structure with a low convex hull distance and interesting features in the electronic density of states was found in our structure search. Our results therefore suggest that Lu-N-H compositions are not responsible for the high Tc measured by Dasenbrock-Gammon et al [1], or that unconventional superconductivity, which cannot be described by our theory, was observed by Dasenbrock-Gammon et al [1].
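A minimal sketch of such a distance-to-hull evaluation is shown below for the binary Lu-H case (the ternary case is analogous, one dimension higher). Input points are the hydrogen fraction x = nH/(nH + nLu) and the formation enthalpy per atom; the implementation and the sample enthalpies are illustrative, not the production code or values behind figure 1.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_distance(x, dH):
    """Distance (eV/atom) of each phase above the lower convex hull."""
    pts = np.column_stack([x, dH])
    hull = ConvexHull(pts)
    # keep only lower-hull edges: their outward normal points downward in energy
    lower = [s for s, eq in zip(hull.simplices, hull.equations) if eq[1] < 0]
    dist = np.full(len(x), np.inf)
    for i, j in lower:
        (x1, e1), (x2, e2) = pts[i], pts[j]
        for k, (xk, ek) in enumerate(pts):
            if x1 != x2 and min(x1, x2) <= xk <= max(x1, x2):
                e_hull = e1 + (e2 - e1) * (xk - x1) / (x2 - x1)
                dist[k] = min(dist[k], ek - e_hull)
    return dist

# Elemental endpoints at dH = 0 plus compounds; x = nH/(nH + nLu), so
# HLu2 = 1/3, H2Lu = 2/3, H3Lu = 3/4, HLu = 1/2 (dH values made up):
x  = np.array([0.0, 1.0, 1/3, 2/3, 3/4, 1/2])
dH = np.array([0.0, 0.0, -0.18, -0.30, -0.28, -0.12])
# hull_distance(x, dH) -> 0 for phases on the hull, > 0 for unstable ones
```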
Conclusion
Theoretical structure prediction is by no means an easy or routine task, especially for complex ternaries such as Lu-N-H involving rare elements such as Lu. It is therefore reassuring that two independent studies, based on completely different methodologies and codes, come to comparable conclusions. The only notable difference between our study and the previous studies is that we have identified HLu2 to be on the convex hull. Our calculation of the Tc of HLu2 shows that it is not responsible for the high Tc measured by Dasenbrock-Gammon et al [1]. Otherwise, very similar structures were found, which lead to a comparable convex hull. Only a few stoichiometries (H2Lu, H3Lu, HLu2, LuN and H5Lu4N2) lie on this convex hull at 1 GPa of pressure. This is consistent with prior structure searches [3, 8, 11]. Firstly, this highlights the fact that modern crystal structure prediction methods have reached a level of maturity where consistent and reproducible results can be expected. Secondly, the fact that two studies based on independent methodologies come to the same central conclusion reduces the risk that a bias was introduced during the structure search and that a possible structure with superconducting properties was overlooked. The theoretical results from our study and from previous structure searches [3, 8, 11] disagree with the experimental observation made by Dasenbrock-Gammon et al [1], since no structure was found that can explain the claimed room-temperature superconductivity. This allows us to conclude, with very high certainty, that no conventionally superconducting Lu-N-H or Lu-H structure exists.
Structural data
Low-enthalpy Lu-N-H structures, their electronic densities of states and convex hull distances can be found in this GitHub repository: https://github.com/moritzgubler/H-Lu-N. The enthalpies of these structures were calculated as described in section 3.1.
Figure 1. Formation enthalpy difference to the convex hull in meV per atom at 1 GPa of pressure. The black lines indicate the convex hull.
Figure 2. Difference in formation enthalpy for the ground state of each stoichiometry between a PBE pseudopotential calculation and an all-electron RSCAN calculation at 1 GPa of pressure.
Figure 3. A comparison of the binary convex hull of the Lu-H system calculated with the high-throughput and high-accuracy DFT methods. The convex hull is displayed by the black lines.
H5Lu4N2 has an orthorhombic lattice with cell parameters a = 3.42 Å, b = 5.57 Å and c = 5.90 Å. It is pictured in figure 4(d), and a plot of its electronic density of states can be found in figure 4(e). The density of states has no Van Hove singularity at the Fermi level or any other special features, which indicates that it is an ordinary metal.
Figure 4. A selection of structures that lie on the convex hull at 1 GPa of pressure and their electronic densities of states calculated with an all-electron RSCAN simulation. Hydrogen is pictured in white, nitrogen in blue and lutetium in red, and the Fermi level is shifted to zero.
Figure 5. Electronic band structure and density of states of HLu2. The dashed line shows the Fermi energy of 14.54 eV.
Figure 6. Phonon dispersion and density of states of HLu2.
"Physics",
"Materials Science"
] |
miRNA‐532‐5p functions as an oncogenic microRNA in human gastric cancer by directly targeting RUNX3
Abstract Accumulating data reveal that microRNAs are involved in gastric carcinogenesis. To date, no information was reported about the function and regulatory mechanism of miR‐532‐5p in human gastric cancer (GC). Thus, our study aims to determine the role and regulation of miR‐532‐5p in GC. Here, we found that transient and stable overexpression of miR‐532‐5p dramatically increased the potential of colony formation and migration of GC cells, decreased the percentage of cells in G1 phase and cell apoptosis in vitro, and increased the weight of mice lungs and number of lung xenografts in vivo. Gain‐of‐function, loss‐of‐function and luciferase activity assays demonstrated that miR‐532‐5p negatively regulated the expression of RUNX3 and its targets directly. We also found that miR‐532‐5p level was negatively correlated with RUNX3 gene expression in various GC cell lines. Our results indicate that miR‐532‐5p functions as an oncogenic miRNA by promoting cell growth, migration and invasion in human GC cells.
Introduction
Gastric cancer is one of the most common human malignant tumours and ranks second in terms of global cancer-related deaths [1]. Clinically, the absence of specific symptoms renders early diagnosis of this deadly disease difficult, so most patients are diagnosed at advanced stages. Improvements in diagnosis and treatment have resulted in good long-term survival for patients with early GC, whereas the prognosis of patients with advanced GC is still poor [2]. Therefore, further studies are necessary to better understand the pathogenesis of GC. Recent studies have revealed that microRNAs (miRNAs) are novel regulators of tumour progression and potential therapeutic targets in GC [3,4]. miRNAs are small, single-stranded RNAs of 18-25 nucleotides that cause target mRNA degradation or translational repression [5]. Accumulating evidence indicates that miRNAs are involved in many physiological processes, including cellular proliferation, differentiation, development and apoptosis [6,7]. A clear explanation of miRNA function and of the regulation of their targets will bring a promising future for the diagnosis and treatment of GC.
miR-532-5p is located on human chromosome Xp11.23, and mature miR-532-5p consists of 22 nucleotides. Analysis of the miR-532-5p sequence by ClustalW shows that the sequence is conserved between various species, which implies an important role in evolution. However, little information has been reported about miR-532-5p. We tried to obtain some useful information using different algorithms. At least four databases, including miRBase, TargetScan, microrna.org and Diana, indicate that bases 2-8 (the seed sequence) and bases 14-20 of miR-532-5p are completely complementary to bases 890-908 of the 3′-UTR of the mRNA of the known tumour suppressor gene RUNX3, indicating that RUNX3 is a potential target of miR-532-5p. Moreover, there is no report about the function of miR-532-5p in human GC, and our group has focused on RUNX3 in human GC [8-10]. Accordingly, our study aims to determine the role of miR-532-5p in the tumorigenesis and progression of GC and the regulation of RUNX3 by miR-532-5p.
Materials and methods
miR-532-5p mimics, inhibitor, plasmid and cell transfection

RNA extraction, reverse transcription and QRT-PCR

Total RNAs from cells or tissues were extracted using Trizol reagent (Invitrogen) according to the manufacturer's instructions. Reverse transcription of RNAs was performed with M-MLV reverse transcriptase (Fermentas, Vilnius, Lithuania). cDNA for miRNA and total cDNA were synthesized using specific miRNA primers from RiboBio and random hexamers from Fermentas, respectively. Expression of RUNX3 mRNA and mature miR-532-5p was assessed by QRT-PCR using SYBR® Premix Ex Taq™ (TaKaRa, Dalian, China). QRT-PCR was performed in an ABI7700 sequence detector (Applied Biosystems, Foster City, CA, USA). Beta-2-microglobulin and U6 small nuclear RNA were used as loading controls for the detection of RUNX3 mRNA and miR-532-5p, respectively.
Western blotting
Total cellular proteins were extracted with lysis buffer, separated by SDS-PAGE and transferred to a nitrocellulose membrane. After incubation in blocking buffer, the membrane was probed with specific antibodies against RUNX3, Bim (both from Abcam, Cambridge, UK), p21 or β-actin (both from Santa Cruz Biotechnology, Santa Cruz, CA, USA) overnight at 4°C, followed by horseradish peroxidase-conjugated IgG, and developed with an enhanced chemiluminescent reagent (Millipore, Billerica, MA, USA) according to the manufacturer's instructions.
Construction of reporter plasmid and luciferase activity assay
The 315 bp wild-type 3′-UTR of human RUNX3 mRNA containing the miR-532-5p binding site was amplified by PCR and inserted into the SpeI/HindIII sites of the pMIR-REPORT™ luciferase reporter plasmid to generate the pMIR-RUNX3/wt plasmid. The complementary sequence for the miR-532-5p seed sequence in the RUNX3 3′-UTR was mutated using a QuickChange site-directed mutagenesis kit with the pMIR-RUNX3/wt plasmid as template. The mutant was named the pMIR-RUNX3/mut plasmid. The cells were transiently cotransfected with miR-532-5p mimics and pMIR-RUNX3/wt or pMIR-RUNX3/mut. Meanwhile, the pMIR-REPORT™ β-gal control plasmid was cotransfected to normalize variability due to differences in cell viability and transfection efficiency. 48 hrs later, luciferase activity and β-galactosidase activity were determined using a dual luciferase reporter assay system and a β-galactosidase enzyme assay system (both from Promega, Madison, WI, USA).
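The normalization described above reduces to dividing the firefly luciferase activity by the β-galactosidase activity of the same well and expressing the ratio relative to the control; a minimal numeric sketch (illustrative values, with a function name of our own choosing):

```python
def relative_luciferase(firefly, beta_gal, control_ratio):
    """Firefly activity normalised by beta-gal, expressed relative to control."""
    return (firefly / beta_gal) / control_ratio

# e.g. relative_luciferase(1200.0, 800.0, 3.0) -> 0.5, i.e. ~50% of control,
# the order of reduction later reported for the wild-type 3'-UTR reporter
```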
Colony formation assay
The cells were seeded into 6-well plates (300 cells/well) and cultured for 10-14 days at 37°C in 95% air/5% CO₂. Cell colonies were fixed with methanol and stained with crystal violet, and colony foci with more than 50 cells were counted.
Cell cycle and apoptosis assay
For the cell-cycle assay, the cells were fixed with 70% ethanol and stained with propidium iodide (PI) containing RNase A in the dark. For the apoptosis assay, the cells were stained with PI and Annexin V-FITC. Thereafter, the cells were analysed using a flow cytometer (BD Biosciences, Bedford, MA, USA). Each experiment was performed in triplicate and the data were analysed with FCS Express V3.0612 software (De Novo Software, Glendale, Canada).
Cell migration assay
The cells were harvested and resuspended in serum-free RPMI 1640 medium, and 3 × 10⁴ cells were placed into chambers (Costar, Cambridge, MA, USA). The chambers were then inserted into the wells of 24-well plates and incubated. After 48 hours' culture, the cells remaining on the upper surface of the chamber were gently removed, whereas the cells adhering to the lower surface were fixed with methanol, stained with crystal violet and counted under a microscope (Olympus, Shinjuku, Tokyo, Japan).
Nude mice xenograft model
The lung tumour xenografts were established by injecting 6 × 10⁵ stable cells into the tail vein of athymic 5-6-week-old BALB/c nude mice (Peking University, Beijing, China). The mice were killed after 4 weeks and the dissected lungs were collected and weighed. Part of the lung tissue was fixed in paraformaldehyde, embedded in paraffin, sectioned at 4 μm thickness and stained with haematoxylin-eosin. The rest of the lung tissue was used to examine the expression of miR-532-5p and RUNX3. All the animal experiments were approved by the local ethics committee of Shandong University.
Statistical analysis
All experiments were repeated at least three times independently. Student's t-test was used to evaluate the data of the different treatment groups, and P < 0.05 was considered statistically significant. All data were analysed with SPSS 17.0 statistical software (SPSS Inc., Chicago, IL, USA).
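For readers without SPSS, an equivalent two-sample Student's t-test in Python would look as follows (the replicate values are illustrative, not data from this study):

```python
from scipy import stats

treated = [42, 47, 45]        # e.g. colony counts in three replicates
control = [28, 30, 26]
t, p = stats.ttest_ind(treated, control)
significant = p < 0.05        # the threshold used throughout this study
```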
Results

miR-532-5p promotes GC cell growth and migration in vitro
To determine the functions of miR-532-5p in human GC cells, several experiments were performed. The specific miR-532-5p mimics were used to enhance miR-532-5p expression (Fig. 1A). A stable cell clone was established by G418 selection and exhibited high miR-532-5p expression compared to control (Fig. 2A). The colony formation assay showed that both transient and stable overexpression of miR-532-5p led to an increase in foci number as well as size in GC cells (Figs 1B, C and 2B, C), indicating that miR-532-5p promoted cell growth. To explore the mechanisms underlying the promotion of cell growth, we examined cell cycle distribution and apoptosis after miR-532-5p treatment. Both transient and stable overexpression of miR-532-5p decreased the cell number in G1 phase, accompanied by an increase in the cell population in S phase and G2/M phase (Figs 1D, E and 2D, E). Compared to the negative control, the proliferation index of miR-532-5p-treated cells was higher (Figs 1F and 2F). Moreover, overexpressed miR-532-5p suppressed cell apoptosis (Figs 1G and 2G). Both viable apoptotic cells and non-viable apoptotic cells decreased significantly after miR-532-5p treatment (Figs 1H and 2H). Lastly, miR-532-5p overexpression induced a greater number of GC cells to migrate through the chamber membrane and adhere to the lower surface (Figs 1I, J and 2I, J). These results indicate that miR-532-5p functions as an oncogenic miRNA by promoting GC cell growth, through driving cell cycle progression and inhibiting apoptosis, and by promoting GC cell migration.
miR-532-5p promotes GC cell invasion and colonization in vivo
To determine whether miR-532-5p could trigger tumour growth in vivo, as observed in cultured cells, lung xenografts were established in nude mice. The lungs of the stable miR-532-5p expression group had a significantly larger volume and weight than those of the control group (Fig. 3A and B). HE staining of lung tissue slices displayed a greater number of tumour foci in the stable miR-532-5p expression group (Fig. 3C and D), indicating that miR-532-5p facilitated tumour cell colonization and growth. The stable miR-532-5p expression group had a higher level of miR-532-5p than the control group (Fig. 3E), whereas the RUNX3 gene expression level in the stable miR-532-5p expression group was lower than in the control group (Fig. 3F and G). These data imply that miR-532-5p triggers GC cell invasion from the vein into the lungs and strengthens GC cell colonization and growth in mouse lungs.
miR-532-5p inhibits RUNX3 gene expression at the transcriptional and translational levels
Since miRNAs exert their functions by negatively regulating the expression of their target genes, putative targets of miR-532-5p were predicted using four databases. These databases predict that RUNX3 is a potential target of miR-532-5p. Our group has been interested in RUNX3, and there is still no report that RUNX3 is a direct target of miR-532-5p in GC; thus we investigated whether miR-532-5p is capable of regulating endogenous RUNX3 expression.
Gain-of-function and loss-of-function experiments were used to test our prediction. Transfection of the specific miR-532-5p mimics induced a strong increase in mature miR-532-5p, up to seven hundred-fold compared to control (Fig. 4A), which caused a sharp reduction in RUNX3 mRNA and protein expression in these cells (Fig. 4B and C). Stable miR-532-5p expression produced results similar to the mimics treatment (Fig. 4D-F). Moreover, overexpression of miR-532-5p resulted in the reduction of proteins encoded by known RUNX3 targets such as p21 and Bim (Fig. 4C and F). On the other hand, knockdown of miR-532-5p expression by a specific miR-532-5p inhibitor increased RUNX3 gene expression (Fig. 4G-I). Accordingly, miR-532-5p is able to suppress RUNX3 mRNA and protein expression.
miRNA-532-5p expression is negatively correlated with RUNX3 gene expression in various GC cells
In addition, the expression of miR-532-5p, RUNX3 mRNA and protein in eight GC cell lines was examined by QRT-PCR and western blotting, respectively (Fig. 5A-C). The correlation between the levels of miR-532-5p and RUNX3 was analysed with SPSS 17.0 statistical software. As shown in Figure 5D and E, both RUNX3 mRNA and protein levels were negatively correlated with the expression of miR-532-5p, implying regulation of RUNX3 by miR-532-5p.
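A sketch of this correlation analysis (Pearson's r via SciPy rather than SPSS; the eight values per variable are placeholders, not the measured levels):

```python
from scipy import stats

mir_532 = [5.2, 3.1, 4.4, 2.0, 6.3, 1.2, 2.8, 3.9]   # relative miR-532-5p level
runx3   = [0.4, 1.1, 0.6, 1.8, 0.2, 2.3, 1.4, 0.8]   # relative RUNX3 mRNA level
r, p = stats.pearsonr(mir_532, runx3)
# a significantly negative r (p < 0.05) would support the inverse relation
```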
miR-532-5p targets RUNX3 directly in GC cells
To confirm that RUNX3 is a direct target of miR-532-5p, a luciferase reporter assay was performed. Wild-type and mutant RUNX3 3′-UTRs, with and without the miR-532-5p binding sequence, were cloned downstream of the firefly luciferase coding region in the luciferase reporter vector (Fig. 5F). The relative luciferase activity driven by the wild-type RUNX3 3′-UTR was reduced by about 50% after miR-532-5p mimics treatment, whereas transfection of the mutant RUNX3 3′-UTR restored the relative luciferase activity to about 90% (Fig. 5G), suggesting that the miR-532-5p binding sequence is essential for the negative regulation of luciferase expression driven by the RUNX3 3′-UTR.
Collectively, our results (Figs 4 and 5) strongly support a direct suppression of RUNX3 by miR-532-5p by means of mRNA degradation as well as translational repression.
Discussion
Recent advances have positioned miRNAs as targets for the diagnosis and antitumour therapy of human cancers, including GC. More and more miRNAs have been shown to participate in the initiation and progression of GC. Among the reported miRNAs, most function as tumour suppressors [11-13], while a few are oncogenic [14]. Accordingly, miRNA genes have been characterized as novel proto-oncogenes or tumour suppressor genes in carcinogenesis [15]. miR-532-5p was first reported in cutaneous melanoma [16]. That study found that miR-532-5p was overexpressed in cutaneous melanoma and that anti-miR-532-5p inhibited RUNX3 mRNA and protein expression. Another study found that miR-532-5p expression was lower in borderline than in benign neoplasms and significantly down-regulated in Her2/neu-positive ovarian carcinoma [17]. However, no information has been reported about the function and regulatory mechanism of miR-532-5p in other human solid tumours, including GC. The functions of miRNAs are determined by their targets, so we searched for potential targets of miR-532-5p. As listed before, four databases predict that RUNX3 is a candidate.

Fig. 3 The role of miR-532-5p in nude mice injected with stable miR-532-5p-expressing BGC-823 cells. (A) After 4 weeks of treatment with stable BGC-823 cells, the lung xenografts had developed and the lungs were dissected (the scale bar indicates the lung size and the black arrows denote tumour nodules). (B) The weight of the lungs was assessed and analysed statistically. (C) The lung tissue slices were stained with HE; representative results are shown (the scale bar represents 500 μm and the dark blue nodules represent tumour foci). (D) The number of tumour foci in the lungs was counted and analysed by Student's t-test. (E-G) The levels of miR-532-5p, RUNX3 mRNA and protein in the lungs were examined by QRT-PCR and western blotting, respectively (*P < 0.05, **P < 0.01).
Previous studies have reported that RUNX3 is absent or lowly expressed in 50-70% of primary GC and various GC cell lines because of hypermethylation of the RUNX3 gene promoter, loss of allelic heterozygosity and cytoplasmic sequestration [18-21], but these mechanisms cannot explain the inactivation of RUNX3 in patients without promoter hypermethylation, hemizygous deletion or protein mislocalization. Thus, miRNAs have become new candidates for studying the mechanism of RUNX3 inactivation. Although inhibition of miR-532-5p increased RUNX3 expression in cutaneous melanoma [16], there was still no direct evidence that RUNX3 is a direct target of miR-532-5p, nor of the regulation of RUNX3 by miR-532-5p in GC. More importantly, our group has been focusing on RUNX3 in human GC [8-10]; thus we decided to determine the role of miR-532-5p and the regulation of RUNX3 by miR-532-5p in GC.
In this study, both transient and stable miR-532-5p overexpression promoted cell growth in vitro (Figs 1B, C and 2B, C). To investigate the mechanisms underlying this cell growth, cell cycle distribution and cell apoptosis were examined. As shown in Figures 1D-H and 2D-H, miR-532-5p treatment decreased the ratio of cells in G1 phase and of apoptotic cells. Previous studies reported that Runx3 could induce p21 and Bim up-regulation, resulting in cell growth inhibition and apoptosis [22,23]; thus we explored the effect of miR-532-5p on p21 and Bim expression. As expected, overexpressed miR-532-5p decreased the levels of p21 and Bim protein (Fig. 4C and F). Overexpression of RUNX3 slightly increased the proportion of cells in G0/G1 phase (Fig. S1) and inhibited the proliferation index of GC cells, and our previous studies showed that overexpressed RUNX3 increased the ratio of apoptosis [10]. Thus, we concluded that miR-532-5p could inhibit the expression of RUNX3 and of its targets p21 and Bim, resulting in relief of cell growth inhibition and suppression of apoptosis.

To investigate the function of miR-532-5p in vivo, we injected stable cells into the flank region of nude mice. However, we did not obtain the expected subcutaneous xenografts (data not shown). We did find apparent lung xenografts after injecting stable cells into the tail vein of nude mice, compared to the control group (Fig. 3A-D). Both the increased lung weight and the greater number of tumour foci in the lungs confirmed that miR-532-5p triggered the invasion of cells from the vein into the lungs, as well as their colonization and growth there. These results can be interpreted in light of our previous study, in which overexpressed RUNX3 decreased the number of metastatic nodules in mouse lungs [9]. The high level of miR-532-5p and low level of RUNX3 in the stable miR-532-5p expression group confirmed the stable miR-532-5p expression and the regulation of RUNX3 by miR-532-5p in vivo. Although a previous study reported that overexpressed RUNX3 inhibited cell invasion in vitro [9], the more detailed mechanism underlying the reinforced invasion of GC cells after miR-532-5p treatment needs to be explored. Some fresh tissues from primary GC patients were used to determine the expression of miR-532-5p and RUNX3, but we could not obtain significant results (data not shown). Perhaps the sample size and sampling variation affected our results. We are trying to collect more tissues to determine the expression and role of miR-532-5p in GC patients.
Collectively, our present data demonstrate that miR-532-5p functions as an oncogenic miRNA in GC cells by targeting RUNX3 at both the transcriptional and translational levels in vitro and in vivo. All these results imply that miR-532-5p may play an important role in GC development and progression.
"Biology"
] |
Effect of peak sun hour on energy productivity of solar photovoltaic power system
A solar cell is a renewable energy technology that converts photons coming from the sun into electrical energy. The amount of energy that a solar cell can convert is determined by the effective insolation time. Peak sun hours (PSH) are the focus of this research. This PSH analysis aims to determine the solar energy potential of a geographical location throughout the year. The geographical location and astronomical coordinates of an area affect PSH. Therefore, the orientation of the solar panel installation, including the height, slope and latitude of the solar panel surface, needs to be considered in order to harvest maximum solar energy. The results of this study can be used by technicians in determining the orientation of solar panel installations in an area.
INTRODUCTION
The application of renewable energy has begun to increase along with the high demand for electrification and concerns about climate change. Renewable energy is also utilized at Universitas Airlangga, a campus located in East Java with a wet tropical climate, at latitude 7°16'1"S and longitude 112°47'7"E. On campus, a charging station for electric vehicles was built. To fulfill the electricity needs of the electric vehicle charging station, the station is equipped with solar panels with a capacity of 5.4 kW. To obtain peak sun hour (PSH) data, output energy data are taken from the Hoymiles microinverter every 15 minutes in the interval 05:30-18:30.
Solar cells convert solar energy into electrical energy through the photoelectric effect [1], [2]. The amount of energy that can be converted into electrical energy depends on the duration of solar irradiation and the rated power of the solar panel (watt-peak). However, the total duration of sunshine cannot all be counted as effective time. The optimum conversion of solar energy occurs during insolation around the time of maximum average irradiation, which is quantified by the PSH. PSH is a parameter that expresses the ratio of the daily solar irradiation to the standard solar radiation intensity of 1 kW/m², i.e. the equivalent number of hours per day at that standard intensity [3].
Basically, the solar insolation on the solar panel surface fluctuates: the intensity increases in the morning and decreases in the afternoon. The effective duration of solar radiation on the panels determines whether the PSH is high or low. In practice, the photovoltaic semiconductor plate receives maximum solar irradiation when the incident photons arrive perpendicular to the surface of the solar panel [4], [5]. Thus, PSH has a value of 3-7 hours per day, depending on the geographical and astronomical location of an area and the slope of the solar panel surface [6]-[10].
The main focus of this research is to analyze the actual value of PSH in Indonesia, using the location of Universitas Airlangga, Surabaya, as the analysis point. Second, the PSH value determined from field data can be used to design a reliable solar photovoltaic power system in Indonesia. Analyzing the PSH value helps to determine the size and configuration of the solar panel array needed for the system. In addition, it can also be used to predict and optimize energy production in solar photovoltaic power systems.
METHOD

2.1. Photovoltaic effect
The conversion of solar energy into electrical energy occurs in photovoltaic semiconductor materials through a photoelectric process [11], [12].
E_k = W - W_0    (1)

where E_k is the kinetic energy available to the freed electrons to generate electricity, W is the photon energy and W_0 is the threshold energy of the material. The photon energy is obtained from:

W = hf = hc/λ    (2)

where h is Planck's constant, c is the speed of light (3 × 10⁸ m/s), λ is the wavelength in meters and f is the wave frequency (Hz).
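A hedged numeric sketch of (1) and (2), computing the photon energy from the wavelength and the surplus kinetic energy above a material's threshold energy (the constants and the 500 nm example are illustrative):

```python
H = 6.626e-34   # Planck constant, J s
C = 3.0e8       # speed of light, m/s

def photon_energy(wavelength_m):
    """W = h*c/lambda, in joules (Eq. (2))."""
    return H * C / wavelength_m

def kinetic_energy(wavelength_m, threshold_J):
    """E_k = W - W_0 (Eq. (1)); zero if the photon is below threshold."""
    return max(photon_energy(wavelength_m) - threshold_J, 0.0)

# e.g. a 500 nm photon: photon_energy(500e-9) ≈ 4.0e-19 J (≈ 2.5 eV)
```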
Solar declination
The solar declination (δ) is read from the globe coordinates relative to the equatorial plane. The declination is the angle between the equatorial plane and the line drawn from the center of the earth to the center of the sun. As the earth orbits the sun on its tilted polar axis, the declination varies within 0°-23.45° of either side of the equator [13]. For a given Julian day n it can be approximated by the standard relation:

δ = 23.45° × sin[(360/365)(284 + n)]    (3)

where n is the Julian day.
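Equation (3) is straightforward to evaluate; a small Python helper (shown only to illustrate the calculation):

```python
import math

def solar_declination(n):
    """Solar declination in degrees for Julian day n (Eq. (3))."""
    return 23.45 * math.sin(math.radians(360.0 / 365.0 * (284 + n)))

# solar_declination(172) ≈ +23.4 (June solstice)
# solar_declination(355) ≈ -23.4 (December solstice)
```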
Latitude and longitude effect
The intensity of the sun received is determined by the location of the area on the earth's surface [14], [15]. The position refers to the astronomical coordinates of latitude and longitude. Figure 1 below explains the solar path at the selected location. The solar path diagram illustrates the sun's azimuth versus its elevation in degrees over an annual cycle. It can be used as a reference overview of the solar resource at a particular location.
Tilt
Solar energy can be converted maximally into electrical energy when the surface of the solar panel is perpendicular to the direction of the sun's rays (the source). The orientation of the solar panel installation must take into account the position of the sun and the latitude and longitude of the site [16]-[18]. This is because each region has a different position in geographical and astronomical terms.
The optimal tilt can be estimated from:

β_opt = φ - δ    (4)

where β_opt is the optimal tilt angle, φ is the latitude of the site and δ is the declination angle. Figure 2 shows the tilt angle of the solar cells installed at Universitas Airlangga. The tilt of the solar panels was measured by applying a magnetic water pass (spirit level) directly to the solar panel; the result can be cross-checked against (4).
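A sketch of (4) for the site studied here (the latitude value approximates the coordinates quoted in the introduction; the equinox example is arbitrary):

```python
def optimal_tilt(latitude_deg, declination_deg):
    """Tilt angle (deg) that faces the panel toward the midday sun (Eq. (4))."""
    return latitude_deg - declination_deg

# Surabaya lies at latitude ≈ -7.27°; around the equinox (declination ≈ 0°):
# optimal_tilt(-7.27, 0.0) -> -7.27, i.e. a ~7° tilt with the panel facing north
```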
Determine peak sun hour
PSH is taken as the solar irradiation interval from the moment the energy output rises above 60% of its peak until it falls back below 60% of the peak [19], [20]. The PSH value can be obtained using the nominal peak power equation (5): after the maximum output value is found, the PSH value is determined by substituting the peak power value into the equation.
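Applied to power samples logged every 15 minutes (0.25 h), the 60%-of-peak rule can be sketched as follows (the sample array is illustrative; the field data in Fig. 7 give about 4.5 h):

```python
import numpy as np

def peak_sun_hours(power_w, step_h=0.25, threshold=0.60):
    """Hours during which the output stays at or above 60% of peak power."""
    above = power_w >= threshold * power_w.max()
    return above.sum() * step_h

# Illustrative 15-minute samples over part of a day (watts):
power = np.array([0, 300, 1200, 2600, 3900, 4800, 5100, 4900,
                  4300, 3200, 1900, 700, 100], dtype=float)
# peak_sun_hours(power) -> 1.5 for this toy array
```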
Irradiance factor
Solar irradiance is the power per unit area received from the sun in the form of electromagnetic radiation, measured over the wavelength range of the measuring instrument [21]. Irradiance can be measured in space or at the earth's surface after atmospheric absorption and scattering. Irradiance data play a role in predicting the energy productivity of solar power plants. The distribution of irradiation levels across the Java archipelago is described in Figure 3, with the selected location represented by the blue mark. The figure shows the irradiation factor on a color scale from yellow (lower) to orange (higher values).
RESULTS AND DISCUSSION
This section presents the PSH analysis of the solar cells installed at Universitas Airlangga. The previous section explained the method used to obtain the PSH value and the factors that affect it, such as the photoelectric effect, solar declination, longitude and latitude, tilt of the surface, and the irradiance factor. Using the described method, a graph of the PSH value at the observation location is presented. The photovoltaic system installed at the observation location is equipped with monocrystalline solar cells, shown in Figure 4(a), and a panel box with a display presenting the solar measurement parameters, shown in Figure 4(b).
Peak sun hours analysis
The energy output produced by the solar cells differs from day to day. The fluctuating energy output is due to the PSH factor [22]. A lower PSH value increases the number of solar cells required. The location of Universitas Airlangga, Surabaya, Indonesia, has an average PSH value of 4.5 hours/day.
Figure 5 shows the data for calculating the PSH of the 5.4 kWp solar array, calculated manually from real-time solar power station monitoring (S-Miles Cloud). The values shown in the graph are the PSH values measured at 12-hour intervals. The graph shows that the PSH value fluctuates. This fluctuation is caused by weather factors that can make insolation sub-optimal, resulting in low solar irradiation values.
Figure 6 shows the energy output that can be used for the electrification of electricity needs. The measurements were obtained in real time over 3 months. The results shown are the average energy output over the PSH interval (4.5 hours), consistent with the PSH value found in this study. Based on the results shown in Figures 5 and 6, PSH and energy output are directly proportional: a low PSH in a given period leads to a low energy output, and vice versa. Figure 7 shows the power output from sunrise to sunset. The hourly power output shown is the average over 3 months. The first red line in Figure 7 (on the left) marks the start of the PSH window, where more than 60% of the peak power is reached, while the second red line (on the right) marks the end of the PSH window, where the power falls below 60% of the peak. Based on this average, the PSH zone can be identified as the graph area bounded by the red lines. The PSH of the solar cells spans the interval from 09:30 to 14:00 (4.5 hours). This PSH value is used to determine the optimal hours for converting solar energy into electrical energy.
Energy output from solar cell
The on-grid photovoltaic system installed in the Universitas Airlangga study area has a capacity of 5.4 kWp on a 24 V system. The normalized energy, performance ratio, global incident irradiation in the collector plane, and power injected into the grid were analyzed using the PVsyst software. The PVsyst orientation settings were adjusted to include the geographical and astronomical coordinates, the slope of the solar cell surface, and one year of solar irradiation data for the area (Universitas Airlangga, East Java). Thus, the relationship between the PSH value and the energy output of the solar panels can be analyzed with the software.
Figure 8 shows the system's month-by-month energy production over a year, i.e. the system's daily useful energy referred to the nominal power, together with the losses that occurred. These losses include collection losses caused by thermal effects, wiring, shading or other inefficiencies [23], [24], and system losses, which in the present case arise from inverter inefficiencies [25]-[27]. Figure 9 shows the system's effectiveness in producing energy under continuous operation. Figure 10 presents the daily photovoltaic production, showing the correlation between daily irradiation and daily system productivity. Figure 11 shows the accumulation of all energies registered by the system during the simulation period, together with the instantaneous output power injected into the grid [28]-[30].
CONCLUSION
PSH is an indicator that determines the amount of energy output expected from a solar panel installation, in particular the 5.4 kWp array at Universitas Airlangga, Surabaya, East Java. Based on PSH data collection using the observation method, the average PSH in the observation area is 4.5 hours. This PSH value yields an average energy of 3.28 kWh/day and a performance ratio of 0.831. Thus, the PSH analysis can serve as a reference for technicians and renewable energy consumers in solar panel installations.
Figure 3. Solar map of the Java Archipelago (solar irradiance map based on data from 2021, ENERGYDATA.INFO) (a), and a panel box with a display presenting the solar measurement parameters, which are shown in Figure 4 (b).
Figure 4. 5.4 kWp photovoltaic power plant installation at Universitas Airlangga: (a) the photovoltaic array, and (b) the control panel.
Figure 7. Energy output based on PSH data.
Figure 8. Energy productivity every month.
Figure 9. Monthly performance ratio.
Figure 10. Daily energy supply to daily irradiation.
Figure 11. Power and energy supply. | 2,711.8 | 2022-10-01T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Sexual Rehabilitation of People with Physical Disabilities: Sexuality and Spinal Injury
This article intends to discuss the sexuality of people with physical disabilities, focusing on sexual rehabilitation. It is based on a comprehensive review intended to explore fundamental concepts, theoretical reflections and practices on this theme, dealing with: (a) concepts about sexuality and disability, (b) the sexuality of people with physical disabilities and (c) the sexual rehabilitation of people with a spinal injury.
"Normal" contains certain sexual patterns which, in our society, are related to questions like: being heterosexual, having a thin and skinny body, having sexual and reproductive health and having a sexually functional response. We want "normal" sexuality based on these standards, believing that if we're "adequate", we'll feel pleasure and happiness (Maia, 2009). It's important to point out that not corresponding to certain sexual standards imposed by societies does not make somebody asexual, but it can result in fragmented sexual expression and cause unhappiness and social maladjustment. Thus, functional and dysfunctional social practices reflect a notion of normalcy and ideology. Another important concept is disability as a social construction, because the organic and functional limits of the human body correspond to disadvantageous representations when society is based on the notion of productivity and competitiveness (Aranha, 1995; Amaral, 1995; Marques, 1997; Omote, 1999; Ross, 1998; Tomasini, 1998). I am referring to the exclusion of those who possess unequal conditions for productivity, such as the elderly and people with deficiencies. "Disability" refers to a series of general conditions that limit someone's life biologically, psychologically or socially throughout their development (Maia, 2006). It emerges as something that separates the subject from normalcy; it is considered a deviation, placed in a condition of "defectiveness", "insufficiency" and "imperfection". The way in which such differences are judged reflects how we conceive what is and what should be normal and healthy (Amaral, 1995; Maia, 2009; Omote, 1999; Tomasini, 1998; Siebers, 2008; Sorrentino, 1990). Therefore, even though the disability and the difference appear in a biological body or atypical behavior, it can only be considered a "disability" as a social phenomenon that is out of the subject's control, and not intrinsic to them. It is society that judges and classifies that as a disability or not, and establishes the parameters of what it means to host that difference in comparison to everybody else (Amaral, 1995; Aranha, 1995; Omote, 1999). That judgment results in stigmatization (Goffman, 1988). Generally, social opinion places disability as a condition of disadvantage based on socially undesirable attributes. It's evident that disability isn't just a mere detail, but a label, a stamp that makes its subject deal with a series of difficulties and a constant fight for equal rights and favorable conditions in order to be a conscientious citizen (Maia, 2006; 2011). Siebers (2008) claims that disability is a minority identity that has historically been seen as a condition that is a target for medical intervention, but it should be understood as a product of society, constructed in the context where it manifests itself. The disability isn't a personal and individual problem, but a social and collective one (Edwards, 1997; Maia, 2011; Mitchell & Snyder, 1997; Priestley, 2001; Siebers, 2008). Thus, we can grasp that the concept of disability is created and maintained by society; that is, the belief about the phenomenon is social, cultural and historical (Amaral, 1995; Marques, 1997; Omote, 1999; Ribas, 1998; Siebers, 2008; Tomasini, 1998). According to Aranha (1995, p. 69), "those who don't correspond to the efficiency/production parameters will naturally be worthless by becoming contradictions to the system exposing its limitations". Above all, it's because of this that it is necessary to understand - and reflect - about
the prevailing concepts in relation to disabled people, disabilities and deficiencies in our society at this historic moment. Certainly, despite the advance represented by the inclusive paradigm, in practice there aren't any guarantees of accomplishing the best possibilities of developing a healthy life worthy of a conscientious citizen in relation to education, work, and also sexuality (Maia, 2006; 2011). It's possible to deduce that one of the great barriers to inclusion is stigma, and this, as with all prejudice, also disregards diversity with respect to sexuality. Considering sexuality and disability as social conditions means thinking about the biological body in such terms that sexual rehabilitation results in questioning which social meanings are subjective components and which are feelings of personal inadequacy.
The sexuality of people with physical disabilities
The capacity to love and be loved and the erotic desires that are inherent to human beings are preserved under any limitation; that is to say, no human being loses their sexuality, even when they have certain motor or physiological restrictions. Many times, however, social prejudice makes sexuality seem impossible for people with disabilities. "It's necessary to be clear that sexuality does not depend on the existence of incapacity; in other words, sexuality is inherent to human beings; the differences occur in the manifestation of the sexual activity, which can be modified in some cases. Disability definitely isn't synonymous with asexuality or sexual problems" (Pinel, 1999, p. 214-215). The greatest difficulties in the expression of sexuality in the case of people with physical disabilities refer to social questions more than to organic limitation. The main issues are prejudice, misinformation, discrimination, inability, lack of adequate sexual orientation, an insufficient or inadequate process of sex education by the family, disbelief in the capacity of disabled people to express their sentiments and sexual desires, and values and distorted ideas associated with physical disabilities (Blackburn, 2002; Pinel, 1999; Maia, 2006; 2011). The sexuality of the disabled person is a multi-faceted phenomenon involving economic, political, cultural and educational questions (Fróes, 2000; Maia, 2006). Additionally, people with a disability suffer the effects of standards of beauty, perfection and happiness, especially when they are women. Many people with a disability incorporate the expectations of sexual standards and internalize the even more difficult task of reaching them when the disability exists (França & Chaves, 2005; Louro, Faro & Chaves, 1997; Maia, 2011; Sorrentino, 1990; Werebe, 1984). Authors such as Anderson and Kitchin (2000) have argued that the majority of day-to-day difficulties encountered by people with disabilities in relation to sexuality are caused by the failure of available education resources and services to provide them with clarification about the subject. According to Pinel (1999), the majority of people with disabilities reproduce a social image that can generate socialization problems related to deprivation of affection, emotional dependence and also difficulties in becoming adults capable of fighting for their rights, including those related to sexuality. So, the sexuality of people with disabilities is perceived in common sense - which reflects on work with teachers, diverse professionals, and clients themselves and their family members - based on different myths. Some of these myths are: people with disabilities are asexual (they have no feelings, thoughts or sexual needs); people with disabilities are hypersexual (their desires are uncontrollable and exacerbated); people with disabilities are unattractive, undesirable and unable to love and have a sexual relationship; people with disabilities are unable to enjoy normal sex and have sexual dysfunctions related to desire, excitement and orgasm; and reproduction for people with disabilities is always problematic because they are infertile, have children with disabilities or are unable to take care of them (Anderson, 2000; Amaral, 1995; Baer, 2003; Giami, 2004; Kaufman, Silverberg & Odette, 2003; Maia, 2011; Maia & Ribeiro, 2010; Pinel, 1999; Salimene, 1995). Those ideas are myths because they don't correspond to the truth. People with disabilities are always sexual beings, even though they might have some sort of physical or sexual limitation: they don't characterize
themselves as asexual nor as hypersexual, and their anxieties, necessities and desires are the same as those of people with healthy bodies. Additionally, possible problems in the sexual response phases, such as desire, excitement and orgasm, are common in groups with and without disabilities. In both cases there are resources and behavioral technologies that can help overcome these obstacles; therefore, the sex life of a person with a disability is not synonymous with incapacity and unhappiness. Belief in these myths reveals a biased way of understanding the sexuality of disabled people as deviant from normal standards, and it becomes an obstacle to love and to have sex for those who are stigmatized by the disability; because of this, clarifying these myths is a necessary task to minimize the prejudice that sustains and reproduces them (Maia & Ribeiro, 2010).
Sexual rehabilitation of people with spinal cord injuries
Spinal cord injury: Characteristics, etiology and prognosis
Spinal cord injury is a medical condition that severely affects various bodily functions, often causing motor paralysis, loss of sensitivity in certain body parts and lack of bladder or bowel control. These symptoms may be temporary, but often are permanent (Ducharme & Gill, 1997). Spinal cord injury, therefore, is defined as a clinical condition that produces alterations in motor, sensory and neurovegetative functions, which are also reflected in profound psychological and social changes. The spine consists of vertebrae superimposed on a regular basis, held together by ligaments and disposed on the center line of the posterior trunk. Its function is to hold the bones of the body and protect the spinal cord. The spine is divided into four regions: cervical, thoracic, lumbar and sacral. The spinal canal serves to protect the spinal cord, the roots of the spinal nerves and the meninges (Baer, 2003; Cardoso, 2006). To define the spinal cord injury, it is important to consider the specific circumstances of each case, depending on the level and extent of the injury. A neurological examination is able to evaluate the injury by determining the level of damage, whether it will result in paraplegia or quadriplegia, and whether it is complete or incomplete. Cardoso (2006) explains: "Thus, tetraplegia is defined as the loss or impairment of motor and/or sensory function in the cervical segments of the spinal cord caused by destruction of neural elements within the spinal canal, resulting in an alteration of function of the upper and lower limbs, trunk and pelvic organs. (...) In turn, paraplegia is defined as the loss or impairment of motor and/or sensory function in the thoracic, lumbar or sacral spinal cord, because of the destruction of neural elements within the spinal canal. In paraplegia the upper limb function remains intact, but depending on the level of the injury, the trunk, lower limbs and pelvic organs may be functionally impaired" (p. 58). Other issues arising from spinal cord injuries are secondary complications such as pressure ulcers, urinary infections, pain, spasticity and obesity, problems that worsen with time (Salimene, 1995; Maia, 2011). The causes of spinal cord injuries can be grouped into traumatic and non-traumatic. In the first group, the lesions occur in car accidents, falls, firearm injuries, at work or in sports practice, etc. In the second group are the medical conditions (spinal tumors, myelitis, scoliosis, multiple sclerosis, congenital malformations, spinal vascular accidents, etc.) (Cardoso, 2006; Salimene, 1995; Maia, 2006). Spinal cord injury affects mainly young male adults. It is rare among children, and these data are similar in different countries (Baer, 2003; Pinel, 1999; Salimene, 1995). Moreover, the spinal segments that suffer the most injuries are located between the cervical articulations. The severity of the neurological impairment resulting from a spinal cord injury reflects the nature and magnitude of the violence of the injury, which may result from bending, compression, hyperextension and flexion-rotation against any region of the column where this impact operates (Cardoso, 2006). Concerning these aspects, the prognosis will depend crucially on the area and the extent of the injury. According to Cardoso (2006), considering injuries in general, there is a mortality rate of 38% soon after the injury or in the initial phase, due to respiratory or trophic disease. However, currently, the life expectancy of individuals affected by the injury has increased significantly.
The sexual response in people with spinal cord injury
Physical disabilities, especially those of the spinal cord, which involve total or partial paralysis and loss of motor functions and feeling in the legs (paraplegia), or in the legs, torso and arms (tetraplegia), can have direct implications for the sexual response mechanism; that is to say, problems in the arousal phase (penile erection or vaginal lubrication) and, even more, in the orgasm and ejaculation phases. Depending on the level and extension of the spinal cord injury, some sexual response alterations are recurring, especially in men, where changes in ejaculation can occur (blocked ejaculation or retrograde ejaculation), as well as changes in the erection (partial or complete erectile dysfunction or problems of maintenance) (Baer, 2003; Cardoso, 2006; Ducharme & Gill, 1997; Maia, 2010; Maior, 1988; Pinel, 1999). Sexual function consists of three levels, the psychic, gonadal and neuromuscular, and for its manifestation to occur normally, good functioning and integration of these three levels are necessary (Maior, 1988). Salimene (1995) says it's evident that spinal cord injuries accentuate physical and functional limitations, but that's not to say that there are necessarily going to be problems in the overall sexual manifestation. According to Cardoso (2006), the limbic system and spinal cord centers constitute sexuality's neurological substrate, but this is influenced by cognitive and sociocultural mechanisms like fears, expectations and beliefs, and by the personal evaluation of one's sexual response. From a psychological point of view, sexual desire seems to be associated with cognitive activity and, from an organic viewpoint, it is related to cerebral activity through the limbic system, influenced by testosterone. Desire is governed by many biological mechanisms in relation to availability and the subject's receptivity to the other, which have psychological and social influences. In people with physical disabilities and spinal cord injuries, desire is a phase that might or might not suffer alterations, especially arising from psychological and social issues more than organic ones. It is common to hear among those with spinal cord injuries that desire decreases after the lesion, which could be related to the lesion itself, but also to mechanisms that affect the nervous system and even to reduced mobility, spasticity and problems with intestinal and bladder control. On the other hand, physical intimacy, even degenitalized, can be gratifying, and this can increase sexual desire (Cardoso, 2006; Maia, 2011; Maior, 1988; Pinel, 1999). From a neurological point of view, the autonomic nervous system is mainly responsible for the human capacity for excitement, and many psychological factors can prevent a person from feeling excitement by blocking the neurological signals. To define the excitement of a person with a spinal cord injury, it's necessary to know the level and extent of the injury and whether the sacral reflex arc was affected. If the reflex pathway is maintained, which occurs in spinal cord injuries above the sacral segments, the reflex erection is possible, but in complete lesions the psychogenic erection would be inexistent. Men with complete upper lesions can maintain their reflex erection capacity, but not the psychogenic one; in incomplete upper lesions, reflex erections would be normal and psychogenic erections could exist. In complete lower lesions, reflex erections would be impossible and psychogenic erections would be possible, and in incomplete lower lesions both erections would be possible. However, in every case,
the organic alterations depend on the emotional and social alterations (Cardoso, 2006; Maia, 2011; Maior, 1988; Pinel, 1999). The orgasm, however, can be felt in some cases, even though it's a complex phenomenon. Although the penile and vaginal sensations might not be felt by people who have spinal cord injuries, other physiological changes related to the orgasm, extragenital ones for example, can be observed and felt by these people: other erogenous zones let the subject experience sensations of pleasure and corporal satisfaction, or even the satisfaction of being with the other person. As a result of ejaculatory problems, masculine infertility is also frequent, mainly in complete lower lesions (Cardoso, 2006; Maior, 1988; Pinel, 1999). In women, the ability to get pregnant is preserved, but changes in the sexual response can also occur, such as alterations in clitoral or anal stimulation sensitivity, reduced lubrication and reduced congestion of the external genitalia. For men and women, orgasms are experienced more frequently in incomplete lesions; there are also the so-called "phantom orgasms" or "paraorgasms", which are pleasurable sensations after stimulation of the erogenous zones that are not affected by the lesion (Maia, 2006; Maior, 1988; Pinel, 1999; Salimene, 1995). Pinel (1999) explains: "Sexual response involves profound changes in the body as a whole and not just in the genitals: blood pressure and heart beat increase, the person becomes breathless, the skin blushes. Just as orgasms are not identical in intensity for the same person, the organic alterations will cause changes in perception. [...] Today we know that orgasm is possible after a spinal cord injury. Although it is not easy or automatic, orgasm can be built, regardless of erection, ejaculation or vaginal lubrication. This, however, usually involves a work of re-identification and redefinition of sensations [...]. The relearning of the spinal cord injured person goes further than physiotherapy and care of the bladder and intestines. It includes social, emotional and sexual restructuring that enables the person to live again" (Pinel, 1999, p. 220). Feminine reproduction is preserved after the lesion. In the case of men, the chance of ejaculation is low, and some fertility treatments that can be used or recommended are insemination, in vitro fertilization, gamete intrafallopian transfer and intracytoplasmic sperm injection (Fürll-Riede, Hausmann & Schneider, 2003), or electroejaculation, penile vibratory stimulation and pharmacological agents that induce ejaculation (Baer, 2003). It's common for people with spinal cord injuries to make comparisons with their sex life before the injury and to regard erections and orgasms as indispensable phenomena of sexual intercourse, and this increases feelings of failure and the degree of anxiety and depression, which end up decreasing desire and excitement (Baer, 2003; Cardoso, 2006; Pinel, 1999). In addition, some authors (Ferri & Gregg, 1998; Silva & Albertini, 2007; Soares, Moreira & Monteiro, 2008) argue that socially determined gender questions influence coping with the disability in different ways; that is, the impact of an acquired disability may have different psychosocial implications for men and for women.
Sexuality counseling
The spinal cord injury also involves important psychological changes that must be considered in clinical treatments for this population. It is common, given the situation of extreme physical and emotional dependence on other people, for spinal cord injured people to express attitudes of rejection and denial of reality. There are also feelings of denial, grief and anger, as well as reactions of depression and low self-esteem (Maior, 1988; Maia, 2006; Puhlmann, 2000). "The most common psychological reactions of people who become physically disabled involve emotional dependence, attitudes of rejection of reality, alternating phases of depression and euphoria, loss of self-esteem, lack of confidence and satisfaction with their own body, the presence of feelings of inferiority and neglect, decreased sexual desire, or excessive preoccupation with sexuality. There are also conflicts with body image, and feelings of shame, fear and isolation appear, with concerns of social and sexual rejection" (Puhlmann, 2000, p. 36). In this sense, sexuality is an important issue that deserves special attention from professionals in rehabilitation programs, because sexual dysfunctions are common in people suffering from spinal cord injury. However, few health professionals have specific training to attend to this demand (Maior, 1988; Maia, 2011; Pinel, 1999). The possibility of having a sexual dysfunction, especially among men, is usually a humiliating and difficult condition, because society in general values (and relates) social and sexual power. Sexual dysfunction treatment can be done with medication, always under the supervision of a doctor, associated with sexual therapy or psychotherapy. In the case of organic causes, sexual dysfunctions are usually treated with the following approaches: (a) intravenous medication, with the use of substances such as papaverine, phentolamine and prostaglandin E1, which basically is a penile injection that causes muscle tissue relaxation, thus favoring the erection; (b) a urethral medication system, with the introduction of prostaglandin E1 into the urethral canal; and (c) oral medication, such as sildenafil, which inhibits enzymes and assists smooth muscle with sexual stimulation. Other, invasive treatments can be vascular surgery (low success rate) and even a penile implant, placed in the corpus cavernosum, which provides a mechanical or flexible hydraulic base. Other treatments are non-invasive and non-pharmacological, such as the use of a penis pump or penile rings (Baer, 2003; Ducharme & Gill, 1997; Fürll-Riede, Hausmann & Schneider, 2003; Maior, 1988). Problems such as urinary incontinence and spasticity are also common. Some techniques that decrease spasticity are recommendable, like an appropriate temperature at the place of the sexual relation, massaging and antispasmodic medication. Also, there are certain positions that are important for stabilizing the articulation. In the case of incontinence, it's necessary that the bladder and rectum be emptied before the sexual relation, and the use of mattress protectors and towels facilitates the necessary hygiene (Fürll-Riede, Hausmann & Schneider, 2003). Today, there are different tools for sexual dysfunction arising from spinal cord injuries, including sexual therapy techniques that can help a person recover their sexual response. Sexual therapy and counseling in the rehabilitation process for the population with physical disabilities, more specifically those with spinal cord injuries, have proven to be an important path to sexual health (Blackburn, 2002; Cardoso, 2006; Chigier, 1981; Maia, 2006; Maior, 1988; Puhlmann, 2000).
According to Maior (1988), sexual counseling programs for people with spinal cord injury are built from general strategies of sex therapy, including education and information, attitude change, elimination of performance anxiety, techniques for improving communication and changing sexual behavior, attending to the impact of the injury on sexual function. These programs should include an initial assessment phase, a work contract and planned counseling sessions that can be individual or in a group. At the initial assessment it is necessary to survey the following information: (a) what sexuality was like before and after the injury; (b) what anal, bladder, urethral and genital sensitivity is like, whether any drugs and medicines are used, and how spasticity is controlled; (c) an investigation of the sexual response: desire, arousal, orgasm; and (d) an investigation of the reproductive functions: menstruation, ovulation and ejaculation (Maior, 1988; Maia, 2006). Maia (2006, p. 182) says that it is also necessary to investigate "sexual experiences prior to the injury, the frequency of interest and involvement in sexual activities, the most sensitive areas of the body, emotional relationships (whether or not there is a male or female partner) and the desire to have children". So, before intervention, a diagnostic evaluation is necessary, in which information regarding the sexual response before the injury is gathered, along with what the ideas about sexuality were, evaluations of urinary, intestinal and sexual function, and questions specifically related to masculinity and femininity. Objective data, such as skin sensitivity, reflex or voluntary motor activity, the integrity of the reflex arcs, and the level and degree of the spinal cord injury, are important for an appropriate diagnosis. The author adds that the more sexuality is seen as genital and focused on sexual functions, the more difficult sexual rehabilitation will be (Maior, 1988). Some psychologists and sexual therapists have invested in specialized care for people with disabilities in order to ease possible dysfunctions arising from the disability, with several behavioral techniques or the use of equipment and "sex toys", such as vibrators and lubricants (Baer, 2003; Fürll-Riede, Hausmann & Schneider, 2003; Maior, 1988; Puhlmann, 2000). "Sometimes people with disabilities need to be touched to have an erection. In this case, accessories that stimulate the sensations of the skin across the body can be used. [...] The stimulation of the sexual organs can be produced with caresses and with the encouragement of sensory responses. To make this process dynamic, we can use contrasts of cold and heat, or strong and weak stimuli, seeking to provoke the activation of reflexes and deep sensation. The very touch of warm and cold hands can trigger a reflex erection; massages with aromatic oils, or the subtle touch of soft tissues, may facilitate arousal and are being widely used by disabled people. The so-called electric massagers and vibrators have facilitated not only male ejaculation in some cases of physical disability, where the ejaculatory reflex is impaired, but also the female orgasm, by strengthening local stimuli" (Puhlmann, 2000, p. 105).
Along with this, it also takes time. Sexual readjustment does not happen immediately, because restructuring conditions require time, trust and practice. Masturbation can be a form of practicing without any demands from a partner and can help one get to know oneself sexually. It's necessary to have good communication, reduce anxiety and clarify expectations, talking about feelings of pain under special conditions such as spinal cord injury. It's necessary to relearn spontaneity and to know how to express fantasies and sexual desires. Experimenting with various sexual techniques, such as oral and anal sex, and trying different positions can be a very important resource for sexual rehabilitation (Baer, 2003; Ducharme & Gill, 1997; Kaufman, Silverberg & Odette, 2003). Finally, feeling desired and having high self-esteem are essential for sexual rehabilitation (Baer, 2003).
Psychotherapy processes can help reconstruct the personal perception of what it's like to desire and be desirable, and these should be given priority before applying sexual techniques. The first step is for subjects to recognize themselves as erotic human beings with disabilities. Other things should be considered along with the sexual response side: the existence of sensations of pain, fatigue, motor limitations, an impaired ability to communicate assertively, unfavorable cognitive conditions (destructive thoughts and beliefs), privacy issues, difficulty in perceiving stimulation and, finally, possible issues due to side effects of medication. All of this needs to be considered. In any case, sexual health should be ensured, preventing the transmission of sexually transmitted diseases, unplanned pregnancy and situations of violence (Ducharme & Gill, 1997; Kaufman, Silverberg & Odette, 2003). In sexual rehabilitation, it's necessary to discuss the responses organically produced by the disability, which in general become problematic, and to join them with the psychological and social issues. The psychological issues are prioritized when attending to people with disabilities, beyond or together with sexual techniques, addressing subjects such as: body image, confronting myths and prejudice, restructuring masculinity and femininity, reflecting on aesthetic standards, emotional difficulties that involve marital relationships, expectations about reproduction, or even the difficulties of co-occurring sicknesses. "From the point of view of attitudes, body image is a central issue. If a deficiency altered the appearance and/or mobility of a person beyond the accepted rules, the dislike of the body can assume proportions that interfere in the sexual encounter. Basically, if you hate the appearance of your body and how it behaves, it will not be easy to gladly offer it to a lover. Learning to love your own body, no matter how far it is from the ideal induced by the cinema (or even from a more reasonable standard), takes time and is part of a wider process of self-acceptance" (Vash, 1988, p. 90). Fear of sexual dysfunction, feelings of inferiority, problems with a companion or with finding a sexual partner, and lack of knowledge about how the body works, about the limitations due to spinal cord injuries, about the possibilities in sexual relations, and about possible problems and solutions are common self-esteem problems (Baer, 2003; Fürll-Riede, Hausmann & Schneider, 2003; Kaufman, Silverberg & Odette, 2003; Maior, 1988; Puhlmann, 2000). "A new image must be constructed from the reactions of this body and the reactions of others to a new body. [...] Initially, many adopt an attitude of isolation and even of indifference to their problem. To establish their new body image, spinal cord injured people need to know their limitations and modifications, including how to deal with the equipment that they use (wheelchair, crutches, urine collector), in a new experience of their own body; they must be able to expose this situation, which is different, to others. [...] People who base their self-esteem on physical capacity will probably struggle to readjust after the injury [...]. Developing a new body image and restoring self-esteem and sexual identity are the basic points for the re-balancing of the personality, with confidence then appearing to assume a positive social and sexual role" (Maior, 1988, p. 25). Finally, effective education and sexual rehabilitation programs should consider, above all, a few basic procedures. In the first place, group work is required. Group sessions are indispensable for sharing experiences,
frustrations and successes. Many subjects need to perceive that they are not alone in confronting sexual difficulties. Besides, family or couples' groups are interesting alternatives to the extent that family support is often necessary to recover self-esteem. Maior (1988, p. 93) says: "It is agreed that discussion groups should work with six to twelve people, including disabled people, partners and professionals. Although most programs work with groups, each participant is given the option of complementary individual counseling, individually or as a couple with a partner". Secondly, care should be given by a multidisciplinary team trained in the area, including psychologists, physical therapists, sexual therapists, doctors, etc. A treatment group that includes various professionals is essential to the whole care of people with a spinal cord injury (Baer, 2003; Cardoso, 2006; Maia, 2011; Maior, 1988). Sexual rehabilitation work should be comprehensive, considering emotional and labor issues, medical and disabling conditions, economic and social conditions, and questions of gender and sexual identity; ultimately, other conditions need to be met by diverse professionals if we hope to reach the person's overall sexual satisfaction.
Conclusion
Disability and sexuality are, above all, social phenomena; that is to say, they depend on social and historical representations of their conditions. Being disabled or dysfunctional manifests itself against forms of personal and social normality that are socially constructed. Given these forms, feelings of maladjustment are common among people with and without disabilities. In the case of people with physical disabilities, these sentiments are frequent, because the disability is visible and stigmatizes the subject as deviant, which ends up being generalized to their sexuality. The sexuality of people with physical disabilities reflects many social myths that were wrongfully imposed on these people, such as having an atypical and unhappy sex life. However, despite possible organic difficulties, it's psychosocial questions that most account for these difficulties, especially in the sexual area. In this sense, the sexual rehabilitation of people with physical disabilities should include organized dysfunction treatment, with the use of behavioral treatment and medication associated with sexual therapy or psychotherapy that includes reflection on social models of normality, corporal difficulties, aesthetics and sexual function. It's important to consider that manifestations such as problems with desire, excitement, orgasm or fertility, low self-esteem, etc., result from internalized prejudice; in other words, the root is in the permanence of stigmatizing and prejudiced representations within society. We should join forces, ensuring teamwork (doctors, psychologists and other professionals) and working with the injured patients, family and/or spouse together. | 6,752.2 | 2011-12-22T00:00:00.000 | [
"Psychology",
"Philosophy",
"Medicine"
] |
A Review on State-of-the-Art Power Converters: Bidirectional, Resonant, Multilevel Converters and Their Derivatives
With the rapid development of modern energy applications such as renewable energy, PV systems, electric vehicles, and smart grids, DC-DC converters have become the key component in meeting strict industrial demands. More advanced converters are effective in minimizing switching losses and providing efficient energy conversion; nonetheless, the main challenge is to provide a single converter that has all the required features to deliver efficient energy for different types of modern energy systems and energy storage system integrations. This paper reviews multilevel, bidirectional, and resonant converters with respect to their constructions, classifications, merits, demerits, combined topologies, applications, and challenges; practical recommendations are also made to give a clear picture of the recent challenges and limited capabilities of these three converters and to guide the research community in developing a new, efficient, and economic converter that meets the strict demands of modern energy system integrations. The needs of other industrial applications, as well as the number of components used, were also considered with a view to size and weight reduction, so as to achieve a power circuit that can effectively address the identified limitations. In brief, integrated bidirectional resonant DC-DC converters and multilevel inverters are expected to be well suited and in high demand in various applications in the near future. Given their merits, more studies are necessary to further reduce losses and component counts.
Introduction
The detrimental effect of electricity generation from conventional fossil fuel sources has led to the need to shift to renewable and clean energy, such as solar energy, wind energy, hydropower, and geothermal energy; this has now become more prevalent than ever [1,2]. The energy that the earth receives per hour from the sun is estimated to be equivalent to the whole energy that humans consume in a year [3]. Thus, tapping into this readily available resource should gradually eliminate the dependency on conventional energy sources and help in reducing global warming to ensure a cleaner and safer environment. By using these renewable energy sources as distributed energy resources (DERs), the implementation of costly transmission and distribution systems in hilly and rural regions can be avoided. By operating DERs as standalone renewable energy systems (SARES), the delivery of electric power to remote areas at reasonable costs can be ensured [4]. In the coming years, renewable energy sources, such as photovoltaic (PV) systems, fuel cells (FC), and wind energy farms [5,6], will lead power generation. A major characteristic of PV sources is their low DC voltage, which makes them inappropriate for direct microgrid use. Photovoltaic modules are typically connected in series to achieve higher voltages; this demands a huge number of PV components and much physical space. Efficient DC-DC converters are required to transform this low voltage to high voltage for better utilization of renewable energy sources. The converter must meet certain requirements, such as low cost, light weight, low voltage stress on the switches, and high power density [7]. Therefore, AC-DC/DC-DC conversion techniques have gained research attention nowadays for achieving better power conversion efficiency [8]. Figure 1 shows the application of different power electronic converters in renewable energy system applications. Many researchers have addressed different types of DC-DC converters for renewable energy applications and storage to improve and enhance efficiency whilst overcoming the weaknesses of converters. DC-DC converters are categorized into three major technologies based on their operation modes, as shown in Figure 2 [9-11]: linear mode, hard switching mode, and soft switching mode. The linear mode has features such as simplicity, low noise with good regulation, and quick response. On the other hand, its drawback is low efficiency due to power losses in various working conditions. Hard switching mode converters can be sub-categorized into non-isolated and isolated converters based on galvanic isolation. The buck, boost, buck-boost, and Cuk converters are typical examples of hard switching mode topologies without galvanic isolation (non-isolated), identified as chopper circuits. However, galvanic isolation (a transformer) is necessary for safety reasons when the converters are supplied by the utility grid. Power converters (PCs), with their control techniques, help regulate the voltages of nodes in microgrids with different types of loads, such as resistive, inductive, nonlinear, constant power, or critical loads. However, constant power loads (CPLs) affect the stability of the voltage at the output of PCs and are usually difficult to regulate with traditional control techniques [12,13].
Another type under the switching mode is the isolated converter; this category utilizes more than one switching topology, including half-bridge, full-bridge, dual half-bridge, flyback, and push-pull converters. The drawbacks of hard switching mode converters include high electromagnetic interference (EMI), high switching losses, and large size and weight, which limit the switching frequency. The third classification of DC-DC converter is soft switching, also known as resonant converters; these were developed to overcome the issues in hard switching, as revealed by several studies, especially those related to industrial applications [14,15]. Soft switching can either be zero current switching (ZCS) or zero voltage switching (ZVS). Compared to linear regulators, these two have better efficiency and the ability to work at high switching frequencies, which permits the use of a small ferrite transformer core; they can also work over a wider range of DC input voltages than linear regulators. Bidirectional DC-DC converters are receiving much attention in both academic and industrial applications; they are mainly used to maintain the reliability of systems and as an interface between the battery and supercapacitors as storage devices [17]. Bidirectional DC-DC power converters are increasingly being used in a variety of applications that demand power flow in both directions. These include, but are not limited to, energy storage systems, uninterruptable power supplies, electric vehicles, and renewable energy systems. The classification and different types of bidirectional converters are presented in detail in the following section [18]. Currently, resonant DC-DC converters are the preferred option for power conversion in many low- and high-voltage applications. Resonant DC-DC converters typically contain characteristics that reduce switching losses at the inverter switches and output rectifier diodes, allowing them to operate at higher switching frequencies and yield higher efficiency, resulting in smaller converters. Resonant DC-DC converters come in a variety of topologies, including series resonant DC-DC converters (SRCs), parallel resonant DC-DC converters (PRCs), and series-parallel resonant DC-DC converters (SPRCs) [19]. Due to their simplicity and popularity, many researchers have worked on and recommended resonant converters in various applications [16,20-22].
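As a simple worked example of the resonant behavior these converters exploit, the tank frequency of a series resonant converter is f_r = 1/(2π√(L_r C_r)), and the converter is operated around this frequency to achieve soft switching. The snippet below evaluates this relation; the component values are assumptions for illustration only.

```python
# Hedged sketch: the series resonant tank frequency f_r = 1/(2*pi*sqrt(Lr*Cr))
# around which an SRC is operated; component values are assumptions.
import math

def resonant_frequency(l_r: float, c_r: float) -> float:
    return 1.0 / (2.0 * math.pi * math.sqrt(l_r * c_r))

Lr = 47e-6    # resonant inductance, H (assumed)
Cr = 100e-9   # resonant capacitance, F (assumed)
print(f"f_r = {resonant_frequency(Lr, Cr) / 1e3:.1f} kHz")   # ~73.4 kHz
```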
In addition to DC-DC conversion, DC-AC conversion is also required to supply AC loads as well as for the grid integration of DERs. Multilevel inverters are among the most researched power converter topologies in industrial and residential applications. In recent years, multilevel inverters have received a lot of attention in medium-voltage and high-power applications owing to their numerous advantages; some of their advantages over traditional two-level inverters include lower EMI, less harmonic distortion, and lower voltage stress on semiconductor elements. Their drawback is that they require a large number of semiconductor elements. When compared to two-level inverter topologies with the same power ratings, multilevel inverters are more effective in eliminating the harmonic components of voltage and current waveforms. Multilevel topologies are classified into different types, such as cascaded H-bridge, diode-clamped, and capacitor-clamped inverters [23,24]. In applications requiring high-power converters, multilevel inverters are vital. They are also widely used in clean energy sources, where they serve as a connection between renewable energy sources (RESs), such as PV modules, and high-power loads [25]. Among the common application areas of multilevel inverters (MLIs) are Flexible Alternating Current Transmission System (FACTS) devices, power converters, and reactive power compensation systems for high-power AC motors [25-27]. Apart from individual topologies, the hybridization of topologies has recently become popular. For example, resonant converters can be enabled to operate in bidirectional mode for power transfer both ways. Bidirectional resonant converters are relatively easy to integrate with other components [28,29]. Thus, they are broadly used in battery chargers, supercapacitors, electric vehicles, high-voltage power supply applications, and renewable energy systems [30].
Multilevel inverters are hybridized with DC-DC converters to enable compact step-up DC-AC conversion, and topologies such as multilevel bidirectional converters and multilevel bidirectional resonant converters have been proposed. The key targets of multilevel bidirectional converters are to achieve and meet the requirements of: (i) a remarkable reduction in switching losses, (ii) a reduction in harmonic distortion (HD), and (iii) high efficiency with a low component count to provide an efficient and convenient size.
This article focuses on different topologies of bidirectional and resonant DC-DC converters, as well as multilevel inverters. The hybrid structures of these converter topologies are also presented. The different attributes, comparisons, advantages, and disadvantages of each topology and their applications are critically analyzed. The challenges and future prospects of the latest converter topologies are also elaborated. The remaining part of this article is organized as follows: Section 2 reviews bidirectional converters, their classification, and a comparison between isolated and non-isolated converters. Section 3 explains resonant converter structures and their classifications, while Section 4 focuses on multilevel inverter topologies with respect to diode-clamped, capacitor-clamped, and cascaded topologies. Section 5 highlights the combined topologies formed from pairs of these converters, while Section 6 describes the application of all these converters in grid connection and energy storage systems. The challenges and future perspectives are summarized in Section 7, while the last section (Section 8) presents the conclusion of this study.
Bidirectional DC-DC Converters
The continuous flow of power is an important concern when it comes to renewable energy systems; therefore, bidirectional DC-DC converters are employed to interface storage systems with the energy resource and the load by reducing or eliminating the fluctuation in the output of renewable energy systems that results from variations in climate conditions. They are also used between the energy source and motors supplied by batteries [17]. In medium-power devices, where the familiar and efficient energy storage elements are supercapacitors and batteries, the energy exchange between the storage device and the other components of the system requires the presence of a DC-DC converter, and such converters must have bidirectional power flow capability and adaptable control in all operation modes [31-33].
Only one-directional power flow management can be achieved with a conventional buck-boost converter, in contrast to bidirectional power, which can flow in two directions (forward (FW) and backward (BW)). Bidirectional DC converters (BDCs) are devices for either stepping up or stepping down the voltage level that can facilitate two-directional power flow (both forward and backward). Bidirectional DC-DC converters are mainly used to manage the flow (forward and backward) of power on a DC bus where power flows in both directions, as shown in Figure 1. The conversion of a conventional DC-DC converter into a bidirectional converter can be achieved using a bidirectional switch, with current flow in both paths enabled by a diode in anti-parallel with an insulated-gate bipolar transistor (IGBT) or metal-oxide-semiconductor field-effect transistor (MOSFET) employing a controlled switching procedure. Bidirectional DC/DC converters are of two kinds based on the galvanic isolation existing between the input and the output: isolated bidirectional DC (IBDC) and non-isolated bidirectional DC (NIBDC) converters [34,35]. The adaptability of the energy storage system can be improved by using a high-frequency isolated DC-DC converter to replace the line-frequency transformer. The circuits of most DC-DC converters are arranged asymmetrically to couple the two DC links at different voltages, from tens of volts to hundreds of volts [36]. Bidirectional converters have become more popular, as opposed to traditional unidirectional converters, as they permit power flow in both directions. They are mostly used in hybrid electric vehicles (HEVs), electric vehicles (EVs), uninterruptible power supplies (UPS), smart grids, renewable energy systems (RESs), and aerospace applications; they are also used in other systems that require batteries [18].
Classification of Bidirectional DC-DC Converter
As mentioned in the preceding section, bidirectional DC-DC converters are of two types, IBDC and NIBDC, as classified in Figure 3 based on galvanic isolation; these are further described below.

Non-Isolated Bidirectional DC-DC Converter (NIBDC)

Since this type of converter does not employ a high-frequency transformer, it cannot provide electrical isolation, which raises safety concerns in some applications. NIBDCs are considerably more efficient in low-power applications due to their ease of control and light weight [37]. Non-isolated bidirectional DC-DC converters have been evaluated in terms of ease of control, simplicity of circuit configuration, low EMI, and high stepping ratio by several researchers [29,38,39].
Bidirectional Buck-Boost Converter
This type is considered the fundamental circuit of bidirectional DC-DC converters. Figure 4 illustrates the structure of this converter [17,31]. It is a combination of buck and boost converters in parallel but oriented in opposite directions. Power flows from the high voltage (Vb) side to the low voltage (Va) side in the buck approach, and the converter operates in the reverse manner as a boost converter [18]. In the buck mode, switch Q1 is ON under duty cycle control, while Q2 is OFF. Similarly, when stepping up, Q2 is ON and Q1 is OFF. Cross conduction can be avoided by setting a dead time between both switches to ensure safe operation [40]. This topology is basic and has noteworthy effectiveness [41].

Bidirectional SEPIC-ZETA DC-DC Converter

This topology has two modes of power flow: forward and backward. In the forward power flow, this converter works as a SEPIC, while in the backward mode it works as a ZETA. The structure of the SEPIC-ZETA DC-DC converter is shown in Figure 5; it serves as an adjustment of the Cuk converter so that its output does not have inverse polarity, as is the case with Cuk converters. The operation of this converter relies on both buck and boost techniques [42]; forward power flow is achieved by switching ON Q1 and turning OFF Q2, so that it can work as a buck converter (SEPIC mode). However, in the reverse power flow mode, Q1 is switched OFF, so that the system works as a boost converter (ZETA mode). The use of two inductors, L1dc and L2dc, can reduce the ripple in the output voltage and the voltage rating stress on the switches.
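To make the two operating directions of the fundamental bidirectional buck-boost cell described above concrete, the sketch below evaluates its ideal (lossless) steady-state conversion ratios; the bus and battery voltages are assumed example values, and losses and dead time are ignored.

```python
# Sketch of the ideal (lossless) conversion ratios of the bidirectional
# buck-boost cell described above; voltages and duty cycles are assumed.

def buck_output(v_high: float, duty: float) -> float:
    """Forward (step-down) mode, Q1 modulated: V_a = D * V_b."""
    return duty * v_high

def boost_output(v_low: float, duty: float) -> float:
    """Reverse (step-up) mode, Q2 modulated: V_b = V_a / (1 - D)."""
    return v_low / (1.0 - duty)

V_BUS, V_BAT = 48.0, 24.0                 # assumed high and low side voltages
d_buck = V_BAT / V_BUS                    # duty needed to charge the battery
d_boost = 1.0 - V_BAT / V_BUS             # duty needed to feed the bus
print(buck_output(V_BUS, d_buck))         # 24.0 V into the battery
print(boost_output(V_BAT, d_boost))       # 48.0 V back onto the bus
```

In a real design, the dead time mentioned above slightly distorts these ideal ratios, which closed-loop duty control compensates for.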
Bidirectional Cuk Converter
By using a MOSFET instead of a diode, the bidirectional Cuk converter is an improvement of the unidirectional Cuk converter, as shown in Figure 6. This converter is a better choice for interfacing supercapacitors and batteries in circuits, since it produces less output ripple than the cascaded buck-boost and bidirectional buck-boost converters [43-45]. A reduced output current ripple can be achieved by coupling the L1dc and L2dc inductors. Q1 acts as the control switch in the forward power flow mode while Q2 is switched OFF and the body diode of switch Q2 serves as the main diode. Conversely, Q2 serves as the active switch in the backward power flow while Q1 is switched OFF, leaving the body diode of switch Q1 to work as the primary diode.

Cascaded Bidirectional Buck-Boost Converter

Figure 7 displays another converter topology that can be achieved by cascading a buck converter with a boost converter. This topology is the outcome of cascading two buck-boost BDCs [18]. Operation in each of the four quadrants is conceivable with this topology; thus, it works in buck and boost modes in both directions, and this four-quadrant operation makes the topology highly adaptable. Despite this, it has some weaknesses, such as the use of a higher number of switches, which causes higher switching losses; it also relies on complex control algorithms and experiences additional operating losses due to reverse diode recovery [38,46,47]. In ESSs, the cascaded buck-boost (CBB) converter is frequently utilized. In comparison to the combined half-bridge (CHB) converter, the CBB converter is smaller and more efficient at converting power because it only utilizes one inductor [48].
Switched Capacitor Bidirectional DC-DC Converter
In switched capacitor bidirectional DC-DC converters, the exchange of charge between capacitors is achieved by employing integrated circuits (ICs) in the DC-DC converter circuit (see Figure 8). This integration of ICs is less complex due to the lack of any need for magnetic components in the non-isolated DC-DC converter. Due to the large number of passive elements, which account for electromagnetic interference (EMI), such converters suffer from high ripple at the output. This issue can be prevented by employing a control scheme, for example voltage and current control techniques, but this increases the complexity of the system and the related cost [18,49].

Interleaved Bidirectional DC-DC Converter

This topology involves the combination of two or more converters in parallel with a relative phase shift of 360°/n. Some of the advantages of interleaved converters include solving the issue of output current ripple, current splitting (I/n), better system productivity, higher power density, and better thermal capacity. Owing to the current splitting into the parallel routes, there is a lower rate of conduction losses, and switches with lower current ratings can be used. The interleaving strategy reduces the current and voltage ripple at the input of the DC-DC converter without increasing the switching losses; thus, the efficiency of the system is higher [50]. There is only one type in this topology, which is the two-phase interleaved non-isolated bidirectional DC-DC converter; it comprises two output stages, 180° out of phase. A simplified two-phase interleaved DC-DC converter circuit is shown in Figure 9. Note that the interleaved half-bridge converter is the commonly used topology [31,51,52], as it offers better voltage transformation even when the size of the converter is small; it also reduces switching losses. However, this converter has some drawbacks, such as high cost as a result of the higher number of elements and its complex control technique [53].

Isolated Bidirectional DC-DC Converter (IBDC)

There are no secure galvanic isolation provisions in the non-isolated bidirectional converter; hence, most applications use IBDCs rather than NIBDCs. IBDCs rely on a high-frequency transformer to offer galvanic isolation. In numerous applications, galvanic isolation is vital for protecting the source against overload, in addition to reducing noise and coordinating voltages between stages [55]. Most IBDCs have similar structures, as shown in Figure 10. This converter works in two stages: high-frequency switching by the DC-AC converter, and the utilization of a high-frequency transformer to maintain the galvanic isolation between the two sources, as well as to coordinate the voltage between the various stages for the proper design and enhancement of those stages [31,56]. Isolated bidirectional DC-DC converters with a basic structure were proposed by [31] to function as a flexible interface for power processing between the energy storage system and the other system components. The IBDC has some benefits, such as having no need for active or passive elements for soft switching. Secondly, the structure of the transformer is simple; therefore, maintenance and design tasks are also simple. Additionally, both sides face the same level of stress in the switch currents. This approach also relies on the average current control method or peak current mode control. The absence of extra passive components ensures quicker dynamic behavior.
On the other hand, this converter has drawbacks, such as losing soft switching under light load conditions, and its control is sensitive to slight variations of flux, particularly when the bus voltages are high. An additional weakness is that the currents flowing in the DC buses carry high ripple content; this demands suitable filtering circuits, which makes the circuit complex [56,57]. The isolated converter has many topologies, such as the push-pull IBDC, forward IBDC, flyback IBDC, dual half-bridge IBDC, Cuk IBDC, and dual active full-bridge IBDC [58-61]. The efficiencies of the full-bridge and half-bridge topologies endear them to many applications [62,63].
Dual Half-Bridge (DHB) IBDC
One of the most commonly used isolated bidirectional DC-DC converters is the dual half-bridge (DHB) converter (see Figure 11) [18,64]. Isolated DHB bidirectional DC-DC converters have great power density, soft switching, and simple control; thus, they are suitable for EV applications, with an approximate efficiency of 92 to 94%. They have two sides, a low and a high voltage side: voltage feeds the half-bridge converter on the right side (A side), while current feeds the half-bridge on the low voltage side, also called the boost half-bridge (B side). The latter is usually on the lower side, since it contains a capacitor or battery DC energy source for which a low current ripple is preferred [65]. Additionally, the battery section of the circuit contains an inductor Ldc, and there are two half-bridges, one on either side of the main transformer. For each switching device there is a small parallel capacitor that enables soft switching. The boost-style working mode of the circuit is established when power flows from the low voltage side to the high voltage side (HVS); this maintains the HVS voltage at the expected high level. Conversely, the circuit works in the buck mode when recharging the battery from an RES. The HVS switching is performed with IGBTs, while the low voltage side (LVS) switching is performed with MOSFETs. Note that the inductor and the LVS half-bridge are arranged uniquely, with the LVS half-bridge having dual roles: (i) serving as a boost converter to raise the voltage; and (ii) serving as an inverter to raise the frequency of the AC voltage [66]. More current is drawn by the LVS boost converter from the low voltage source than by a full-bridge voltage source inverter. The boost function is obtained by merging the LVS half-bridge and the inductor.
Dual Active Full-Bridge (DAFB) IBDC
One of the most common topologies employs back-to-back bidirectional techniques isolated by a high-frequency transformer. Back-to-back converters can be voltage-fed or current-fed, in half-bridge or full-bridge configurations. Figure 12 illustrates the configuration of the full-bridge IBDC, which utilizes two full-bridge stages on both sides of the transformer. The power transmission of bidirectional converters is proportional to the number of switches, and the high productivity and high power density of this topology make it appealing to hybrid energy systems [67]. This relation between power capacity and switch count suggests that the DAFB-IBDC has the greatest power capacity (efficiency of around 95%); thus, this converter is well suited to high-power applications such as hybrid energy systems. In this structure, a full bridge is employed at either end of the isolation transformer, while a soft-switched phase-shift approach is used to implement the control. To provide an approximately square-wave AC voltage, the diagonal switching pairs in every converter are switched on simultaneously with a fifty percent duty cycle (excluding a small dead time), and the two legs across the transformer terminals have a 180° phase shift. An important parameter, denoted ϕ (phi), is the phase shift between the two AC voltages; it determines the quantity and direction of power transfer between the DC buses. This parameter can be modified to achieve fixed-frequency operation with full control [68]. Figure 13 illustrates the contrast between the half- and full-bridge topologies. In the half-bridge topology, the voltage stress on each switch is twice the DC input voltage (2V_dc), while the current stress equals the load current I_ac. In the full-bridge topology, the voltage stress on each switching device equals the DC input voltage (V_dc), while the current stress equals the load current I_ac. Figure 13 also indicates the total device rating (TDR) of the half- and full-bridge topologies. For the full-bridge topology, with four devices each stressed at V_dc and I_ac, the TDR is calculated as TDR = 4 × V_dc × I_ac, where the output power P_o determines I_ac. Meanwhile, the TDR of the half-bridge topology, with two devices each stressed at 2V_dc and I_ac, is estimated as TDR = 2 × 2V_dc × I_ac = 4 × V_dc × I_ac.
Figure 13. Comparison of (a) full-bridge topology and (b) half-bridge technique [69].
Hence, the TDR is the same for the dual full-bridge and dual half-bridge topologies at the same output power. Although half-bridge devices are exposed to twice the DC input voltage, this is acceptable for EVs, HEVs, and fuel cell applications, where the DC input voltage is low (e.g., a 12 V battery). The dual half-bridge topology also uses fewer devices than the full-bridge topology.
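As a hedged illustration of how the phase shift ϕ sets both the amount and the direction of the transferred power, the sketch below evaluates the classical dual-active-bridge phase-shift power expression, P = nV1V2ϕ(π − |ϕ|)/(2π²fsL). The bus voltages, turns ratio, series inductance, and switching frequency are illustrative assumptions rather than values from the reviewed designs.

```python
import numpy as np

def dab_power(v1, v2, n, f_sw, L, phi):
    """Classical DAB phase-shift power transfer (phi in radians).

    Positive phi moves power from bridge 1 to bridge 2; a negative phi
    reverses the direction, which is what makes the converter
    bidirectional. v2 is referred to the primary through the turns
    ratio n; L is the total leakage/series inductance.
    """
    return n * v1 * v2 * phi * (np.pi - abs(phi)) / (2 * np.pi**2 * f_sw * L)

# Illustrative (assumed) operating point: 400 V bus, 48 V battery,
# 8:1 turns ratio, 100 kHz switching, 20 uH series inductance.
for phi_deg in (-45, -15, 15, 45, 90):
    p = dab_power(400, 48, 8, 100e3, 20e-6, np.radians(phi_deg))
    print(f"phi = {phi_deg:+4d} deg  ->  P = {p/1e3:+6.2f} kW")
```

Maximum transfer occurs at ϕ = ±90°, consistent with the fixed-frequency, full-control behavior noted above.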
Half-Bridge-Full-Bridge Bidirectional DC-DC Converter
As a variation on the dual active bridge, an isolated bidirectional DC-DC converter was proposed for a UPS design that uses a voltage-fed half-bridge topology on the primary side and a voltage-fed full-bridge topology on the secondary side of the transformer (see Figure 14) [18]. It has simpler control requirements than the DAB because of the lower number of switches. In particular, it is well suited to the integration of a two-switch buck-boost converter on the half-bridge side to obtain a complete UPS topology. Other variations of this configuration have been proposed, such as a full-bridge-half-bridge bidirectional DC-DC converter paired with impedance networks to improve performance [70].
Bidirectional Flyback
When magnetic isolation is required, the well-known flyback converter is obtained by replacing the inductor of the buck-boost converter with a transformer. A bidirectional isolated buck-boost converter can thus be constructed by applying the bidirectional evolution technique to non-isolated topologies, as shown in Figure 15. The gain of the converter in forward power flow is obtained by applying volt-second and charge-second balancing and, as expected, equals the voltage gain ratio of the flyback converter. It is worth noting that the transformer design technique must be considered, and a voltage clamp snubber is required to manage the energy of the flyback transformer's leakage inductance [71].
Push-Pull Bidirectional DC-DC Converter
Based on the unidirectional push-pull converter, the bidirectional push-pull converter (see Figure 16) was suggested in order to allow power to flow in both directions. Like unidirectional push-pull converters, bidirectional push-pull converters use a multi-winding transformer to convert the power. A three-phase bidirectional push-pull converter was also proposed to extend this approach to high-power applications [73].
Comparison of NIBDCs and IBDCs
In this section, a comparison between the two main groups of configurations, isolated and non-isolated topologies, is made in terms of their advantages and disadvantages, as illustrated in Table 1.
Table 1. Pros and cons of NIBDC and IBDC bidirectional DC-DC converters.
NIBDC
Advantages: (1) The ripple current is low on both sides. (2) It has short-circuit safety. (3) It can operate over a wide range of voltages and different voltage levels. (4) It uses only two switches, which simplifies the driver circuitry and reduces the driving power.
Disadvantages: (1) It works in only one mode per power-flow direction, either buck or boost. (2) The structure becomes impractical when the voltage ratio is raised. (3) There is no galvanic isolation between the two sides.
IBDC
Advantages: (1) The current stresses on the switches are almost equal on both sides. (2) Soft switching can be achieved without the need for additional active or passive components. (3) It has a simple construction that simplifies design and maintenance. (4) It has fast dynamic behavior due to the lack of additional passive components. (5) It supports average current mode control or peak current mode control.
Disadvantages: (1) Under light load conditions, the converter might lose soft switching. (2) The current that flows in the DC buses contains high ripple content, which requires suitable filtering circuits and complicates the circuit. (3) The control is extremely sensitive to slight flux variations, particularly when bus voltages are high. (4) It has a relatively high number of components, leading to greater driver volume, gate losses, and cost in contrast to topologies with a low switch count.
Resonant Converter Families
Resonant converters have been studied since the 1980s to meet industrial requirements such as efficient energy conversion, higher power density, and smooth waveforms. Initially, the idea was to integrate resonant tanks into converters to generate oscillatory voltage and/or current waveforms that ensure ZVS or ZCS conditions for the power switches [23]. Soft-switching strategies are introduced to reduce switching losses, current and voltage stresses, and EMI. Soft-switching conditions allow the switching frequency to be increased, and thus the size and volume of the converter can be decreased. Soft-switching converter topologies are considered an improved or enhanced generation of hard-switching types [74][75][76][77]. Throughout the 1990s, new generations of soft-switching converters were created that combine the advantages of traditional PWM converters with those of resonant converters. Compared to traditional PWM converters, soft-switching converters have similar waveforms except for the smooth nature of the rising and falling edges, which are devoid of transient spikes. Unlike resonant converters, new soft-switching converters usually use resonance in a controlled fashion: to create ZVS and ZCS conditions, resonance occurs just before and during the turn-on and turn-off transitions, while otherwise the converters act as conventional PWM converters [23].
Structure of Resonant Power Converter
As shown in Figure 17, the structure of a resonant converter consists of three stages [16,26,78,79]: the first stage is the control switching network (CSN), the second is the resonant tank network (RTN), and the third is the diode rectifier network with a low-pass filter (DR-LPF). Each stage is assigned a specific task in accomplishing the overall goal of the resonant converter. The CSN chops the DC source at the operating frequency to generate the voltage fed into the next stage. To reduce THD, the second stage, the high-frequency resonant tank network (RTN), which consists of two or more reactive components, shapes the voltage and current into sinusoidal signals [80]. This stage acts as a frequency-selective network that buffers energy between the CSN and the load. At resonance, the impedances of the inductance and the capacitance are equal in magnitude, which defines the resonant frequency. Finally, a rectifier network rectifies and filters the incoming signal to generate the required DC output voltage [81].
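Since the text defines resonance as the point where the inductive and capacitive impedances are equal in magnitude, a minimal sketch of that condition is given below; the tank values are illustrative assumptions.

```python
import math

def resonant_frequency(L, C):
    """f_r = 1 / (2*pi*sqrt(L*C)): the frequency where |Z_L| = |Z_C|."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

L_r, C_r = 25e-6, 100e-9          # assumed tank values
f_r = resonant_frequency(L_r, C_r)
w_r = 2.0 * math.pi * f_r
print(f"f_r = {f_r/1e3:.1f} kHz, |Z_L| = |Z_C| = {w_r*L_r:.2f} ohm")
```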
Control Switching Network (CSN)
The CSN is a setup that facilitates the conversion of DC power into AC power. The most familiar switching networks are full-and half-bridge as their utilization is dependent on the required power. The full-bridge inverter is mainly used in high-power applications, while half-bridge inverters can offer only 50% of the active switch input voltage. As the latter has low levels of voltage transition, it is ideal for high input voltage applications [82]. Most times, resonant power converters and conventional DC-DC converters seem alike in terms of achieving soft switching and the chance of working at a high switching frequency. This is due to the possibility of DC-DC power conversion from DC to AC using inverters. The AC can either be stepped up or down using an electromagnetic component, which then goes through a rectifier network to be supplied as DC power to the load. Typically, resonant converters are employed with full-or half-bridge inverters, together with each full-bridge or center-stapled rectifier [83,84]. The CSN, as depicted in Figure 18, can generate a square waveform voltage V S (t) (Volt) of the switching frequency Fs (ω S = 2π fs), as represented by the Fourier series in Equation (3). Considering the response of the resonant tank that has been noted to overpower the basic component fs of the voltage waveform Vs(t), the infinitesimal response exhibits harmonic frequencies nfs, n = 3, 5, 7, . . . Therefore, the power that corresponds to the basic voltage waveform Vs(t) component is propagated to the resonant tank, as shown in Equation (4). This basic component is a sinusoidal waveform of peak amplitude equal to (4/π) times the DC source voltage. The basic component is in the same phase as the initial waveform. Turning ON S1 produces a positive sinusoidal switched current (t), while the negative version is produced by turning OFF S2 since the two switches work at the same time and its peak Is 1 amplitude with phase is equivalent to ϕs. In the meantime, DC to CSN input current is obtained by dividing the sinusoidal switched current over half the switching duration, as expressed in Equation (5) [9,85,86].
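Equations (3) and (4) are not reproduced in this excerpt; a plausible reconstruction, assuming the usual square wave swinging between ±V_dc, is
V_S(t) = (4V_dc/π) Σ_{n=1,3,5,...} (1/n) sin(nω_S t), with ω_S = 2πf_s (cf. Equation (3)),
V_S1(t) = (4V_dc/π) sin(ω_S t) (cf. Equation (4)),
which is consistent with the stated peak amplitude of (4/π) times the DC source voltage.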
Resonant Tank Network (RTN)
The second stage of the resonant converter is the resonant tank, also termed the resonant circuit. It is considered one of the most critical parts of a resonant power converter. This network contains an inductor-capacitor (LC) circuit (reactive elements) that stores energy, which oscillates at the frequency of the resonant circuit. The back-and-forth movement of energy between the inductor and the capacitor yields resonance in the LC circuit. This repetitive exchange of electrical energy between a fully charged inductor and a capacitor generates an electromagnetic oscillation that is useful in various applications (for instance, in telecommunications technology). In addition, the tank can be tuned to a particular resonant frequency by choosing the values of the reactive components. There are various types of RTNs, all of which can be categorized by three major factors [87,88]. The first classification is based on the connection arrangement of the tank elements, such as the series resonant converter (SRC), parallel resonant converter (PRC), and series-parallel resonant converter (SPRC) [89], as shown in Figure 19. The second is based on the number of reactive elements (the order of the transfer function). The third is based on whether the resonant tank has a single element or multiple elements [86,90], distinguishing two-, three-, and multi-element resonant tanks. There are several resonant power converter topologies with two elements (see Figure 19); types a and b are regarded as second-order resonant tanks and, owing to their simple analysis, they are the most common and simplest topologies. Some RTNs are only suitable for voltage sources, while others suit current sources. In some cases, SRCs and PRCs are not ideal for certain applications, such as contactless energy transfer devices [91] and high-voltage applications [16].
The development of third-order resonant tanks was conceived as a solution to the problems of the two-element RTN; a third element was added to the two-element resonant network to create a three-element resonant network. It can be viewed as combining the strengths of the two most popular resonant tanks, the SRC and PRC, while avoiding their weaknesses. There are thirty-six distinct third-order RTN tanks [80,89,92]; some of them consist of two capacitors and one inductor, while others consist of two inductors and one capacitor, as shown in Figure 20a-d. Three-element RTN resonant converters have been widely used in many industries, for example LLC, LCC, CLL, LCL, and hybrid series-parallel resonant converters (RCs) [93][94][95][96]. LLC and LCC are the commonest forms of third-order RTN converters. LLC RTNs are considered three-element parallel RCs because they exhibit the features of the series resonant converter while integrating a parallel inductor placed before the load. Having four or more elements, as shown in Figure 20d, makes the tank a multi-element resonant tank. Generally, relations exist between tanks of different orders: a resonant tank network with multiple elements can often be reduced to an equivalent network with fewer tank elements [89].
Rectifier Network with Low-Pass Filter
The DR-LPF is mainly employed to rectify and filter the AC waveform as the last stage of the resonant converter structure, after the RT network has generated sinusoidal current and voltage waveforms, in order to attain the required DC output waveform. Many studies on resonant power converters have considered the DR-LPF as a center-tapped or full-bridge rectifier. The high voltage stress on the diodes restricts the practical suitability of center-tapped rectifiers; the low-pass filter itself is implemented in one of two forms, capacitive or inductive [86,97,98].
• Diode rectifier network with capacitive low-pass filter (DR-CLPF) (Figure 21): The equivalent circuit of the DR with a capacitive output filter and a parallel load resistor can be analyzed from the input current i_R(t), which, owing to the series connection, is expressed in Equation (7) [10]. This current i_R(t) is rectified by the DR, as described in Figure 21. Due to the filtering action of the capacitor C_f, only the DC portion flows into the load, generating the current I_o and the related DC voltage V_o; at steady state, the DC portion must equal the current I_o. The voltage V_R(t) at the DR input can be obtained by noting that the conducting diodes of the DR change when the current i_R(t) crosses zero: V_R(t) equals V_o when i_R(t) is positive and −V_o when i_R(t) is negative [10].
• Diode rectifier with inductive low-pass filter (DR-LLPF) (Figure 22): Here, a sinusoidal voltage feeds the DR, and the input current i_R(t) exhibits a square waveform [10,16].
Classification of Resonant Converter
Resonant converters are classified into three main categories: conventional, quasi-, and multi-resonant converters, as shown in Figure 23. Conventional resonant converters can be divided into phase-shift-modulated converters and load resonant converters (LRCs). Quasi-resonant converters (QRCs) are considered a combination of resonant and PWM converters, where the underlying principle is to replace the power switch with a resonant switch [79]. Studies in [16,80,99,100] present brief and useful explanations of QRCs, as well as of the different types of multi-resonant converters that have been proposed to overcome the weaknesses of QRCs. It should be noted that two resonant capacitors with a resonant inductance (called a multi-resonant network) can be used to achieve zero voltage switching [86]. The following subsections elaborate on the different conventional resonant converters, including the series, parallel, and series-parallel load resonant converters.
Series Resonant Converters
Load resonant converters (LRCs) have several distinct characteristics compared with conventional power converters. LRCs are particularly appropriate for high-voltage applications because they allow operation at high frequency, reducing the equipment size without affecting the power conversion efficiency or further stressing the switches. There are three different combinations of LRCs: series resonant converters, series-parallel resonant converters, and parallel resonant converters [79]. The DC-DC series resonant converter (SRC) has been utilized in a wide range of voltage and energy applications [101]. In an SRC, the tank is mounted in series with the rectifier and load network; hence, the load and resonant tank voltages are set by a voltage divider, and the resonant tank impedance varies with the frequency of the voltage driving the tank [85]. The load in an SRC is connected in series with the resonant tank circuit formed by L_r and C_r. Figure 24 shows a half-bridge configuration: when I_Lr is positive and T_1 is active, the resonant inductor current flows through T_1; otherwise, it flows through diode D_2. Conversely, when I_Lr is negative and T_2 is active, the current flows through T_2; otherwise, it flows through diode D_1. Both active switches work in a complementary fashion in steady-state symmetrical operation. The converter has several possible operating modes based on the ratio of the switching frequency F_s to the converter resonant frequency F_r [102][103][104]. The impedance of the resonant tank, as shown in Figure 25, varies as a function of the driving voltage frequency and can therefore be modified by adjusting the frequency of the voltage applied to the resonant network (see Figure 25); the equivalent resistive load presented to the RTN by the rectifier can be estimated using Equation (15), while the input voltage divides between the tank impedance and this effective resistance [104]. The voltage gain of the SRC is therefore expected to be <1 and can be estimated for the SRC circuit configuration using Equation (16). In a light load scenario, where the load resistance is much higher than the impedance of the resonant network Z_O, almost all the input voltage appears at the load; hence, it is hard to regulate the output at light load [87,105].
The voltage gain is given by Equation (16), where Q is the load quality factor. As mentioned above, there are several possible operating modes of the SRC, defined by conduction mode, switching frequency, and soft-switching ranges; these modes and their limits have been investigated in previous studies [106,107]. The SRC modes can also be classified into discontinuous conduction mode (DCM) and continuous conduction mode (CCM) based on the ratio of the switching frequency F_s to the converter resonant frequency F_r.
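Equation (16) is not reproduced in this excerpt; the sketch below uses the standard series-resonant gain expression, M = 1/√(1 + Q²(F − 1/F)²) with F = F_s/F_r, which is assumed here to match the cited form, and confirms numerically that the gain never exceeds unity.

```python
import numpy as np

def src_gain(F, Q):
    """Assumed SRC voltage gain vs normalized frequency F = Fs/Fr."""
    return 1.0 / np.sqrt(1.0 + Q**2 * (F - 1.0 / F) ** 2)

for Q in (0.5, 1.0, 5.0):
    row = ", ".join(f"F={F}: {src_gain(F, Q):.3f}" for F in (0.8, 1.0, 1.2))
    print(f"Q = {Q}: {row}")
```

At F = 1 the gain is unity regardless of Q, while at light load (small Q) the gain stays near unity over a wide frequency range, matching the regulation difficulty noted above.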
Parallel Resonant Converters
The parallel resonant converter is classified as a two-element tank converter, as shown in Figure 19b. The resonant capacitor C_r is placed in parallel with the load and the diode rectifier network DR. When the effective load resistance R_ac is much larger than the reactance of the resonant capacitor C_r, the resonant current is nearly independent of the load. Furthermore, the voltage across the parallel resistance R_ac and the resonant capacitor increases as the load is reduced. Because of its voltage filtering, the RTN delivers a sinusoidal voltage V_R, and the DR-LPF set must be of an inductive nature. The equivalent resistance R_ac seen from the input terminals of the DR-LPF set is given by Equation (18), the load quality factor by Equation (19), and the PRC voltage gain by Equation (20). PRCs can step the output voltage down or up depending on the variation in the control switching frequency. The output voltage can be regulated across load states, whereas the resonant current is limited by the resonant inductor; this makes the PRC appropriate for open- and short-circuit applications [80]. The AC-equivalent circuits are shown in Figure 26.
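Equations (18)-(20) are not reproduced in this excerpt; plausible reconstructions, based on the standard parallel-resonant results and therefore only assumed to match the cited forms, are
R_ac = (π²/8) R_L (cf. Equation (18)),
Q = R_ac/Z_0, with Z_0 = √(L_r/C_r) (cf. Equation (19)),
M(F) = 1/√((1 − F²)² + (F/Q)²), with F = f_s/f_r (cf. Equation (20)),
where R_L is the DC load resistance. Unlike the SRC, M can exceed unity near resonance (M ≈ Q at F = 1), which is why the PRC can step the output voltage up as well as down.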
Series-Parallel Resonant Converters
Series-parallel resonant converters (SPRCs) combine the advantages of the SRC and PRC. The SPRC has an additional capacitor or inductor linked into the resonant tank circuit [23]. Figure 20a shows an LCC-type SPRC, in which an additional capacitor is placed in series with the resonant inductor. The LCC offers both load-independent output voltage and load-independent output current. This topology is dominated by the parallel resonant frequency, and consequently it presents the same shortcomings as the PRC: the LCC cannot operate safely with an open circuit or a short circuit. Figure 20b indicates an LLC-type SPRC, in which an additional inductor is connected in parallel with the resonant capacitor of the SRC. The LLC achieves the no-load regulation that was not possible in the SRC by employing an inductor in parallel with the resonant capacitor. Through this modification, the LLC topology allows the output voltage to be regulated from zero to maximum under any load condition with relatively small switching frequency variation. The LLC also has some restrictions; for instance, startup and short-circuit protection are difficult to achieve due to the flat gain above the series resonant frequency f_rs. As with the LCC, the LLC topology cannot operate safely with an open circuit at frequencies close to f_rs and cannot operate safely with a short circuit at frequencies close to the parallel resonant frequency f_rp [80,108]. There are, however, many possible combinations of the resonant tank circuit; a detailed analysis can be found in [109]. In this context, the LLC type is discussed as one representative of this technology, as shown in the equivalent circuit in Figure 27. The LLC equivalent circuit of Figure 27 has a capacitor C_r connected in series with the inductor L_r, and the load is connected in parallel with the inductor L_p. V_AB is the fundamental component of the square waveform V_in produced by the switches of the CSN. The inductor L_p can be replaced by the magnetizing inductance if a transformer is used in the circuit. The topology has two resonant frequencies: the series resonant frequency f_rs, due to the resonant elements L_r and C_r and given by Equation (21), and the parallel resonant frequency f_rp, due to all the RTN elements and given by Equation (22). Note that f_rs > f_rp and f_rp ≠ 0.
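From the element roles stated above, Equations (21) and (22) can be reconstructed as
f_rs = 1/(2π√(L_r C_r)) (cf. Equation (21)),
f_rp = 1/(2π√((L_r + L_p) C_r)) (cf. Equation (22)),
so that f_rs > f_rp follows directly from L_r + L_p > L_r.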
Multilevel Inverters (MLIs)
In recent years, a growing number of industrial applications have required higher-power equipment. Medium-voltage and megawatt-level converters are needed in some medium-voltage motor drives and utility applications. It is difficult to connect a single power semiconductor switch directly to a medium-voltage grid; as a result, for high-power and medium-voltage conditions, the multilevel power converter structure has been adopted as an alternative [23]. Many MLC topologies exist; thus, in this study, only the most common multilevel converters are assessed, and the pros and cons of these multilevel inverter topologies are discussed in detail in this section. Besides being employed as high-power converters, multilevel converters are extensively utilized for renewable energy sources (wind, fuel cells, etc.) by associating the PV modules on one side with high-power loads on the other side [25]. Multilevel converters began as a three-level inverter introduced by [110]. The main difference between the two-level voltage source inverter (VSI) and the MLC is in the number of voltage levels: the two-level VSI generates only two voltage levels, while the MLI can in principle generate an arbitrary number of voltage levels. The MLI has at least three voltage levels; the quality of a power converter's output is judged by the consistency of its current and voltage waveforms [111]. The multilevel inverter (MLI) concept for generating an AC signal does not rely on only two voltage levels, as shown in Figure 28. Instead, the output is synthesized as a staircase that steps through several voltage levels, with a small dv/dt and less harmonic distortion; the more inverter voltage levels, the smoother the waveform (a minimal quantization sketch illustrating this follows Table 2). However, with many levels, the design becomes more complicated, involves more parts, and needs a more complex inverter controller [112]. Multilevel converters are often favored for their high-voltage operating capability, high efficiency, low switching losses, operation at both fundamental and high PWM switching frequencies, and low EMI [113]; the system as a whole, however, becomes more costly. Consequently, the emphasis is on reducing circuit complexity via a reduction in the number of switches and gate driver circuits [113,114]. Figures 28 and 29 show two-, three-, five-, and seven-level waveforms, respectively. Table 2 below lists the differences between traditional and multilevel inverters. It should be noted that, in this context, the term "conventional converter" refers to a converter with fewer than three levels.
Table 2. Comparison between multilevel and traditional converters.
Harmonics: the traditional converter output has high harmonics; the multilevel converter output has low harmonics.
Voltage: the traditional converter is not allowed for use in high-voltage applications; the multilevel converter can be used in high-voltage applications.
Level of voltage: the traditional converter cannot generate high voltage levels; the multilevel converter can generate high voltage levels.
Stresses: the voltage stresses on the switches are greater in the traditional converter and lower in the multilevel converter.
Losses of switching: switching losses are higher in the traditional converter and lower in the multilevel converter.
Switching frequency: the switching frequency is high in the traditional converter and low in the multilevel converter.
Rate of change: the rate of voltage change (dv/dt) is high in the traditional converter and low in the multilevel converter.
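As a minimal numerical sketch of the smoothing effect summarized in Table 2, the code below quantizes a sine reference onto N equally spaced levels and prints the RMS error, a rough stand-in for distortion; the level counts are illustrative.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 5000, endpoint=False)
ref = np.sin(2.0 * np.pi * t)                 # ideal sinusoidal target

def staircase(levels):
    """Quantize the sine reference onto `levels` uniform voltage steps."""
    step = 2.0 / (levels - 1)
    return np.round(ref / step) * step

for levels in (3, 5, 7, 9):
    err = staircase(levels) - ref
    print(f"{levels}-level output: RMS error = {np.sqrt(np.mean(err**2)):.3f}")
```

The error falls steadily with the level count, mirroring the lower harmonics and smaller dv/dt claimed for multilevel inverters.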
Classification of Multilevel Inverters
MLIs are important for medium and high voltage usage due to their capability of synthesizing a sinusoidal voltage from several different DC levels [115]. Although there are many MLI topologies, they are generally classified into neutral-point-clamped (diode-clamped), cascaded, and flying capacitor (capacitor-clamped) types, as shown in Figure 30 [116][117][118][119].
Diode-Clamped Multilevel Inverter
The neutral-point-clamped (NPC) PWM topology is the first practical generation of the MLI (Figure 31); in this multilevel inverter, the device voltages are clamped using clamping diodes, which helps to reduce the voltage stress on the electronic devices. Considered the first-generation MLC, the three-level NPC was first introduced in 1981 by Nabae et al. [110,111]. Nonetheless, this topology suffers technical problems when used in high-power converters. It needs high-speed clamping diodes that are subject to reverse-recovery stress, and, due to the series connection of the diodes, design complexity is a key concern. The maximum output voltage is half of the input DC voltage; this limitation can only be overcome by increasing the number of components, such as switches and diodes [113,[120][121][122]. This topology, like the others, has advantages and drawbacks. One benefit is that the control technique is basic and the topology can be utilized in back-to-back inverters. Another positive point is that the switch voltage is just half the DC-link voltage, and the distortion declines as the number of levels rises. Furthermore, the efficiency is high at fundamental-frequency switching, and the capacitors can be pre-charged. Despite these positive features, the diode-clamped multilevel inverter has some weak points, such as the need for more clamping diodes as the number of levels increases; moreover, when control and monitoring are not accurate, the DC-link capacitor voltages can become considerably unbalanced [113,120].
Capacitor-Clamped (Flying Capacitor)
This is another type of multilevel inverter [123,124]; it is similar to the diode-clamped inverter described in [125,126] but uses capacitors instead of diodes to clamp the device voltage [123]. The flying capacitor (FLC) converter was introduced in the 1990s by [124,127] as another modification of the multilevel inverter topology. Its construction involves series-connected capacitor-clamped switching cells, allowing limited voltages to be applied to the electrical devices through the capacitors. It differs from the diode-clamped MLI in having capacitors rather than diodes; the role of the capacitors is to divide the DC supply voltage, and V_DC represents the voltage across each switch and capacitor. An m-level flying capacitor inverter needs (2m − 2) switches and (m − 1) capacitors [113].
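A one-function sketch of the component counts quoted above for the flying capacitor topology:

```python
def flying_capacitor_parts(m):
    """Counts quoted in the text for an m-level flying capacitor
    inverter: (2m - 2) switches and (m - 1) clamping capacitors."""
    return {"switches": 2 * m - 2, "capacitors": m - 1}

for m in (3, 5, 7):
    print(f"{m}-level: {flying_capacitor_parts(m)}")
```

The capacitor count grows linearly with the level count, which is the bulk and cost concern raised in the list below.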
The DC side of this topology has a ladder-structured capacitor bank, and the voltage across each neighboring capacitive branch differs. This voltage difference determines the size of each output voltage step, which is equivalent to V_DC. In contrast to the diode-clamped multilevel converter, the major benefit of the flying capacitor inverter is that it has redundancies for the internal voltage levels; in other words, the same output voltage can be obtained with two or more different valid switch states. Furthermore, the flying capacitor topology has phase redundancy, while only line redundancy is observed in diode-clamped inverters. This is an important feature because it is the basis for capacitive branch voltage balancing control techniques. The output voltage level determines the number of redundant states; in this kind of MLC, the capacitor voltages must be kept balanced, which can be achieved by using appropriate regulation sequences so that the internal capacitors charge and discharge evenly over time. Nonetheless, there are some drawbacks, as follows [25,128]:
• Voltage level control is difficult for all the capacitors, and pre-charging all the capacitors to the same voltage level is a complex task.
• The switching efficiency is low.
• Compared with the clamping diodes of a diode-clamped MLI, the many capacitors used are mostly more expensive and bulkier.
A flying capacitor multilevel topology is shown in Figure 32.
Cascaded H-Bridge
Baker and Bannister [129] patented the first converter topology capable of producing multilevel voltages from dedicated DC voltage sources in the mid-1970s. The cascaded H-bridge converter was suggested by [130] to eliminate the drawbacks of the FLC and NPC topologies, such as the additional clamping diodes and capacitors. The cascaded H-bridge MLI requires fewer parts per switching stage than the diode-clamped and flying capacitor inverters. The group of switches and capacitors in a cascaded H-bridge MLC, together with its isolated DC voltage source, is termed an H-bridge [120,131]. This topology utilizes more than one DC source in the H-bridge inverter, and each cell contributes to the output at various levels, as shown in Figure 33 [114], owing to the connection of different power conversion cells. Each H-bridge is made up of two pairs of switches and a capacitor, receives a separate input DC voltage, and generates a sinusoidal voltage output. Series-connected H-bridge cells are used in the inverter, each producing three output levels: zero, negative DC voltage, and positive DC voltage. The total output voltage is the sum of the voltages produced by the individual cells; if the number of cells is m, the number of output voltage levels will be (2m + 1), as illustrated in the sketch after the lists below. Figure 33 depicts the structure of a five-level H-bridge inverter [113]. The study by Lai and Peng (1997) focused on the peculiarities of the NPC and FLC topologies and was patented in 1997. Cascaded H-bridge MLCs (CHBMLCs) have since attracted great interest in several applications due to their attractive characteristics, such as:
• Ease of packaging and assembly.
• Producing low common-mode voltage, reducing stress.
• Small distortions in the input current.
• Operation at both fundamental and high switching frequencies.
• Very small THD in the output waveform without any filter circuit.
However, the inverter has drawbacks, such as:
• Each module needs a separate DC source or capacitor.
• The large quantity of capacitors requires a more complex controller.
Typical application areas include renewable energy battery systems, motor drive systems, power factor compensators, and electric vehicle drives.
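The level-count sketch promised above: each H-bridge cell contributes +V_dc, 0, or −V_dc, so m series-connected cells synthesize 2m + 1 distinct levels.

```python
from itertools import product

def chb_levels(m_cells):
    """Distinct output levels of m cascaded H-bridge cells, each
    contributing -1, 0, or +1 times its (equal) DC source voltage."""
    return sorted({sum(states) for states in product((-1, 0, 1), repeat=m_cells)})

for m in (1, 2, 3):
    levels = chb_levels(m)
    print(f"{m} cell(s): {len(levels)} levels -> {levels}")
```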
Structural Hybrid of Reviewed Converters
There are several hybrid topologies reported in the literature. The aim of developing hybrid topologies is to achieve an efficient system that combines the features of the constituent converters. This section reviews the combination of each pair of the three converter families highlighted in the previous sections. The hybrid structures discussed here are: multilevel bidirectional, multilevel resonant, and bidirectional resonant converters. The applications of these hybrid converters, especially for energy storage and grid integration, are discussed in the next section.
Bidirectional Resonant Converters
These converters are normally used as a link between energy storage components and grid utilities or a DC bus [132]; they can also be used in EVs, microgrids, and vehicle-to-grid (V2G) systems. BDCs are normally connected to energy storage components such as supercapacitors or batteries, whose voltage varies widely during operation; hence, conventional BDCs are not considered efficient over a wide voltage range [133,134]. More attention has been given to the dual active bridge (DAB) converter because of its simple structure, high efficiency, safety, and wide soft-switching range [135]. BDCs can also be realized with a current-fed inverter on the low-voltage side of the transformer and a voltage-fed inverter on the high-voltage side. However, conventional converters are associated with certain issues, such as switching losses and the inability to achieve soft switching under wide load changes; resonant-based bidirectional DC-DC converters have been suggested to address these issues [136,137]. Various studies have focused on altering the resonant network structure to improve the performance of resonant-type IBDCs. Most of the studies have concentrated on CLLC- and CLLLC-type resonant IBDCs because of their symmetrical behavior in both power flow directions. The study by [134] proposed bidirectional designs of CLLC and CLLLC resonant converters that ensure high-efficiency operation over a wide gain range. The attributes of LLC resonant converters include low cost, high power, and better efficiency, which make them suitable for industrial applications [138]. The most popular bidirectional isolated converters in industry are series resonant converters (SRCs) because they can realize open-loop bidirectional power flow operation. LLC resonant tanks are considered better options for improving output voltage regulation under bidirectional operation, although their use in bidirectional converters demands two different modes of operation depending on the power flow direction [139]. The study by [140] presented a light electric vehicle fast charger with a lithium-ion battery and supercapacitor, together with an AC/DC rectifier and a full-bridge LLC resonant converter, as shown in Figure 34, charging the light electric vehicle (LEV) battery. The research suggested a fast-charging system using a soft-switched LLC resonant converter operating in the ZVS region. Compared to the past, the regular use of lead-acid batteries has been reduced due to their emissions into the air and environment; therefore, new LEV systems with fast-charging mechanisms have been proposed using an 800 Wh lithium-ion battery and a 50 Wh supercapacitor.
Multilevel Bidirectional DC-DC Converter
Due to the ability of multilevel converters to reduce power losses, the voltage stress of switches, and the total harmonic distortion (THD) of the output signals [141], they have been widely developed and tested in various applications to achieve benefits such as good efficiency and optimal cost. The study by [142] presented a new multilevel boost-buck DC-DC converter that can be used when a common ground is needed on both sides of the converter. The topology comprises a back-to-back connection of 2n diode-clamped or active-clamped converter legs. The efficiency of this MLC structure is higher at a higher number of levels.
The study presented by Shukla focused on eliminating the DC capacitor voltage imbalance, which has been the major problem of diode-clamped MLIs with more than three levels. The study proposed connecting a voltage-balancing circuit based on a buck-boost chopper to the DC-link of diode-clamped MLIs; this was considered effective in addressing line faults, disturbances, and transients, even though adding such a circuit increases the cost and complexity of the system [143]. Considering that the HBMLC topology demands separate voltage sources, it is suitable for EVs and plug-in hybrid EVs, as these systems rely on a high number of batteries. Another interesting feature of the MLI is that it can operate in a bidirectional mode, which allows the charging and discharging of the batteries [141,144]. A neutral-point diode-clamped (NPC) dual active bridge (DAB) converter was designed in [145], and the study applied a capacitor voltage balancing method to facilitate voltage balancing. The two NPC legs, as shown in Figure 35, can generate five-level voltages across the primary winding of the medium-frequency transformer. Balancing methods usually depend on the direction of power flow through the DAB converter, but in this research the authors suggested a voltage balancing controller that is independent of the power flow direction and does not require adjustment of the active voltage vectors by the modulator. They concluded that the proposed voltage balancing approach is effective under all operating conditions of the converter [145].
Multilevel Resonant Converters
To meet the demands of high-voltage operation, more novel structures are needed in high-power applications to achieve better performance. Multilevel structures can reduce the voltage stress of power devices, which makes them a good alternative [146]. In DC-DC conversion, resonant converters are attractive candidates owing to their soft-switching attributes over a range of operating conditions. In such situations, LLC resonant converters are better options due to advantages such as ZCS for the rectifier diodes and ZVS for the power switches. A multi-phase multilevel LLC modular resonant converter that realizes ZCS for rectifier diodes and ZVS for primary switches, and that decreases the voltage stress to 1/n of the input voltage over a range of load and input voltages, has been proposed to meet the demands of high-power and high-voltage operation; it was also intended to improve the power processing capability of LLC resonant converters in multilevel operation [147]. With this approach, the voltage stress on the power devices is reduced, allowing low-voltage-rated MOSFETs to be used. With low-voltage-rated MOSFETs, conduction losses are reduced for a given die area, and the switching losses remain low thanks to ZVS [147]. Achieving ZVS in multilevel operation also permits a reduction in the dead time and the magnetizing inductance current, because less energy is stored in the parasitic capacitances of the MOSFETs, thereby increasing the converter efficiency. The study by [148] presented a new two-phase interleaved flying capacitor LLC resonant converter in which multilevel and multi-phase operation are combined. The flying capacitor serves to reduce the voltage stress on the primary-side MOSFETs, to balance the currents generated by the two phases, and to improve the power processing capability of the system. Hence, the converter exhibits all the positive attributes of conventional LLC converters while retaining low sensitivity to mismatches in the resonant tank parameters without requiring extra components; this makes the system suitable for high-output-current applications. A multilevel flying-capacitor-based single-stage AC-DC LLC topology has been proposed to address the dilemma of DC-bus capacitor voltage balancing and to lessen the voltage stress on the switching devices. The suggested three-level inverter configuration ensures zero voltage switching (ZVS) for all the switches, lower circulating currents, less switching stress, and lower losses. For improved efficiency, the converter employs a bridgeless rectification approach, and the power factor is practically unity thanks to operating the source-side inductor in discontinuous current conduction mode. The output voltage of the converter is regulated using variable switching frequency control, while the DC-bus voltage is regulated via pulse width modulation. This control technique maintains an almost constant DC-bus voltage throughout a wide range of line and load variations. Figure 36 shows the three-level flying capacitor LLC resonant converter [149]. The study by [150] presented a five-level cascaded LLC resonant converter; the design and analysis of the system were presented for use in energy storage systems (ESSs) or grid-connected applications. The proposed converter operates in the same manner as a step-down converter and also achieves ZCS.
Multilevel Bidirectional Resonant Converters
The unique characteristics of multilevel inverters make them attractive for a variety of medium- to high-power, high-voltage applications. Multilevel inverters have several benefits, such as reduced voltage stress, reduced losses in the individual semiconductor chips, improved power efficiency, and reduced electromagnetic interference. The most significant challenges in multilevel inverters are voltage balancing, the increased number of semiconductors and capacitors, and more complicated control. The best structure can be chosen from a variety of classic and advanced multilevel inverter configurations; the most appropriate option is determined by the application, load, and requirements [151]. Although many papers have investigated different topologies, modulations, and control techniques of multilevel inverters with bidirectional or resonant converters, only a few studies have introduced the combination of these three converter families; this section briefly reviews the existing literature on such structures. The study by [152] introduced the bidirectional multilevel resonant switched capacitor converter; this work presented a method that significantly improves the proposed converter's efficiency through topology modification. In a multilevel topology, the converter can achieve a high voltage ratio, which reduces the voltage stress on the switches and allows for high-power performance. The converter can be used as a link between DC voltage systems in several applications. Furthermore, the primary objective of the research was to verify the achieved topology improvement using silicon carbide (SiC) and silicon (Si) switches in a four-level bidirectional resonant switched capacitor converter with a 0.5/2 kV voltage conversion ratio. The proposed converter was composed of a basic switched-capacitor (SC) structure that makes energy transfer in both directions possible. The study concluded that the voltage gain was more stable and the method was efficient with Si MOSFETs; the system also showed a negligible decrease in efficiency as the power increased. A bidirectional modular multilevel resonant (BMMR) DC-DC converter has been proposed for linking medium-voltage and low-voltage grids; the desirable features of BMMR converters include:
• Bidirectional power conversion ability.
• A modular structure with manufacturable, standardized submodules, and fault-tolerant operation that is simple to achieve.
Owing to these characteristics, the BMMR converter is a strong contender for DC distribution applications. The authors presented an analysis of the converter's operating principles and submodule voltage balancing in steady state. The proposed topology and control methods were verified using a 500-kW simulation model and experiments on a down-scaled prototype, where a half-bridge modular multilevel structure was employed on the medium voltage (MV) side and a full-bridge structure on the low voltage (LV) side [153].
A soft-switching bidirectional three-level DC-DC converter was presented by [154], which, when combined with a simple auxiliary circuit, can effectively minimize switching power losses. To reduce the turn-on switching losses of the main power switches, the researchers utilized double LC resonant circuits. In comparison to previous converters that work without any auxiliary circuits and suffer from high switching losses, the proposed converter reduces switching power losses and improves power performance, outperforming conventional converters in terms of power efficiency. At the rated load, it achieved power efficiencies of 97.7% in step-up mode and 97.8% in step-down mode.
The study made the following conclusions:
• The proposed converter can be used in grid-connected battery energy storage devices that need high power density and performance. High-performance inverters, such as neutral-point-clamped three-level inverters, can be interfaced with the proposed converter thanks to its three-level power conversion structure.
• The proposed converter can be used as a bidirectional DC-DC converter in both single-phase and three-phase grid-connected applications.
• The proposed converter is intended to be a good fit for the interface between the battery and the grid-connected inverter, allowing for high-efficiency electrical power exchange and energy conversion management.
An isolated multilevel DC/DC converter for a power cell in a solid-state transformer (SST) that converts a medium-voltage (MV) AC input to a low-voltage (LV) or medium-voltage DC output has been studied. The AC/DC stage uses three-level diode neutral-point-clamped (DNPC) legs to withstand the higher voltage at the AC input terminals. The high-frequency, high-efficiency design requires soft switching over the whole operating range. To achieve this aim, the LLC resonant topology is utilized in this configuration for isolated power conversion, owing to its high efficiency and high power density. The DNPC is first considered as the primary-side topology for the LLC resonant circuit since it has the same structure as the phase legs of the AC/DC stage. The LLC circuit with a three-level DNPC as the primary side and a two-level full bridge as the secondary side is shown in Figure 37. The transformer in the DC/DC stage needs to provide medium-voltage isolation at a high power rating [155].
Applications
Reducing the impact of greenhouse gases (GHGs) by cutting CO2 (carbon dioxide) and other emissions is one of the most daunting dilemmas that people currently face. As electricity generation is one of the key sources of emissions, it is important to find alternative methods for producing clean electricity. In this regard, renewable energy resources, such as wind and solar energy, are among the most significant replacements for conventional fuel-based energy production. Nevertheless, when these resources are employed alone, the intermittent nature of most renewable sources does not permit a consistent and continuous supply of energy, and the fluctuations in the generated energy could hamper the power quality of the associated grid. To overcome such fluctuations, energy storage systems should be used to deliver smooth power while maximizing the energy output of the RES. This also minimizes the needed contribution from conventional power systems, thereby keeping CO2 emissions low. The utility grid needs to be technologically advanced in terms of grid stabilization, power quality, frequency and voltage support, load shifting, system reliability improvement, and smoothing of the energy output of renewable resources. In addition, restructured power markets offer chances to leverage energy arbitrage and generate revenue by purchasing off-peak low-cost electricity and selling high-cost peak electricity [156]. Table 4 shows some applications of different converters.
Table 4. Applications of different converters.
• Converter type: Bidirectional DC-DC converter (non-isolated, isolated, and interleaved topologies). Application: Microgrid energy storage. Description: Bidirectional DC-DC converter topologies relevant to microgrid energy storage, including non-isolated, isolated, and interleaved topologies, have been addressed. While it is critical to select an appropriate bidirectional converter topology to provide efficient power transfer between the DC bus and the storage units, it is also critical to select a suitable control approach to ensure the microgrid's resilience, reliability, and stability [157].
• Converter type: Boost bidirectional DC-DC converter. Application: PV applications. Description: The proposed topology for harnessing and storing PV energy is derived by combining a buck-boost converter with a bidirectional boost converter. It also stores energy so that it can be used in the event of a power outage. The suggested converter has a higher conversion gain, is simple to use, and its control method is adjustable by altering the direction of the power flow. The converter is equipped with two unidirectional ports and one bidirectional port; at the bidirectional port, a battery is installed for energy storage via the bidirectional boost converter [158].
• Converter type: Three-phase bidirectional DC-DC converter. Application: Uninterruptible power supplies (UPS). Description: UPS typically require bidirectional DC-DC converters to interface batteries to a DC bus. Three-phase DC-DC converter topologies for higher power and current applications produce less ripple current and less current stress on the devices. Low ripple current is an especially essential attribute since excessive ripple current can shorten battery life in UPS systems [159].
• Converter type: LLC resonant DC-DC converter. Application: Battery charging. Description: The LLC converter is designed for an output voltage range of 15-20 V for a lead-acid battery with an input of 30 V and an efficiency range of 88-92%. The circuit is simulated using PSIM software, and the output voltage is regulated with the help of a PI controller in the feedback path [160].
• Converter type: LC resonant DC-DC converter. Application: Grid-connected renewable energy sources (RESs). Description: An LC resonant converter was proposed for grid-connected renewable energy sources; the system uses a step-up resonant converter for grid-connected RES. The converter delivers power by charging from the input and discharging at the output via the resonant inductor. The resonant capacitor is used to accomplish zero-voltage turn-on/off and zero-current switching for the active switches and ZCS for the rectifier diodes [161].
• Converter type: LCC-T resonant DC-DC converter. Application: PV/fuel cell applications. Description: An isolated soft-switching current-fed LCC-T resonant DC-DC converter for fuel cell/PV applications has been presented. This converter is capable of achieving zero-current switching for the voltage-doubler diodes and zero-voltage switching for the front-end inverter switches. To obtain the rated output voltage of 380 V, a proof-of-concept prototype rated for 288 W was created and subjected to harsh load and input conditions.
• Converter type: Five-level cascaded multilevel inverter. Application: Renewable energy (PV and wind systems). Description: A high-frequency-link five-level cascaded medium-voltage converter has been proposed for direct grid integration of renewable sources to minimize voltage imbalance and common-mode issues. A 1.73-kVA prototype system was created with a modular five-level cascaded converter that converts 210 V DC (rectified generating voltage) to three-phase 1 kV RMS, 50 Hz AC.
• Converter type: Seven-level hybrid cascaded H-bridge multilevel inverter (HCMLI). Application: Hybrid electric vehicles (HEVs). Description: The suggested seven-level hybrid cascaded H-bridge enables the use of a single power supply, with the remaining 'n−1' sources consisting of capacitors. Due to the great number of output levels, the HCMLI produces high-quality output power while maintaining high conversion efficiency and minimal thermal stress thanks to its fundamental-frequency switching method. By substituting a fuel cell for the capacitance source, which is incompatible with high-temperature operation, and making the necessary modifications, this topology can be applied in HEVs [164].
• Converter type: Five-, seven-, and fifteen-level cascaded H-bridge inverter. Application: Electric vehicle charging/discharging (EV). Description: The proposed system is like an LLC resonant converter but differs by the addition of an extra capacitor and inductor on the circuit's secondary side for bidirectional operation. This makes the resonant network symmetric for operation in both forward and reverse directions. Both ZCS and ZVS reduce losses and support operation at high frequencies; this reduces the size of the filter capacitors and magnetic elements, thereby reducing the volume, weight, and size, as well as increasing the power density of the system [166].
• Converter type: Half-bridge/full-bridge bidirectional CLLC resonant converter. Application: On-board charger of electric vehicles. Description: The forward and reverse output power capability of the CLLC topology has been investigated. It was discovered that the forward and reverse output power capability of the CLLC converter is asymmetrical, which makes it difficult to fully meet the application requirements of an on-board charger (OBC). An OBC is a power conversion device that converts the grid voltage into a voltage suitable for charging the lithium-ion batteries of an electric vehicle. Therefore, a CLLC resonant converter with a modified half-bridge and full-bridge structure attained forward 6.6-kW and reverse 3.3-kW power conversion. Both the highest and lowest battery voltages can achieve an output power of AC 3.3 kW; 330 V adopts the full-bridge and 480 V adopts the half-bridge topology.
• Converter type: Full-bridge bidirectional LLC resonant converter. Application: Light electric vehicle (LEV). Description: The research offered a fast-charger prototype for an LEV equipped with an 800 Wh lithium-ion battery and a 50 Wh supercapacitor. The prototype comprises an AC/DC rectifier and a DC/DC stage. The DC/DC stage incorporates a full-bridge LLC resonant converter for soft switching and stress reduction. It is controlled by pulse frequency modulation (PFM) and a PI controller using a constant-current/constant-voltage charging method [140].
• Converter type: Five-level inverter full-bridge bidirectional converter. Application: On-board EV battery chargers in smart grids. Description: The development of a new on-board EV battery charger (EVBC) has been proposed based on a bidirectional multilevel topology. The proposed design evolved from the standard full-bridge rectifier by connecting four devices to the split DC-link as a power factor correction (PFC) three-level DC-DC converter; this design can generate five different voltage levels. An AC-DC converter is incorporated as the grid-side interface, and a DC-DC converter serves as the battery-side interface. A split DC-link interfaces the two converters to ensure different voltage levels in both converters. The evaluation results validated the bidirectional operation of the new EVBC on a multilevel topology. Considering that the aim of controlling the EVBC is to integrate EVs into smart grids, discussions and evaluations are based on the grid-to-vehicle (G2V) and vehicle-to-grid (V2G) modes of operation [168].
• Converter type: Five-level cascaded H-bridge LCC resonant converter. Application: X-ray applications. Description: High voltages are needed in many medical and industrial applications, such as X-ray generation and electron beam welding; these applications also require large power variations. In X-ray applications, the power available to the X-ray tube varies significantly (<1 kW to 100 kW) depending on the radiographic technique employed or the type and thickness of the material. A new LCC multilevel topology has been proposed to address the differences between radioscopy and fluoroscopy: fluoroscopy requires long exposure to low-power radiation, while radioscopy involves short exposure to high-power radiation. Since the converter design must consider the maximum power range, the efficiency would otherwise be poor at low power levels; the proposed topology addresses this problem [169].
• Converter type: Half-bridge boost bidirectional LCC resonant DC-DC converter. Application: Microgrid application. Description: A non-isolated soft-switching bidirectional DC/DC converter for interfacing energy storage in a DC microgrid has been offered. The proposed converter incorporates a half-bridge boost converter at the input port, an LCC resonant tank to aid soft switching of the switches and diodes, and a voltage-doubler circuit at the output port to double the voltage gain. Additionally, the LCC resonant circuit provides the converter with a proper voltage gain. Thus, the converter's overall high voltage gain is achieved without the use of a transformer or a large number of multiplier circuits [170].
• Converter type: Two three-level full-bridge CLLC resonant converters. Application: Off-board EV charger. Description: A novel three-level CLLC resonant converter for an off-board EV charger has been suggested to accomplish bidirectional power transmission between the DC microgrid and the EV. The proposed converter adapts to the wide voltage range of EVs, from 200 to 700 V, by inserting resonant CLLC components and combining the working modes of the two three-level full bridges. Due to the totally symmetrical structure, this converter can work in both power directions, G2V and V2G [171].
Challenges and Future Perspectives
Numerous technical, economic, and other challenges must be overcome in order to make DC-DC and DC-AC converters popular and successful in a variety of applications. In this section, the challenges that may face some of the mentioned converters in terms of configuration and design are discussed.
•
The DAB-IBDC is one of the types of bidirectional converters mentioned above; owing to advantages such as easily implemented soft switching, bidirectional power transfer capability, and a modular, symmetric structure, DAB-IBDCs have garnered increasing attention in recent years. Numerous studies worldwide have concentrated on fundamental characterization, topology and soft-switching solutions, control strategies, and hardware design and optimization. The design and performance optimization of DAB-IBDCs based on silicon carbide/gallium nitride (SiC/GaN) power devices, as well as system-level DAB-IBDC solutions for high-frequency-link (HFL) power conversion systems, will be the trend in the future, with other major concerns including: (1) electrical optimization design methods for the topology, electrical parameters, and control approach of DAB-IBDCs based on SiC/GaN power devices, in order to fully exploit their high-temperature, high-frequency, and low-loss properties; (2) mechanical optimization design methods for DAB-IBDCs based on SiC/GaN power devices, in order to further increase the efficiency, power density, modularity, and reliability of HFL power conversion systems; and
(3) multipurpose, modular, and intelligent HFL power conversion system solutions with high efficiency and high power density, which utilize DAB-IBDCs as the core circuit [172].
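Much of this optimization work builds on the classical single-phase-shift (SPS) power-flow relation of the dual active bridge. The sketch below evaluates that textbook relation numerically; all component values are hypothetical, not taken from any cited design.

```python
import numpy as np

def dab_power(V1, V2, n, fs, L, phi):
    """Classical single-phase-shift (SPS) power flow of a dual active bridge:
    P = n*V1*V2*phi*(pi - |phi|) / (2*pi^2 * fs * L),
    with phi the inter-bridge phase shift in radians (-pi .. pi)."""
    return n * V1 * V2 * phi * (np.pi - abs(phi)) / (2 * np.pi**2 * fs * L)

# Hypothetical 400 V / 48 V DAB switching at 100 kHz
V1, V2, n = 400.0, 48.0, 400 / 48   # port voltages and transformer turns ratio
fs, L = 100e3, 20e-6                # switching frequency [Hz], leakage inductance [H]

for phi_deg in (15, 30, 45, 60, 90):
    phi = np.radians(phi_deg)
    print(f"phi = {phi_deg:3d} deg -> P = {dab_power(V1, V2, n, fs, L, phi)/1e3:6.2f} kW")
```

Maximum power occurs at a 90-degree phase shift, which is why practical designs operate well below it to retain control margin.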
• Today's power converters must deliver increased power while maintaining high efficiency over a wide load range. Thanks to zero-voltage switching (ZVS) for the primary-side MOSFETs or IGBTs and zero-current switching (ZCS) for the secondary-side power devices, the LLC resonant converter is a topology that can address these challenges and is advantageous in front-end DC-DC conversion applications. Additionally, it has a narrow switching-frequency range that facilitates control, a quick transient response, and low cost, because the leakage inductance of the transformer serves as the resonant inductor [173].
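As a quick numerical illustration of the tank just described, the two characteristic frequencies of an LLC stage follow directly from the tank elements; the component values below are hypothetical.

```python
import math

# Hypothetical LLC tank; Lr is often simply the transformer leakage inductance
Lr = 60e-6    # resonant (leakage) inductance [H]
Cr = 47e-9    # resonant capacitance [F]
Lm = 300e-6   # magnetizing inductance [H]

f_r1 = 1 / (2 * math.pi * math.sqrt(Lr * Cr))         # series resonant frequency
f_r2 = 1 / (2 * math.pi * math.sqrt((Lr + Lm) * Cr))  # lower resonance including Lm

print(f"f_r1 = {f_r1/1e3:.1f} kHz, f_r2 = {f_r2/1e3:.1f} kHz")
```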
•
The series resonant converter has become attractive for switched-mode power supplies, not only because running at a high switching frequency increases power density, but also because of the decreased switching losses. Beyond its historical and technological significance, the series resonant converter has spawned a large family of high-density resonant-mode DC/DC converters, which have been widely used in many practical applications [174]. The growing interest in resonant power converters in recent years suggests further exploration of these converters. Although resonant converters have some features superior to those of conventional converters, studies have identified issues that must be addressed going forward [16].
To address the challenges facing the resonant stage, regardless of its type and structure, the assigned and selected parameters must be considered: the voltage gain, frequency ratio, load values, and inductance ratio must be understood. Finally, future studies should focus on optimizing the control of such topologies.
•
Power flow. One-directional power flow can be handled with a traditional buck-boost converter, but in bidirectional converters power flows in both directions. With bidirectional DC-DC converters, voltage levels can be stepped up or down according to the flow-control capacity in either direction. With respect to galvanic isolation, there are two types of bidirectional converter: the isolated bidirectional converter and the non-isolated bidirectional converter (NIBDC) [35,175-177]. NIBDCs require no high-frequency transformer to achieve electrical isolation between the load and the source; their efficiency in low-power applications is high because they are easy to control, and they are light in weight [55]. On the other hand, an isolated bidirectional DC-DC converter is necessary in several applications for the security of the source under overload conditions, as well as for noise reduction and voltage matching; it is therefore employed in place of NIBDCs [37].
•
Many studies have focused on combining a resonant network with IBDCs, including, but not limited to, SR-DAB, CLLC, CLLLC-type IBDCs, and LLC. CLLC and CLLLC resonant IBDCs have attracted considerable attention due to their symmetrical characteristics in both the forward and reverse power flow directions. On the other hand, this structure faces difficulties because charging efficiency degrades dramatically when the battery voltage varies, and the frequency of the discharging operation varies widely [134]. In the LLC-BDC structure, the LLC resonant converter behaves similarly to a conventional series resonant converter with a very tight gain range, even though this type has better features than the aforementioned types and is extensively used in industry, especially for reverse power transfer [138]; this challenge must be considered in future work. A bidirectional three-level LLC resonant converter at fixed modulation frequency has been proposed in [178]; the proposed system can achieve a broad gain range, although an accurate analytical gain model is not derived. The system also uses twelve switches, which makes achieving high efficiency over a wide gain range a challenge to consider.
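To make the "tight gain range" remark concrete, the following minimal sketch evaluates the standard first-harmonic-approximation (FHA) gain of an LLC tank; the inductance ratio and quality factor are illustrative values, not taken from the cited designs.

```python
import numpy as np

def llc_gain(fn, Ln, Q):
    """First-harmonic-approximation voltage gain of an LLC tank,
    normalized so that gain = 1 at the series resonance (fn = 1).
    fn = fs/fr, Ln = Lm/Lr, Q = sqrt(Lr/Cr)/Rac."""
    real = 1 + (1 / Ln) * (1 - 1 / fn**2)
    imag = Q * (fn - 1 / fn)
    return 1 / np.hypot(real, imag)

fn = np.linspace(0.4, 1.6, 7)          # normalized switching frequency sweep
for Q in (0.2, 0.5, 1.0):              # light to heavy load
    print(f"Q = {Q}:", np.round(llc_gain(fn, Ln=5.0, Q=Q), 2))
```

At heavy load (large Q) the achievable gain flattens quickly away from resonance, which is the practical origin of the narrow gain range discussed above.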
•
Multilevel inverters (MLIs) are introduced to counter the problems associated with two-level converters and to meet the following criteria: (i) the converter shall produce only a sinusoidal output voltage; (ii) the converter must have an output current of low THD. The advantage of MLIs is that the device voltage rating and the switching frequency can be significantly lower at the same output voltage than those of a conventional two-level converter; consequently, switching losses can be decreased remarkably, thereby improving efficiency [32,179-181].
•
Based on industrial applications such as drives, solar inverters, UPS, electric vehicles, and STATCOM, it is apparent that low-harmonic multilevel waveforms could attract industry to commercialize the most reliable and cost-effective topologies. It should be noted, however, that adding extra switches and auxiliary capacitors to the inverter structure increases the production cost, and adding new gate drivers, voltage-balancing strategies, and switching algorithms complicates the control circuit. For a topology to be selected by a corporation for commercialization, certain improvements are needed, such as a reduced size of the passive components, reduced switching frequency and switching losses, and increased reliability and efficiency. Industries are constantly attempting to find a balance between the number of levels and the complexity of design and operation [182]; they are looking for simplicity and high performance in their next generation of power electronic converters. The following are some possible future research and development challenges: (1) extending the use of single-DC-source MLI topologies from high-power to medium- and low-power applications, such as aerospace and aircraft, power supplies, home products, electrified transportation, solid-state transformers, and so on;
(2) developing single-DC-source MLI topologies at higher levels (more than five levels); in multiple-DC-source MLIs, one option is to replace the isolated DC sources with voltage-controlled capacitors;
(3) reducing the size of the auxiliary capacitors and the voltage ripples in single-DC-source topologies by implementing novel voltage-balancing techniques or external circuits; (4) designing resonant converters based on single-DC-source MLIs; and
(5) designing new switching techniques with integrated voltage-balancing algorithms to decrease the complexity of the controllers.
The selection of an appropriate multilevel inverter (MLI) topology has always been a challenge, because it should require fewer switches and isolated DC sources to generate a greater number of voltage levels. In comparison to multiple-DC-source MLI topologies, single-DC-source MLI topologies are now considered more suitable for many power system applications, such as renewable energy (RE) conversion systems and electrified transportation. Furthermore, using a single-DC-source MLI to increase the power rating and minimize the switching frequency while maintaining acceptable power quality is an important necessity and a persistent challenge for the industry [182].
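The level-count trade-off discussed here can be quantified for the common symmetric cascaded H-bridge arrangement, where k series cells per phase yield 2k + 1 output levels from 4k switches and k isolated DC sources (a textbook relationship; the tabulation below is only a sketch).

```python
# Symmetric cascaded H-bridge: k series cells per phase give
# 2k + 1 output voltage levels using 4k switches and k isolated DC sources.
print(f"{'cells':>5} {'levels':>7} {'switches':>9} {'DC sources':>11}")
for k in range(1, 8):
    print(f"{k:>5} {2*k + 1:>7} {4*k:>9} {k:>11}")
```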
•
In combining multilevel converters with bidirectional converters, the selection of a proper MLI topology and balancing the states of charge (SOCs) of the batteries are the two key challenges in MLI-based battery storage systems (BSSs). MLIs are used not only in large-scale renewable-energy-integrated grid applications, but also in electric vehicle (EV) charging, vehicle-to-grid (V2G), and EV operation. A challenging aspect of this field is the selection of an efficient and cost-effective MLI topology. The increasing use of MLIs in BSSs over the last ten years has advanced battery technologies, topologies, and control techniques. Voltage and SOC imbalances between battery packs can be a challenge in MLI-based BSSs, due to electrochemical variations between battery packs and the use of second-life batteries [183]. The study in [184] presented an up-to-date survey of existing state-of-the-art MLI-based BSSs, taking into account the most recent contributions in the field of battery technology. Because of their long lifespan and high power density, Li-ion batteries are increasingly being used in MLI-based BSSs. However, research on metal-air battery technology indicates that it is a viable candidate for EV applications due to its high energy density and low cost. MLIs are already known to be superior to standard two- or three-level converter schemes due to their low harmonic distortion, small scale, and lesser reliance on magnetic circuits, which accounts for the increasing interest in MLIs over the last decade [184]. In summary, promising research directions for future work include, but are not limited to: (1) the creation of new types of MLIs with fewer components; (2) improving the PWM system and/or SOC-balancing control to reduce the negative effects of voltage and SOC imbalances between battery packs (a minimal sorting-based sketch follows this list); and
(3) the incorporation of renewable energy sources, especially wind, into BSSs through MLIs to mitigate the negative effects of the intermittent nature of RESs.
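One widely discussed family of SOC-balancing controls, touched on in item (2) above, selects the cells to insert each switching period by sorting their states of charge; the following is a minimal sketch of that idea with hypothetical SOC values, not a specific published algorithm.

```python
def select_cells(socs, n_needed, discharging=True):
    """Sorting-based SOC balancing for an MLI-based battery storage system:
    each switching period, insert the n_needed cells with the highest SOC
    when discharging (lowest when charging), so that repeated selection
    gradually equalizes the packs.  Simplified sketch only."""
    order = sorted(range(len(socs)), key=lambda i: socs[i],
                   reverse=discharging)
    return order[:n_needed]

socs = [0.82, 0.78, 0.85, 0.80]        # hypothetical pack states of charge
print(select_cells(socs, n_needed=2))  # -> [2, 0]: the two fullest packs
```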
• A combination of both converters, as mentioned in Section 6, can yield better performance, but challenges remain. The study in [185] focused on the state of the art, challenges, and requirements of multilevel converters in industrial applications. To reduce energy waste and improve power quality, new highly efficient power electronic technologies and control strategies are required, and there is enormous potential for energy-efficiency improvement in electric motor-driven systems. Medium-voltage (MV) drives are widely used in several industries, such as the oil and gas industry, manufacturing plants, and process industries.
The topologies and control of line- and motor-side converters, as well as the power semiconductor switching devices, are all challenges in the design of MV drives. Three topologies have been successfully introduced as standard products for MV industrial drives [185]: the three-level NPC, the four-level FLC, and the five-level CHB. However, the challenges facing this combination include: (1) the suppression of LC resonance and power quality issues: for current harmonic reduction or power-factor compensation, an LC resonant circuit is used on the line side, where the capacitors form an LC resonant circuit with the system's line inductance; these lightly damped LC resonances can cause unwanted oscillations or overvoltages, which may destroy the power switches or other components in the rectifier circuits due to the low impedance of the MV supply; and (2) lowering the switching frequency generally increases the harmonic distortion of the drive's line-side and motor-side waveforms, so an optimal solution is sought; the challenge is to diminish waveform distortion while switching at a very low frequency (even below 100 Hz).
Finally, integrated bidirectional resonant DC-DC converters and multilevel inverters are expected to be well suited to industrial applications in the future. More studies are necessary to reduce the losses, number of components, size, and cost as far as possible.
Conclusions
The aspects of bidirectional and resonant DC-DC converters and multilevel inverters have been reviewed in this work to highlight the need for a combination of converters for grid-connected and energy storage applications. Each converter has been elaborated in various aspects, such as classification, advantages, disadvantages, and the ability to enhance energy conversion in modern energy systems. Based on this review, a soft-switching approach should be employed instead of hard switching to reduce or eliminate switching losses. RPC is a promising solution to the EMI that occurs in PWM converters; it can increase efficiency and minimize the number of components. Bidirectional DC-DC converters with galvanic isolation are more suitable for hybrid energy systems. In this study, comparisons have been made between different types of bidirectional converters and between multilevel and conventional inverters; the reviewed converters have also been compared in terms of their abilities to improve energy storage and grid-connection integration. Lastly, the challenges in developing an efficient, combined multilevel resonant converter that can work under high-voltage integration with soft-switching and bidirectional approaches have been elaborated. This article reviewed the significant contributions in the areas of bidirectional, multilevel, and resonant power converters with the intent of providing insight into the prospects of this study area. Producing and developing integrated bidirectional resonant multilevel converters is therefore expected to meet the requirements of several industrial applications, such as electric vehicle charging, vehicle-to-grid (V2G), and medium-voltage (MV) drives, owing to their merits of reduced cost and size and their ability to operate with high switching frequencies and less harmonic distortion.
"Engineering",
"Physics"
] |
Rotating Disk Galaxies without Dark Matter Based on Scientific Reasoning
The most cited evidence for (non-baryonic) dark matter has been an apparent lack of visible mass to gravitationally support the observed orbital velocity of matter in rotating disk galaxies, yet measuring the mass of celestial objects is far from straightforward, requiring theories derived from the known physical laws along with empirically established semi-quantitative relationships. The most reliable means of determining the mass distribution in rotating disk galaxies is to solve a force balance equation according to Newton's laws from measured rotation curves, similar to calculating the Sun's mass from the Earth's orbital velocity. Another common method of estimating galactic mass distribution is to convert measured brightness from surface photometry using an empirically established mass-to-light ratio. For convenience, most astronomers have commonly assumed a constant mass-to-light ratio for estimating the so-called "luminous" or "visible" mass, which is unlikely to be accurate. The mass determined from a rotation curve typically exhibits an exponential-like decline with galactocentric distance, qualitatively consistent with the observed surface brightness but often with a larger disk radial scale length. This fact suggests a variable mass-to-light ratio of baryonic matter in galaxies, without the need for dark matter.
Introduction
Based on scientific observations, many galaxies (including the Milky Way) appear to have the common visible shape of a thin disk, as shown in Figure 1. Known as a stellar system of an ensemble of stars and other masses, a disk galaxy (such as the Milky Way) usually contains 10^5 to 10^12 stars distributed in a flattened, roughly axisymmetric structure, rotating around a common axis in nearly circular orbits. Besides stars, the galactic "disk" is also known to contain an interstellar medium such as gases (mostly atomic and molecular hydrogen) as well as relatively small solid "dust particles". The general behavior of stellar systems, including disk galaxies, has been believed to follow Newton's laws of motion and Newton's law of universal gravitation [1].
Since their discovery in the 17th century, Newton's laws of motion have been used to successfully determine the relationship between a body of mass and the forces acting upon it, and its motion in response to those forces, for a great variety of situations and phenomena. When combined with his law of gravitation, Newton [2] could show (in terms of mathematical expressions) that Kepler's mysterious laws are actually consequences of his laws of motion. To date, there is no direct evidence suggesting a failure of Newtonian dynamics in describing the motions of celestial objects in stellar systems, although some relativistic effects may be present at the centers of galaxies [1]. According to Newtonian dynamics, the mass of an object can be determined from its motion, such as its acceleration in a gravitational field. If the mass distribution in a galaxy cannot be measured directly, it can be derived from the measured rotation curve, expressed in terms of the distribution of the objects' orbital velocity as a function of galactocentric distance, which may require some mathematical effort but should be a theoretically rather straightforward exercise. Such a derived mass distribution (from a rotation curve according to Newtonian dynamics) should be considered reasonable as long as the value of the mass does not run against any physical laws, e.g., by having a negative value or becoming infinite. However, in recent decades we have been told that about 83% of the mass of our universe is made up of some type of mysterious "dark matter", which cannot be detected by electromagnetic radiation or reaction, in contrast to any known substance whose properties are determined by available scientific methods. The reason for the belief in the existence of dark matter with its mysterious properties is inference from its gravitational effects on "visible" matter, radiation, and the large-scale structure of the universe [3]. Numerous articles have been published to investigate the elusive dark matter, with many books also written to describe such efforts [3-8], yet very few have attempted to examine the validity and certainty of the claimed evidence for the existence of the so-called dark matter, based on scientific reasoning with rigorous logic [9].
To understand the natural world, scientists acquire knowledge using the scientific method, which involves observation, formulating hypotheses via induction, experimental testing with quantitative measurements, and refinement or elimination of the hypotheses based on the experimental findings. If well supported by experimental measurements, a particular hypothesis may be further developed into a general theory. By scientific reasoning, we should inquire whether or not the evidence is consistent with a claim or theory, or whether the evidence supporting a claim could be invalid. Therefore, the claimed existence of the (mysterious) dark matter should have been put under rigorous scientific scrutiny before being made to sound as though it were well supported by observational evidence. Actually, deficient reasoning for dark matter in galaxies has been pointed out by examining the claimed evidence in the literature [9], though as a non-mainstream view.
It is understandable that, as human beings, scientists can be tempted to tell the mystery story of dark matter, which is far more effective at attracting press attention than simply describing the observed astronomical phenomena in terms of well-known Newtonian dynamics. When discussing the subject of dark matter, few authors bother to question whether the reasoning for dark matter might be invalid, whereas the majority would rather present strange models assuming the presence of dark matter. Nowadays, dark matter is so firmly believed to be present that the finding of "a galaxy lacking dark matter" can become quite a news-making story in the scientific community [10,11]. Yet scientists are expected to have a genuine passion for truth seeking.
In what follows, we first examine the nature of astronomical measurements, and technical challenges as well as certainty or uncertainty associated with them. The methods for determining mass of celestial objects are briefly reviewed next. Then, mass distribution in a rotating disk galaxy, determined with the available measurements, is discussed with explanations based on scientific reasoning without dark matter. Concluding remarks are provided in the final section.
Astronomical Measurements
The behavior of celestial objects, such as stars and galaxies, cannot be described without mentioning their masses, distances, and velocities of movement. It turns out that the distance between objects is the key to determining the mass and velocity of an object. Once the distance of an object is measured, its variation within a given time interval determines the object's velocity, and the object's relative position with respect to other objects can be determined; but measurements of astronomical distance have been quite challenging, with considerable uncertainties [9,12]. In fact, a recent analysis has shown a significant difference between a previously indicated distance of 20 Mpc and a presently determined 13 Mpc [13] (1 pc = 3.26 light-years = 3.08 × 10^16 m).
Space is actually huge. Astronomical objects are typically scattered in a vast space, separated by distances often measured in units of light-years (1 light-year = 9.46 × 10^15 m, the distance light travels in 1 year in vacuum). For example, the nearest star to our solar system is about 4.22 light-years away. The distance between the Sun and Earth is ~1.5 × 10^11 m, taking about 8.3 minutes for light from the Sun to reach us. Even our next-door neighbor, the Moon, is about 3.8 × 10^8 m away, a lot farther than most people would think. Only objects within our own solar system can our present spacecraft reach. For the most part, the cosmos is out of our reach, except that light traveling throughout the universe can bring information about distant objects to us on the Earth.
In astronomy, measurements are carried out almost exclusively by studying and analyzing the "light", or more generally the electromagnetic radiation, emitted or absorbed or reflected or scattered or transmitted by remote objects such as stars, galaxies, and so forth [14,15]. The emission and absorption line spectra can be used to determine the material composition, while the continuous thermal radiation spectrum can tell us the temperature of a remote object. The speed of a celestial object moving toward or away from us can be determined by the Doppler shift in the light spectral lines, which actually became the basis for measuring the rotation curve of galaxies.
For stars, their (surface) temperature, luminosity, and mass are among the most important properties. A star's surface temperature can be obtained fairly easily from its thermal radiation spectrum, or even simply from its color, which is (theoretically) not influenced by its distance. But measurements of a star's luminosity (the total amount of power it emits into space) from its apparent brightness (the brightness of a star as it appears to our eyes, or to a detector like a CCD) rely on the inverse square law for light, which directly involves its distance from us. Thus, determining the distance of a star becomes the key to determining its luminosity.
The most direct way to measure a star's distance is by stellar parallax, which uses the small angle of the annual shift of its position relative to distant background stars, as Earth moves from one side of its orbit to the other, to determine its distance. This is why the parsec or pc (corresponding to a "PARallax of one arcSECond"), a measure of tiny angles on the stellar sphere, has become the preferred distance unit of astronomers in the professional literature. Parallaxes may provide distances to stars up to a few thousand light-years away, i.e., in the solar neighborhood. But even the nearest galaxies and galaxy clusters are millions of light-years away, too far for measurement by parallax alone. So a system called the cosmic distance ladder has been created, based on overlapping methods for calculating successively larger distances [12].
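The two relations just described, the parallax rule d [pc] = 1/p [arcsec] and the inverse square law L = 4π d^2 b, can be checked numerically in a few lines; the parallax and brightness values below are hypothetical.

```python
import math

PC_M = 3.086e16                      # metres per parsec

# Stellar parallax: d [pc] = 1 / p [arcsec]
p_arcsec = 0.1                       # hypothetical measured parallax
d_pc = 1.0 / p_arcsec                # -> 10 pc
d_m = d_pc * PC_M

# Inverse square law: luminosity from apparent brightness, L = 4*pi*d^2*b
b = 3.2e-9                           # hypothetical apparent brightness [W m^-2]
L = 4 * math.pi * d_m**2 * b         # luminosity [W]

print(f"d = {d_pc:.1f} pc, L = {L:.2e} W ({L/3.828e26:.2f} L_sun)")
```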
Larger distances rely on so-called standard candles, as well as a technique known as main-sequence fitting, for their estimation, built on assumptions layered upon assumptions [12,14,15]; yet no astronomical object is a perfect standard candle, and the challenge of finding objects that can serve as standard candles leads to the challenge of measuring large astronomical distances. In short, uncertainties in astronomical distances can be significant [9], and they may not even be easily quantified, because of the multiple overlapping steps, each of which brings in its own uncertainties.
The distance of an astronomical object seems to be difficult to determine, yet it plays the key role in calculating the object's mass from the known physical laws. In other words, masses of astronomical objects cannot be determined without knowing the relative distances among them.
Mass Determined by Newtonian Dynamics-"Gravitational Mass"
Once the relative distances and velocities of motion of objects are known from measurements, each object's mass may be determined from Newton's laws. If we believe the forces among celestial objects are of gravitational nature (according to Newton's law of universal gravitation), the gravitational field of an object (which is proportional to its mass) can be determined by measuring the acceleration of a small nearby "test object". Then the object's mass can be determined from its gravitational field. For example, the Sun's mass can be determined using the Earth as a test object (which has a negligible mass compared to that of the Sun) by applying Newton's version of Kepler's third law, with the measured average distance between the Earth and the Sun, a (≈ 1.5×10^11 m), and the Earth's orbital period, p (≈ 3.15×10^7 s, i.e., 1 year) [14,15]. In other words, setting the centripetal acceleration of the Earth,

V^2 / a,    (1)

equal to the gravitational field of the Sun,

G M_sun / a^2,    (2)

where G (= 6.67 × 10^-11 m^3 kg^-1 s^-2) is the gravitational constant and M_sun the Sun's mass, yields the value M_sun ≈ 2.0×10^30 kg (= 1 solar mass M⊙). Here V denotes the Earth's (or the test object's) orbital velocity, V = 2π a / p. By the same token, stellar masses in a binary star system, consisting of two gravitationally bound stars orbiting around a common center of mass, can in principle also be determined with known separation a, or orbital velocity V, and orbital period p, based on the theory of Newtonian dynamics for the two-body problem. It has been shown that the two-body problem can be treated as an equivalent one-body problem in which the reduced mass m = m1 m2 / (m1 + m2) orbits a fixed mass M = m1 + m2 at a distance a = a1 + a2, where the subscripts "1" and "2" denote the masses and orbital radii of star "1" and star "2" [16]. In fact, the value of M_sun determined from the equation (1) = (2) is reduced from the solution of the two-body Kepler problem in the extreme case m1 >> m2, such that m → 0, a1 → 0, and M → m1 (= M_sun or 1.0 M⊙). Thus, the value of M (= m1 + m2) in a binary star system can be determined. With M known, the values of m1 and m2 can be determined from the relationship m2 / m1 = a1 / a2 = v1 / v2 (where v1 and v2 are the orbital velocities of the two stars) derived from the two-body problem. In reality, the distances a1 and a2 are not easy to determine accurately; instead, the values of v1 and v2 can be measured much more reliably from the measured Doppler shifts, especially for the so-called "eclipsing binaries", whose orbital planes lie very close to the line of sight [14,15].
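A quick numerical check of equating (1) and (2), using the values quoted above:

```python
import math

G = 6.67e-11        # gravitational constant [m^3 kg^-1 s^-2]
a = 1.5e11          # Earth-Sun distance [m]
p = 3.15e7          # Earth's orbital period [s]

V = 2 * math.pi * a / p          # Earth's orbital velocity [m/s]
M_sun = V**2 * a / G             # from V^2/a = G*M_sun/a^2

print(f"V = {V:.3e} m/s, M_sun = {M_sun:.2e} kg")   # ~2.0e30 kg, as in the text
```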
In a galaxy, a large number (10^5-10^12) of stars, with an interstellar medium of gas and cosmic dust, among others, are distributed over an extensive space, such as a thin disk of radius about 10 kpc (or 3.09×10^20 m). Simply adding the "point mass" fields of such a distributed stellar system with ~10^11 stars is impractical for computing the gravitational field in a typical galaxy, so it has become common practice, for most purposes, to model the gravitational field or potential "by smoothing the mass density in stars on a scale that is small compared to the size of the galaxy, but large compared to the mean distance between stars" [1], i.e., to treat the distributed mass system as a continuum, as a reasonable approximation.
Observations have shown that many astronomical systems, such as planetary systems, planetary rings, accretion disks, spiral galaxies, etc., appear flat (cf. Figure 1), for the basic reason that the state of lowest energy of a constant-mass system with a given distribution of angular momentum is a flat disk perpendicular to the rotation axis [1]. Therefore, it may not be unreasonable to approximately consider a galaxy as an axisymmetric rotating thin disk, shown in Figure 2, consisting of distributed self-gravitating mass (as a function of the galactocentric distance r) in balance with the distributed centrifugal force due to the distributed circular orbital velocity (as a function of r). Over the years, various mathematical methods have been developed for deducing the mass distribution from the measured rotation curve with the axisymmetric thin-disk model at mechanical equilibrium [1,17], each with its own pros and cons. Here a numerical method by Gallo and Feng [18-21] is briefly described, without loss of generality. At any point in a rotating axisymmetric disk galaxy with negligible disk-thickness effect, the centripetal acceleration of (related to the centrifugal force on) a test object,

V(r)^2 / r,    (3)

is expected to be equal to the in-plane gravitational field from the distributed mass in the entire disk,

G ∫_0^Rg ∫_0^2π [ρ(r') h] (r − r' cos φ) r' / (r^2 + r'^2 − 2 r r' cos φ)^(3/2) dφ dr',    (4)

where ρ(r) denotes the mass density, h the disk thickness, Rg the galactic radius (the galactocentric distance of the galaxy edge, taken as the cut-off radius of the rotation curve beyond which the detectable signal diminishes), Mg the total mass of the galaxy, and V0 the characteristic rotation velocity, taken as a representative value of the flat part of the rotation curve. All the variables in (3) and (4) are nondimensionalized using Rg, V0, and Mg as characteristic scales. Equating (3) and (4) has exactly the same physical meaning as equating (1) and (2): it is the force balance equation for determining the rotation velocity from a known gravitational field source (the amount of mass or mass distribution), or vice versa. For example, the Sun's mass can be determined from the known Earth's orbital velocity, as shown by equating (1) and (2). Similarly, when the galactic rotation curve V(r) is available from measurements, the mass distribution ρ(r) can be determined by solving the force balance equation obtained by equating (3) and (4), which involves some mathematical manipulation.
Among many different approaches to solving the force balance equation (3) = (4), Gallo and Feng [18-21] showed that, with slight algebraic rearrangement and with all variables made dimensionless, an equation can be obtained of the general form

∫_0^1 Q(r, r') [ρ(r') h] dr' = A V(r)^2 / r,    (5)

where the kernel Q(r, r'), given explicitly in [18-21], is expressed in terms of K(m) and E(m), the complete elliptic integrals of the first and second kind, with

m = 4 r r' / (r + r')^2.

The dimensionless parameter A in (5), called the galactic rotation parameter, is defined as

A = V0^2 Rg / (G Mg),    (6)

which can be determined by introducing a constraint equation for mass conservation,

2π ∫_0^1 [ρ(r') h] r' dr' = 1.    (7)

Equations (5) and (7) can be discretized by dividing the problem domain 0 ≤ r ≤ 1 into a large number, say N − 1, of small line segments called (linear) elements, leading to a linear algebraic problem in matrix-vector form: with V(r) known, the N nodal values of ρ plus A in (6) are solved from the N equations obtained by collocating (5) at the individual nodes, together with one equation based on (7) [18-21]. Conversely, the same matrix-vector equation can be used for calculating the rotation curve V(r) if the mass distribution ρ(r) is known or assumed given. The matrix-vector approach described here, as well as the implemented computational code, was validated by reproducing the known analytical solutions for the Mestel disk and the Freeman disk [20]. It also yielded a mass distribution for NGC 4736, based on measured rotation curves, comparable to that obtained by other authors using an iterative spectral method with Bessel functions [21]. Similar results from the equation (3) = (4) were also shown with a model using a lognormal mass distribution function [22]. By adding a spherical core at the galactic center, as can easily be implemented in this matrix-vector approach, the mass distribution can be computed without a central singular mass density for rotation curves with nonzero velocity at r = 0 [21]. Thus, the mass distribution ρ(r) (actually the surface mass distribution [ρ(r) h]) can be determined for any galaxy from its measured rotation curve V(r), according to Newtonian dynamics.
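Structurally, the discretization amounts to assembling an (N + 1) × (N + 1) linear system for the N nodal densities plus A. The sketch below illustrates only that structure: the kernel function is a schematic stand-in (the actual elliptic-integral kernel is given in [18-21]), and the flat rotation curve is purely illustrative.

```python
import numpy as np

def kernel(r, rp):
    # Schematic stand-in for the elliptic-integral kernel of [18-21];
    # peaked near r = rp, regularized to avoid the r = rp singularity.
    return rp / (0.05 + (r - rp)**2)

N = 40
r = np.linspace(1e-3, 1.0, N)          # dimensionless radii, 0 < r <= 1
w = np.gradient(r)                     # simple quadrature weights
V = np.ones(N)                         # flat (Mestel-like) rotation curve

# Unknowns: x = [rho_1 ... rho_N, A]
M = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
for i in range(N):                     # force balance collocated at node i
    M[i, :N] = w * kernel(r[i], r)     # integral term acting on rho
    M[i, N] = -V[i]**2 / r[i]          # -A * V(r_i)^2 / r_i moved to the left
M[N, :N] = w * 2 * np.pi * r           # mass conservation: total mass = 1
b[N] = 1.0

x = np.linalg.solve(M, b)              # one linear solve yields rho and A
rho, A = x[:N], x[N]
print(f"A = {A:.3f}; rho at mid-radius = {rho[N // 2]:.3f}")
```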
Since the rotation curve in a spiral galaxy can be measured with reasonable certainty [23], it has been accepted as providing the most reliable means of determining the distribution of gravitating matter therein [17]. However, a rotation curve V(r) implies axisymmetry and negligible variation across the thickness, and it is at best a piece of approximate information about the behavior of a rotating thin-disk galaxy, which is usually not exactly axisymmetric, having detailed asymmetric structures such as spiral arms. Hence, the mass distribution ρ(r) determined from a rotation curve V(r) with a thin-disk model provides a value only in the sense of an average over the ring of radius r, which may not be the same as the local mass density at a specific position on that very ring. Galaxies, though appearing like thin disks, also have "vertical" structures across the visible thickness. Therefore, taking the mass density predicted with an axisymmetric thin-disk model out of context, to compare with some measured value at a specific location inside a galaxy (e.g., the solar neighborhood in the Milky Way), can naturally lead to substantial discrepancy. Splitting hairs over such an expected discrepancy to discredit the thin-disk model can only confuse the outcome with technically immature arguments. Only with a thorough understanding of the assumptions and the approximate nature of using the rotation curve to determine the mass distribution in a disk galaxy can the model results be interpreted correctly to enhance scientific knowledge.
The computational results based on measured rotation curves for many galaxies of various types have shown a more or less exponential decrease of (surface) mass density with galactocentric distance, such as ρ(r) = ρ(0) exp(−r / Rd); i.e., the computed ρ versus r in a log-linear plot appears as a nearly straight line with a negative slope for the most part, when the abruptly varying ends at r = 0 and 1 are trimmed out [18-21]. Hence, the distributions of "gravitational mass" (determined from rotation curves) in disk galaxies qualitatively agree with the measured radial distributions of surface brightness for a large number of disk galaxies [24-26], but the disk radial scale length Rd determined from the rotation-curve-based thin-disk model appears to be larger than that from fitting the brightness data (e.g., 4.5 kpc versus 2.5 kpc for the Milky Way [20]). A straightforward interpretation of such a discrepancy is an increasing mass-to-light ratio with galactocentric distance, namely that the (baryonic) matter becomes less luminous in regions farther from the galactic center. This is consistent with typical edge-on views of disk galaxies, which often reveal a dark edge against a bright central bulge (cf. the image of NGC 891 in Figure 1).
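The log-linear straightness described above amounts to fitting ρ(r) = ρ(0) exp(−r/Rd); a minimal illustration with synthetic data, using the 4.5 kpc scale length quoted for the Milky Way as the synthetic ground truth:

```python
import numpy as np

# Synthetic surface density obeying rho(r) = rho0 * exp(-r / Rd)
Rd_true = 4.5                          # disk radial scale length [kpc]
r = np.linspace(2.0, 18.0, 30)         # trimmed range, away from r = 0 and the edge
rho = 1.0e2 * np.exp(-r / Rd_true)

# A straight-line fit of ln(rho) vs r recovers the scale length from the slope
slope, intercept = np.polyfit(r, np.log(rho), 1)
print(f"fitted Rd = {-1 / slope:.2f} kpc (true {Rd_true} kpc)")
```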
Mass Determined by Mass-to-Light Ratio-"Luminous Mass"
Astronomical measurements rely on the analysis of signals carried by electromagnetic waves, or the "light". Considerable efforts have been made in correlating the light signal with the characteristic physical properties of stars and galaxies. Anything that can be derived from the light signal is taken seriously and used to describe the behavior of celestial objects.
For example, the stellar (gravitational) mass M determined from binary stars has become the key component in establishing the so-called "mass-to-light" ratio M/L, by correlating the luminosity L of stars with their masses. As the apparent brightness of a star is measured with a detector, it can be related to the star's luminosity and distance through the inverse square law. With the luminosity L given, and the (surface) temperature T determined from the thermal radiation spectrum (based on the Stefan-Boltzmann law and Wien's displacement law), the surface area and hence the size (i.e., the radius R) of a star may be calculated from the relationship [1,14,15]

L = 4π R^2 σ T^4,

where σ = 5.67×10^-8 W m^-2 K^-4 is the Stefan-Boltzmann constant for black-body radiation. However, it should be kept in mind that the emission of galaxies may not exactly follow that of a blackbody with emissivity equal to 1, in the presence of dust and gases. With the measured stellar properties (from nearby stars in the solar neighborhood), Hertzsprung and Russell independently developed a stellar classification system by plotting luminosities versus surface temperatures, now called the Hertzsprung-Russell (H-R) diagram or the color-magnitude diagram. This kind of diagram has become "the primary point of contact between observations and the theory of stellar structure and evolution" [1]. Most stars, including our Sun, fall somewhere along the streak from the upper left (the high-luminosity, high-temperature corner) to the lower right (the low-luminosity, low-temperature corner) of the H-R diagram; they are called main-sequence stars. There are also larger and brighter stars located above the main sequence, called giants and supergiants, whereas smaller high-temperature stars located below the main sequence are called white dwarfs, because they appear white in color. All stars along the main sequence fuse hydrogen into helium in their cores, like the Sun. However, main-sequence stars differ in surface temperature and luminosity because the rate of hydrogen fusion depends strongly on the stellar mass; i.e., a star with a more massive outer layer must sustain a higher nuclear fusion rate to maintain gravitational equilibrium. When astronomers measured the masses of main-sequence stars in binary star systems, they found that a star's position along the main sequence is very closely related to its mass [14,15]. For stars with both mass and luminosity determined, the values of their stellar mass-to-light ratio M/L become known, which may be used for estimating the masses of similar stars that either do not belong to binary systems or are too remote to measure directly.
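As a sanity check of the relationship L = 4π R^2 σ T^4, inverting it for the radius with the Sun's nominal luminosity and surface temperature recovers the solar radius:

```python
import math

SIGMA = 5.67e-8                     # Stefan-Boltzmann constant [W m^-2 K^-4]
L_SUN, T = 3.828e26, 5772.0         # solar luminosity [W] and surface temperature [K]

# Invert L = 4*pi*R^2*sigma*T^4 for the stellar radius
R = math.sqrt(L_SUN / (4 * math.pi * SIGMA * T**4))
print(f"R = {R:.3e} m")             # ~7e8 m, the solar radius
```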
Surface photometry-a technique to measure the surface brightness distribution of extended objects, such as galaxies-has shown that galaxies typically have luminosity profiles decreasing approximately as an exponential function of the galactocentric distance r [24-26]. Many authors simply assumed the same exponential function for the surface mass density distribution, implying an assumption of constant M/L [24] for deriving the "luminous mass".
But among all the observable galaxies, individual stars can only be resolved in the few closest ones. A galaxy is a composite of millions upon millions of stars of differing ages and masses over a wide spectrum. To evaluate the overall mass-to-light ratio of a galaxy, its stellar population must be studied with stellar population synthesis modeling, etc. Such a challenging, sophisticated endeavor, with severely limited means of direct observation and measurement, is expected to yield results of questionable certainty, which might only be useful for an order-of-magnitude estimate of the amount of mass. Nonetheless, luminous masses from observed brightness, based on estimated mass-to-light ratios, have been taken so seriously, with overwhelming confidence, that their apparent difference from the gravitational masses (determined from rotation curves) became a primary evidence for dark matter in galaxies [3-8].
Galaxy Rotation Curves Described without Dark Matter
Celestial objects cannot be brought to the Earth and weighed on a balance to measure their masses, but their motion can be observed, with movement velocities measured from the Doppler shifts of their light spectral lines. The motions of astronomical objects are believed to be their responses to gravitational interactions, according to Newtonian dynamics [1,14,15] (as has been tested and confirmed numerous times over hundreds of years).
Newton's laws of motion describe the relationship among force, acceleration and mass. Newton's law of universal gravitation relates the gravitational force to the distribution of masses and relative positions of interacting objects. Once the masses and relative positions of interacting celestial objects are known, the gravitational force on each of them and their motions (in terms of velocity, acceleration) can be determined. Conversely, their masses can be determined from their relative positions and motions, if made available by measurements.
For example, the mathematical form of Newtonian dynamics for a rotating thin-disk galaxy of gravitationally bound objects, e.g., stars, gases, dust, etc., can be expressed (approximately, by assuming axisymmetry) as (3) equal to (4), relating the mass distribution ρ(r) to the measured rotation curve V(r). Thus, the rotation curve V(r) can be determined if the mass distribution ρ(r) is known, by calculating the integral in (4); conversely, ρ(r) in (4) can be determined from the known V(r) in (3), although this usually takes more mathematical effort. As a matter of fact, various methods for solving for ρ(r), or [ρ(r) h], from the known V(r) have been developed by different authors, with pros and cons pointed out and discussed from various perspectives [1,17-22]. Despite the apparent differences in calculation procedures among different authors, the end results should theoretically be the same, because the solution to the equation (3) = (4) is unique due to its linear nature.
With available modern technologies, rotation curves have been measured for many (disk) galaxies [23]. Using a measured rotation curve V(r), the calculated surface mass density [ρ(r) h], e.g., obtained by solving the linear-algebra matrix problem based on (5) and (7), appears to decrease linearly in a log-linear plot, excluding the small regions around the galactic center and the disk edge (where the measured rotation curve terminates), for various galaxies [20,21]. This indicates that the surface mass density in a thin-disk galaxy generally declines exponentially with galactocentric distance, consistent, at least qualitatively, with the surface brightness profiles measured by surface photometry for many galaxies [24-26]. Because the galactic rotation parameter A, defined in (6), is also determined as part of the solution to the linear algebra problem based on (5) and (7), the total mass Mg can be calculated from the predicted value of A as Mg = V0^2 Rg / (G A). For example, the Milky Way total mass is determined as 1.41×10^11 M⊙ from the predicted A = 1.6365 with V0 = 220 km/s and Rg = 20.55 kpc [21], very close to the Milky Way star count of about 100 billion [1,27]. The numerical approach for solving (5) and (7) can also account for the effect of a spherical bulge at the galactic center with slight mathematical manipulation [21], with results illustrated in Figure 3 for the Milky Way (cf. Figure 7 of Ref. [21]). It has been shown that even for a bulge of mass as large as 7.57×10^10 M⊙, the Milky Way total mass would only change to 1.52×10^11 M⊙ (a less than 8% increase [21]). The total mass of Andromeda (NGC 224) can likewise be calculated, as 2.76 × 10^11 M⊙ from A = 1.6450 with V0 = 250 km/s and Rg = 31.25 kpc [21], about twice that of the Milky Way, as commonly anticipated. Noteworthy in Figure 3 is that the portion of the mass density profile (shown with the thick line, roughly the combined mass density profile of disk and bulge) for r in the interval [0.1, 0.9] appears nearly linear in the semi-log plot (when the abruptly varying ends around r = 0 and 1 are trimmed out), indicating an approximately exponential decline of mass density with galactocentric distance.
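The Milky Way figure quoted above can be reproduced directly from Mg = V0^2 Rg / (G A):

```python
G = 6.67e-11                 # gravitational constant [m^3 kg^-1 s^-2]
KPC = 3.086e19               # metres per kiloparsec
M_SUN = 2.0e30               # solar mass [kg]

V0 = 220e3                   # characteristic rotation velocity [m/s]
Rg = 20.55 * KPC             # galactic radius [m]
A = 1.6365                   # galactic rotation parameter from [21]

Mg = V0**2 * Rg / (G * A)    # total galactic mass
print(f"Mg = {Mg / M_SUN:.2e} M_sun")   # ~1.41e11 solar masses, as in the text
```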
Interestingly, the value of A defined in (6) seems to be around 1.70 for various rotation curves, when the characteristic velocity V0 is taken as a representative value of the flat part of the rotation curve [18-21]. For Mestel's disk of constant rotation velocity, with its available analytical solution, the value of A is determined as π/2 ≈ 1.5708 [20]. A proportionality constant fitted to lognormal-density-distribution model results for 38 galaxies, relating Mg to Vmax^2 Rg [28], indicates a value of A equal to 1.57063 if Vmax and V0 happen to coincide, as with Mestel's disk. Comparing with the scalar virial theorem, Mg = <V^2> rg / G, with <V^2> and rg denoting the mean-square speed of the system's stars and the gravitational radius [1], this suggests that <V^2> rg = V0^2 Rg / A, i.e., rg ~ 0.59 Rg if <V^2> = V0^2 is assumed. Given that <V^2> also includes the velocity dispersion in addition to the rotational orbital part, it seems that rg = 0.5 Rg (corresponding to <V^2> ~ 1.18 V0^2) could be a reasonable approximation for a rough estimate of the total mass in a disk galaxy from the measured rotational velocity based on the virial theorem.
Besides the region 0.1 < r < 0.9 of near-exponential decline of the mass distribution, Figure 3 also shows a much sharper increase of mass density toward the galactic center for r < 0.1, which seems to be fairly typical of many galaxies [21]. Rapidly increasing luminosity toward the galactic center has also been commonly seen in surface brightness profiles, in what some authors would categorize as the "bulge" region [26]. Seriously including the bulge effect in the disk model may introduce more variables into the result, unless the bulge mass distribution is known a priori, so that the mass in the disk is uniquely determined for a given rotation curve [21].
However, if the mass distribution is assumed to follow exactly that of the surface luminosity via a constant mass-to-light ratio, the surface mass density [ρ(r) h] becomes known, and the rotation curve V(r) predicted from the equation (3) = (4) might not match the measured one. The mass converted from the mass-to-light ratio is usually found to decrease at a higher rate with galactocentric distance (corresponding to a smaller disk radial scale length) than that determined from the rotation curve [20]. Such an apparent discrepancy has been called the "galactic rotation problem" and is a subject of various scientific interpretations. Most astronomers and astrophysicists take this as "compelling evidence" for (non-baryonic) dark matter in galaxies [3-8]. Only a few authors consider uncertainties in the mass-to-light ratio, as well as the generally questionable accuracy of astronomical measurements, as the root cause [9]. Recently, the notion of non-baryonic dark matter has also been challenged from the perspective of the dynamical evolution of galaxies [29].
First of all, the values of the stellar mass-to-light ratio, as determined from measurements, can vary substantially depending on the nature of the light-emitting objects (as shown by the H-R diagram for stars [14,15]). For galaxies, Tully and Fisher [30] proposed an empirical relation between (intrinsic) luminosity and (maximum) rotation velocity (inferred from the "hydrogen profile width"), which might be used for estimating a galaxy's (total) mass from its measured rotation velocity with a mass-to-light ratio. However, luminosity was subsequently shown not to be a perfect predictor of mass, as the stellar mass-to-light ratio can vary with galaxy type, and the Tully-Fisher relation can have different slopes depending on the luminosity bandpass [31-34].
Galaxies are known to contain matter other than stars. For example, rotation curves measured with the 21 cm wavelength signals emitted by atomic hydrogen (H I) extend far beyond the starlight in galaxies, indicating that a certain amount of H I exists at least as far out as the rotation curve can be measured. Actually, H I may not be considered totally dark; it is luminous at the 21 cm wavelength (in a different photometric bandpass from that of stars), which can be detected for estimating its mass. In fact, the "column" mass density of atomic hydrogen had been estimated using emission in the 21-cm line, in terms of the integral of the brightness temperature over the velocity width of the line, suggesting an atomic hydrogen number density of order 1 cm^-3 in the galactic plane [35,36], though apparently without an independent method of validation. There are also hydrogen molecules (molecular hydrogen) found in molecular clouds and in the interstellar medium (ISM), which appear literally dark when cold, as the majority of them are (e.g., around 10-20 K [37,38]). The amount of "dark" molecular hydrogen can only be estimated by assuming a constant ratio to the luminosity of carbon monoxide, with unknown uncertainties, of course. Furthermore, condensed baryonic matter in the form of dust and debris is expected to have a minimal effect on optical extinction and can easily avoid detection due to its small optical cross section [39]. The presence of those optically undetectable components (with M/L approaching infinity) makes it certain that there must be more baryonic matter than can be represented in terms of a mass-to-light ratio. Indeed, the baryonic Tully-Fisher relation was shown to be optimally improved when the H I mass is multiplied by a factor of about 3 [40]. Therefore, estimating the mass in a galaxy simply from a constant mass-to-light ratio, though convenient, can be seriously flawed. Its usage may provide an order-of-magnitude idea of the amount of mass, but it should not be relied upon for serious comparison and verification in scientific analysis. In view of the technical difficulties in detecting astronomical matter and evaluating celestial mass, the existence of invisible baryonic matter that cannot be accounted for by a simple M/L is naturally expected, following scientific logic.
Some authors would like to use multicomponent models, composed of a bulge, a disk, and a (dark matter) halo extending to a very large virial radius, for estimate of galaxy mass [41,42]. While the central bulge and circular disk are commonly observed, visible in photographic images of galaxies [1], whether there should be a dark matter halo has been a debatable subject [9]. Even to this day, "the shape of dark matter halos remains a mystery" [7]. The reason for having a massive invisible (spherical) dark matter halo came from an argument that it offers a plausible way to stabilize the Galaxy against the bar instability [43], but some galaxies have been shown to have rotation curves that do not satisfy the so-called sphericity condition (based on the fact that mass cannot have negative value), indicating a significantly massive spherical halo cannot be present at large galactocentric distance beyond the central bulge [44,45]. There are also N-body simulations showing that a disk galaxy with flat rotation curve can be stabilized by dense central bulge without the dark matter halo [46]. Actually, there has not been clear scientific justification for having a massive dark matter halo around the disk galaxies.
It was stated in a recent report that Milky Way "mass estimates can vary markedly based on the types of data used, the techniques used, and the assumptions that go into the mass estimate …" [42]. Nevertheless, a value of the Milky Way mass could be derived as ~1.5 × 10^12 M⊙ from an assumed composition of a nucleus, bulge, disk, and a halo of virial radius over 200 kpc [42]. Interestingly, the mass within 21.5 kpc (where the Gaia rotation curve terminates) was estimated at about 2.1 × 10^11 M⊙ [42], quite comparable to the 1.52 × 10^11 M⊙ or 1.41 × 10^11 M⊙ determined numerically, with or without a central bulge, from a measured rotation curve extending to 20.55 kpc [21].
In fact, the mass in a galaxy determined from a measured rotation curve according to Newtonian dynamics seems to be fairly consistent regardless of the source of the rotation curve data, which can vary somewhat. Further calculation shows that the predicted surface mass density in the solar neighborhood, around 8 kpc, should be ~144 M⊙/pc^2 using a pure disk model, or ~74 M⊙/pc^2 when a sizable bulge is included in the computation [21]. As a reference, the current textbook value of the surface mass density in the solar neighborhood is ~49 M⊙/pc^2, based on estimates from observations [1]. In view of the fact that an axisymmetric disk model describes a surface mass density only in the sense of an average over the entire circular ring of radius ~8 kpc, while the local mass density may actually vary significantly along that ring (as shown in photographic images), shouldn't we consider the Newtonian dynamic model to be reasonably accurate? Moreover, a surface mass density of 100 M⊙/pc^2 in the Milky Way model [21] corresponds to ~20 hydrogen atoms or ~10 hydrogen molecules per cm^3 for an assumed disk thickness of 200 pc [9], extremely tenuous by terrestrial standards and well within the reported range of estimated gas density in the interstellar medium [1,47]. If the typical density of cold molecular clouds needed to enable star formation ranges from 10^2 to 10^6 molecules per cm^3 [47], it is not difficult to appreciate the possible magnitude of variations in mass density just within the ring containing the solar neighborhood.
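The unit conversion behind the "~20 hydrogen atoms per cm^3" figure can be verified directly:

```python
M_SUN = 1.989e30            # solar mass [kg]
PC_M = 3.086e16             # metres per parsec
M_H = 1.67e-27              # hydrogen atom mass [kg]

sigma = 100 * M_SUN / PC_M**2      # surface mass density: 100 M_sun/pc^2 -> [kg/m^2]
h = 200 * PC_M                     # assumed disk thickness [m]

n_H = sigma / h / M_H / 1e6        # volume number density [atoms per cm^3]
print(f"n_H = {n_H:.1f} hydrogen atoms per cm^3")   # ~20, as quoted in the text
```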
Thus, the measured galactic rotation curves generally coincide, according to Newtonian dynamics, with a mass density (distributed in a thin disk) that declines exponentially with increasing galactocentric distance. The assumption that mass is distributed mostly in a circular thin disk comes from the optical images of disk-shaped galaxies, based on a belief that luminosity correlates with mass, at least roughly if not exactly. The total mass in a rotating disk galaxy, determined from the measured rotation curve according to Newtonian dynamics, appears to match the star counts reasonably well (at least for the Milky Way). Adding a spherical central bulge with a substantial amount of mass to the thin-disk model may change the total mass by only a few percent, but can have a noticeable effect on the local mass density in the solar neighborhood [21]. A recent examination of a large number of galaxies yielded a universal fitting formula with one fit parameter, the "acceleration scale", relating the observed radial (centripetal) acceleration (as determined from rotation curves) to that from the "baryons" (i.e., that determined from the measured luminosity profile with an assumed constant mass-to-light ratio) [48]. While a popular interpretation of this finding attributed the difference between the observed radial acceleration and that due to the baryons to non-baryonic dark matter [48], it could also be explained much more straightforwardly by a non-constant, variable (e.g., increasing) mass-to-light ratio as a function of galactocentric distance [9], possibly with one fit parameter. By virtue of scientific understanding of distributed matter, it is actually natural to expect the mass-to-light ratio, if meaningful at all, to vary across different regions of a galaxy, considering that certain forms of baryonic matter in the universe can exist with very small optical cross sections and become invisible to detection except by gravitational effects [39]. The total mass associated with the mass distribution corresponding to the measured rotation curve, with or without a central bulge being accounted for, also seems consistent with astronomical observations and well-established Newtonian dynamics. In other words, galaxy rotation curves can be supported by a reasonable amount of mass (consistent with star counts) according to Newtonian dynamics, without invoking mysterious non-baryonic dark matter or modifications of the known laws of dynamics (e.g., MOND).
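For reference, the radial-acceleration relation of [48] is commonly written as g_obs = g_bar/(1 − exp(−√(g_bar/g†))) with a single acceleration scale g† ≈ 1.2 × 10⁻¹⁰ m/s²; assuming that form, the sketch below shows how the ratio g_obs/g_bar grows toward low accelerations, which is exactly the behavior a radius-dependent mass-to-light ratio could mimic.

```python
import math

def g_obs(g_bar, g_dagger=1.2e-10):
    """Radial-acceleration relation of [48] (assumed form), one fit parameter g_dagger.
    For g_bar >> g_dagger it returns ~g_bar; for g_bar << g_dagger, ~sqrt(g_bar*g_dagger)."""
    return g_bar / (1.0 - math.exp(-math.sqrt(g_bar / g_dagger)))

for gb in (1e-8, 1e-10, 1e-12):   # m/s^2, from inner disk to far outskirts
    print(f"g_bar = {gb:.0e}  ->  g_obs = {g_obs(gb):.2e}  (ratio {g_obs(gb)/gb:.1f})")
```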
Concluding Remarks
The scientific method involves observation, formulating hypotheses via induction, experimental testing with quantitative measurements, and refinement or elimination of the hypotheses based on the experimental findings. Numerous astronomical observations have shown spiral galaxies exhibiting the common configuration of a bright circular disk with a relatively small central bulge (cf. Figure 1), suggesting that the mass therein is likely distributed in a similar configuration. Further measurements have indicated that matter in those disk galaxies generally moves in circular orbits in the disk, quantified by the rotation curve (the circular orbital velocity given as a function of galactocentric distance). From well-established Newtonian dynamics, matter moving in a circular orbit is expected to have its centripetal acceleration balanced by the gravitational force of the distributed mass. A logically induced model for theoretical understanding can then consist of an approximately axisymmetric disk, with or without a central bulge, wherein the mass distribution and the measured rotation curve are made consistent with Newtonian dynamics, as determined by equating Equations (3) and (4).
Astronomical measurements are generally challenged for accuracy by the limited means available and obvious technical difficulties. Most quantities cannot be measured directly but must be inferred through layers of assumptions, with raw data presented in graphs that often scatter over orders of magnitude around an anticipated point. Although efforts to search for independent tests of theory should not be discouraged, findings of quantitative discrepancies between observational data and theoretical predictions ought to stimulate serious interrogation of the measurement accuracy as well as of the simplifying assumptions in theoretical calculations.
In the case of galactic rotation, the surface mass density determined from the measured rotation curve exhibits an exponential-like decline with galactocentric distance (for the most part), qualitatively similar to that based on observed luminosity. The overall amount of mass consistent with the rotation curve matches the known star counts. An apparent discrepancy (e.g., regarding the local mass density in the solar neighborhood) appears to be within a factor of two to three, while in general the uncertainties in astronomical measurements have not been clearly quantified and could well exceed an order of magnitude. The mass estimated using an assumed constant mass-to-light ratio is scientifically expected to be flawed, because it cannot adequately account for the optically undetectable baryonic "dark" components that can exist in the universe, such as molecular hydrogen, dust, etc. On the other hand, the rotation curve itself is not error free, and its implication of axisymmetry in the galactic disk can only be an approximation at best. Therefore, the mass distribution predicted from the measured rotation curve via Newtonian dynamics cannot be exact, especially for asymmetric features such as bars and spiral arms, not because of any fundamental shortcoming in Newtonian dynamics but rather due to the limited means for accurate, comprehensive measurements. By examining the historical evidence with scientific logic, the so-called "galactic rotation problem" very likely becomes a consequence of misinterpreted measurement data with poorly examined, underestimated intrinsic uncertainties and a misunderstood theoretical model description, rather than an indication of mysterious non-baryonic dark matter.
The lack of means for accurate evaluation of the amount of matter at different astronomical scales leaves plenty of room and freedom for theoretical speculation; "there is always the possibility that one or all of the estimates could be wrong" [7]. The inability to directly and reliably measure the amount of all baryonic matter in the observable universe with available technology is scientifically expected and should not be regarded as a mystery. For disk galaxies, the measured rotation curves provide the most reliable information for deriving the mass distribution based on Newtonian dynamics [17,23]. When the Newtonian dynamics model suggests more mass than indicated by luminosity with some value of the mass-to-light ratio, there likely exists some "invisible" baryonic matter that is "dark" and undetectable to the available instruments. If the mass obtained by other (less reliable) methods, such as from a mass-to-light ratio, does not match that determined from the rotation curve, it should be naturally understood as a consequence of the inevitable uncertainties of an inaccurate estimate, instead of being taken as primary evidence for mysterious dark matter. The validity of indirect observational evidence for (non-baryonic) dark matter remains generally questionable, given decades-long efforts of repeatedly failed direct detection. Yet numerous research papers in the scientific literature have faithfully presumed the existence of non-baryonic dark matter (for the convenience of obtaining seemingly "self-consistent" results) without serious consideration of alternative possibilities. This appears to be a common "snowball" effect in modern astronomy and cosmology: scientists conform to mainstream ideas so as to obtain research funds and observing time on major telescopes [49,50]. But the progress of science generally relies on truth-seeking people who look critically at contemporary schemes and continuously point out flaws in established, especially unsubstantiated, views based on more reliable evidence. Without inquisitive doubt and skepticism, science cannot thrive and will stagnate.
"Physics"
] |
On Constructing Approximate Convex Hull
—The algorithms for computing convex hulls have been extensively studied in the literature, principally because of their wide range of applications in different areas. This article presents an efficient algorithm to construct an approximate convex hull from a set of n points in the plane in O(n + k) time, where k is the approximation error control parameter. The proposed algorithm is suitable for applications that prefer to reduce computation time in exchange for accuracy, such as animation and interaction in computer graphics, where rapid, real-time rendering is indispensable.
I. INTRODUCTION
THE construction of the planar convex hull is one of the most fundamental problems in computational geometry. The applications of convex hulls span a large number of fields, including pattern recognition, regression, collision detection, area estimation, spectrometry, and topology. For instance, computer animation, the most crucial part of computer gaming, requires fast approximation for real-time response. Consequently, it is evident from the literature that numerous studies focus on fast approximation of different geometric structures in computer graphics [1], [2]. Moreover, the construction of exact and approximate convex hulls is used as a preprocessing or intermediate step to solve many problems in computer graphics [3], [4].
The convex hull of a given finite set P ⊂ R^d of n points, where R^d denotes the d-dimensional Euclidean space, is defined as the smallest convex set that contains all the n points. A set S ⊂ R^d is convex if for any two points a, b ∈ S, the line segment ab is entirely contained in S. Alternatively, the convex hull can be defined as the intersection of all half-spaces (or half-planes in R^2) containing P. The focus of this article is limited to the convex hull in the Euclidean plane R^2.
II. PREVIOUS WORK
Because of the importance of the convex hull, it is natural to study improvements in the running time and storage requirements of convex hull algorithms in different Euclidean spaces. Graham [5] published one of the fundamental convex hull algorithms, widely known as Graham's scan, as early as 1972. It is one of the earliest convex hull algorithms with O(n log n) worst-case running time. Graham's algorithm is asymptotically optimal, since Ω(n log n) is the lower bound for the planar convex hull problem. It can be shown [6] that Ω(n log n) is a lower bound even for the similar but weaker problem of determining the points belonging to the convex hull without necessarily producing them in cyclic order.
However, all of these lower bound arguments assume that the number of hull vertices h is at least a fraction of n. Another algorithm, due to Jarvis [7], surpasses Graham's scan if the number of hull vertices h is substantially smaller than n. This algorithm, with O(nh) running time, is known as Jarvis's march. There is a strong relation between sorting and convex hull algorithms in the plane: several divide-and-conquer convex hull algorithms, including MergeHull and QuickHull, are modeled after sorting algorithms [8], and the first algorithm, Graham's scan [5], uses explicit sorting of the points.
In 1986, Kirkpatrick and Seidel [9] proposed an algorithm that computes the convex hull of a set of n points in the plane in O(n log h) time. Their algorithm is both output-sensitive and worst-case optimal. A simplification of this algorithm [9] was later obtained by Chan [10]. In 1987, Melkman [11] presented a simple and elegant algorithm to construct the convex hull of a simple polyline. It is an on-line algorithm that constructs the convex hull in linear time.
Approximation algorithms for the convex hull are useful for applications, including area estimation of complex shapes, that require rapid solutions, even at the expense of the accuracy of the constructed hull. Based on their output, these algorithms can be divided into three groups: near, inner, and outer approximation algorithms, which compute near, inner, and outer approximations of the exact convex hull of a point set, respectively.
In 1982, Bentley et al. [12] published an approximation algorithm for convex hull construction with O(n + k) running time. Another algorithm, due to Soisalon-Soininen [13], uses a modified approximation scheme of [12] and has the same running time and error bound. Both are inner approximation algorithms. The algorithm proposed in this article is a near approximation algorithm with O(n + k) running time.
III. APPROXIMATION ALGORITHM
Let P ⊂ R^2 be a finite set of n ≥ 3 points in general position and let CH(P) be the (exact) convex hull of P. Kavan, Kolingerova, and Zara [14] proposed an algorithm with O(n + k^2) running time which partitions the plane R^2 into k sectors centered at the origin. Their algorithm requires the origin to be inside the convex hull (it is possible to choose a point p ∈ P and translate all the other points of P accordingly, using additional steps in their algorithm). In contrast, we partition the plane R^2 into k vertical sector pairs with equal central angle α at the origin, and for our algorithm the origin O may be located outside of the convex hull. The sets S⊕_i and S⊖_i represent the vertically opposite sectors that form the i-th vertical sector pair, where i = 0, 1, . . . , k − 1 and the central angle α = π/k. The sets s⊕_i and s⊖_i then denote the points of P lying in the sectors S⊕_i and S⊖_i, respectively. A pair of unit vectors u⊕_i and u⊖_i is associated with the i-th vertical sector pair, and the maximum projection magnitudes in the directions of u⊕_i and u⊖_i are taken over s⊕_i and s⊖_i, with the max function extended to return −∞ on an empty set. The vectors v⊕_i and v⊖_i are those attaining the maximum projection magnitude in the directions of u⊕_i and u⊖_i for the points in the i-th vertical sector pair; their magnitudes can be ±∞ when the i-th vertical sector pair contains fewer than two points. The sets V⊕ and V⊖ contain all the finite vectors in the angular ranges [0, π) and [π, 2π), respectively. Let V = V⊕ ∪ V⊖; V must contain at least three terminal points of vectors in general position to construct the convex hull. The convex hull approximation CH_k(P) over the k vertical sector pairs is then built according to the proposed algorithm. The input of the algorithm, P ⊂ R^2, is a set of n ≥ 3 points in general position. For simplicity, we assume that the origin O ∉ P and k ≥ 2 (this assumption can be satisfied by taking a point arbitrarily close to the origin instead of the origin itself, within the upper bound of error calculated in Section V). We also assume that at least two vertical sector pairs together contain a minimum of three points (where neither of the two is empty). This assumption can be reduced to the standard requirement of a minimum of three input points (i.e., |P| ≥ 3). To see this, let p and q be two points in P such that ∠pOq ≤ π − α, where O is the origin. Two such points exist if no three points of P are collinear (i.e., the points of P are in general position). If Ot is the bisector of ∠pOq, then adding the angle of Ot from the positive x-axis as an offset to every vertical sector pair ensures that all the input points cannot lie in the same vertical sector pair; thus, the assumption is satisfied. Alternatively, if fewer than three absolute values in M are finite, then for each finite M_i ∈ M, assign M_i cos α to M_{i−1} and M_{i+1} where these are infinite (the next paragraph contains details about M). The number of points in V is therefore at least three.
A circular array U is used to contain the k pairs of unit vectors of all the k vertical sector pairs, and another circular array M is used to hold the k pairs of maximum projection magnitudes over all the k vertical sector pairs. Both circular arrays have the same size 2k and use a zero-based indexing scheme. The function atan2 is a variation of arctan taking a point as parameter: it returns the angle in radians between the point and the positive x-axis of the plane, in the range [0, 2π). The function anglex searches sequentially for the index at which the maximum angular distance between two consecutive finite vectors occurs (computed from the projection magnitudes, with the index encoding the angle); if the maximum angle occurs between indices i and j, the anglex function returns j. The final convex hull is constructed using Melkman's [11] algorithm from the set V of points, which are the terminal points of the finite vectors computed in steps 14 and 15. If the first three points of V are collinear, displacing one of these points within the error bound solves the problem.
Since the vertices of the convex hull produced by the proposed algorithm are not necessarily in the input point set P, the algorithm cannot be applied directly to some other problems. Consider another circular array Q of size 2k, used to contain the points generating the inner products in M. Adding the point Q_j instead of M_j U_j to the sequence V in steps 14 and 15 ensures that the vertices of the convex hull are points from P. This modification of the algorithm allows us to solve problems such as the approximate farthest-pair problem, but it increases the upper bound of the error (described in Section V) to r sin(π/k).
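A compact sketch of the sector-extreme idea is given below. It implements the point-preserving variant just described (keeping, for each of the 2k directions, the extreme input point rather than the projected point M_j U_j), assigns each point to the sector containing it and uses the sector's starting angle i·α as the projection direction (an assumed convention, since the exact direction definition is in the omitted equations), and finishes with a standard monotone-chain hull on the at most 2k selected points instead of Melkman's on-line algorithm, giving O(n + k log k) rather than the paper's O(n + k).

```python
import math
import random

def cross(o, a, b):
    """Cross product (a - o) x (b - o); positive means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def monotone_chain(pts):
    """Exact O(m log m) hull of the selected points (stand-in for Melkman's algorithm)."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    hull = []
    for seq in (pts, list(reversed(pts))):    # lower hull, then upper hull
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        hull += chain[:-1]
    return hull

def approx_hull(points, k):
    """Inner approximation: keep, for each of the 2k sector directions i*alpha
    (alpha = pi/k), the input point with the largest projection onto that direction."""
    alpha = math.pi / k
    best = [None] * (2 * k)                   # best[i] = (projection, point)
    for p in points:
        i = int((math.atan2(p[1], p[0]) % (2 * math.pi)) / alpha) % (2 * k)
        u = (math.cos(i * alpha), math.sin(i * alpha))
        proj = p[0] * u[0] + p[1] * u[1]
        if best[i] is None or proj > best[i][0]:
            best[i] = (proj, p)
    selected = [b[1] for b in best if b is not None]
    return monotone_chain(selected)

pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(100000)]
print(len(approx_hull(pts, k=32)), "approximate hull vertices")
```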
V. ERROR ANALYSIS
There are different schemes for measuring the error of an approximation of the convex hull. We measure the error as the distance from the point set of the exact convex hull CH(P). The distance of an arbitrary point x from a set S is defined as dist(x, S) = min_{s∈S} ‖x − s‖, and the approximation error E is the largest such distance. It is sufficient to determine the upper bound of the error E of the approximate convex hull CH_k(P). Let Q be a point lying outside of the convex hull CH_k(P), let O be the origin, and suppose that AB is an edge of the approximate convex hull (as shown in Figure 4); the distance of the point Q from CH_k(P) then follows. The minimum distance d depends directly on k and is denoted by the function d(k). Thus, the upper bound of the approximation error E is r sin(π/2k). As k approaches infinity, CH_k(P) converges to CH(P). The upper error bounds r sin(π/k) and max(r tan(π/k), 2r sin(π/k)) are those calculated in this article and in [14] (i.e., the KKZ algorithm), respectively, where r is the unit in the graph.
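The bounds quoted above are easy to tabulate; the following snippet compares, for unit r, this article's bound r sin(π/(2k)) (and the r sin(π/k) bound of the point-preserving variant) with the KKZ bound max(r tan(π/k), 2r sin(π/k)):

```python
import math

# Upper error bounds for r = 1: this article's basic algorithm, its
# point-preserving variant, and the KKZ algorithm [14].
for k in (4, 8, 16, 64, 256):
    basic   = math.sin(math.pi / (2 * k))
    variant = math.sin(math.pi / k)
    kkz     = max(math.tan(math.pi / k), 2 * math.sin(math.pi / k))
    print(f"k = {k:3d}:  basic {basic:.5f}   variant {variant:.5f}   KKZ {kkz:.5f}")
# All three bounds vanish as k grows, so CH_k(P) converges to CH(P).
```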
VI. CORRECTNESS
Theorem 1: The approximation algorithm produces the convex hull from a set of points in R^2 correctly within the prescribed error bound.
Proof: Since Melkman's algorithm constructs the convex hull correctly for points on a simple polygonal chain, it suffices to prove that the sequence of points V forms a simple polygonal chain. (Melkman [11] published the on-line convex hull algorithm with a formal proof of correctness in 1987.) Suppose that the plane R^2 is partitioned by half-lines h_0, h_1, . . . from the origin (Fig. 6; the figure illustrates the proof of correctness for both the simple and non-simple variations of the polygonal chain, where consecutive half-lines may refer to the same point of V, e.g., h_{i+2} contains only one point in Figure 6). Let ∠h_i h_j denote the angle from h_i to h_j, where h_i and h_j are half-lines from the origin. Since the angle between two consecutive half-lines satisfies ∠h_i h_{i+1} ≥ π/k and O ∉ V (because t > 0 under our assumptions O ∉ P and k ≥ 2), no two line segments of the chain can cross. The equation for ∠O v⊕_i v⊕_{i+1} (derived using the law of sines and basic properties of triangles) also illustrates this fact mathematically for the triangle O v⊕_i v⊕_{i+1} (as shown in Figure 6), taking the solution of minimum magnitude. Finally, if the maximum angle between two consecutive half-lines is ∠h_i h_{i+1} for some i, then the anglex function returns the index i + 1, which ensures the construction of a simple polygonal chain.
VII. CONCLUSION
Geometric algorithms are frequently formulated under the non-degeneracy or general position assumption [15], and the algorithm proposed in this article is no exception. To make the implementation of the algorithm robust, an integrated treatment of the special cases can be applied. There are also general techniques called perturbation schemes [16], [17] that transform the input into general position and allow the algorithm to solve the problem on the perturbed input. Both symbolic perturbation and numerical (approximation) perturbation (where the perturbation error is consistent with the error bound of the algorithm) can be used on the points of P to eliminate degenerate cases.
APPENDIX
This article describes a near approximation algorithm for the convex hull; however, it is possible to extend the concept to inner as well as outer approximation algorithms. An illustration of the inner approximate convex hull algorithm is shown in Figure 7.
Fig. 7. The proposed algorithm to compute an inner approximate convex hull in O(n + k) time from inputs P and k, where P ⊂ R^2 is a set of n points in the plane and k is the number of vertical sector pairs partitioning the plane (the listing ends with V ← V ∪ sort(T) and return MELKMAN-CONVEX-HULL(V)).
"Computer Science",
"Mathematics"
] |
Microstructure Evolution in ZrCx with Different Stoichiometries Irradiated by Four MeV Au Ions
ZrCx ceramics with different stoichiometries were irradiated with a four MeV Au ion beam at a dose of 2 × 10¹⁶ ions/cm² at room temperature, corresponding to ~130 dpa. Grazing incidence X-ray diffraction and transmission electron microscopy were performed to study the radiation damage and microstructure evolution in the ZrCx ceramics. With decreasing C/Zr ratio, the expansion of the ZrCx lattice after irradiation became smaller. Some long dislocation lines formed near the surface, while, in the area with the greatest damage (depth of ~400 nm), large amounts of dislocation loops formed in ZrC, ZrC0.9 and ZrC0.8. With increasing carbon vacancy concentration, the size of the dislocation loops gradually decreased. Few dislocation loops were found in ZrC0.7 after irradiation, and only black-dot defects were found in the area with the greatest damage. For non-stoichiometric ZrCx, with increasing intrinsic vacancies, the number of C interstitials produced by irradiation decreased, and the recombination barrier of C Frenkel pairs was reduced. These factors reduce the total number of C interstitials remaining after cascade cooling, suppressing the formation and growth of dislocation loops, which is significant for enhancing the tolerance of radiation damage.
Introduction
Due to the combination of high temperature, high neutron irradiation dose and extremely corrosive environment in Generation IV nuclear reactor systems, the development of advanced nuclear materials with good radiation resistance, corrosion resistance and high thermostability is urgent [1]. Silicon carbide (SiC) is considered a potential nuclear material due to its extraordinary resistance to irradiation [2]. However, there are some limitations to the use of SiC in Generation IV nuclear reactor systems. SiC can transform from β-SiC to α-SiC under accident conditions, which could result in failure of the material and the release of fission products [3]. In addition, SiC is susceptible to attack by palladium, which can potentially compromise the retention of fission products [4,5]. Zirconium carbide (ZrC) has been considered for tri-structural isotropic (TRISO)-coated fuel particles, fuel cladding or as an inert matrix material, due to its high melting point, high thermal conductivity, low neutron absorption cross-section and excellent resistance to attack by fission products [6-8]. As a member of the family of transition metal carbides, ZrC has an NaCl crystal structure which is stable over a relatively wide compositional range of C/Zr = 0.6 to 1.0 [9]. The properties of ZrCx ceramics are generally sensitive to the C/Zr atom ratio [10,11], and stoichiometry is known to be a critical factor for the properties of ZrCx [12]. Thus, carbon vacancies are likely to be an important factor affecting the irradiation behavior of ZrCx.
Many studies on the microstructure evolution of stoichiometric ZrC under irradiation have been carried out. In situ irradiations of ZrC1.01 were performed by Gan et al. [13] using Kr ions to 10 and 30 displacements per atom (dpa) at room temperature, and to 10 and 70 dpa at 800 °C, observing the formation of a high density of black-dot defects at room temperature and dislocation segments at 800 °C. Yang et al. [14] conducted ion irradiations of ZrC up to 0.7 and 1.5 dpa at 800 °C using 2.6 MeV protons and found the formation of Frank loops. Additionally, Gosset et al. [15] found a high density of defects of a certain size, which evolved into a dislocation network as the dose was increased, using four MeV Au ions at room temperature. Single crystals of ZrC were irradiated with 1.2 MeV Au ions at various doses in the range 2 × 10¹⁴-3 × 10¹⁶ ions/cm² by Pellegrino et al. [16] at room temperature, and dislocation loops were observed for doses above 10¹⁵ ions/cm². Agarwal et al. [17] performed three MeV He⁺ ion irradiations up to 5 × 10²⁰ ions/m² with high-temperature annealing (1000-1600 °C) and found that, underneath blister caps, the microstructure of ZrC evolved into ultra-fine nano-scale grains containing numerous nano-cracks at 1500 °C. Snead et al. [18] conducted fast neutron irradiations of ZrC at fluences of order 10²⁵ neutrons/m² at temperatures ranging from 635 to 1496 °C, using the High Flux Isotope Reactor; it was found that the dislocation loops transitioned from Frank to prismatic loops in ZrC at higher temperatures. Some investigations of the effects of stoichiometry on the irradiation response of ZrCx have been carried out using proton irradiations to 1-3 dpa at 800 °C [19] and 2 dpa at 1125 °C [20]. In our previous study [21], we found that the superstructure modulation of the ordered carbon vacancies of Zr₂C in ZrC0.6 was destroyed under Au ion irradiation. However, the understanding of the effects of ZrC stoichiometry on radiation damage remains limited, and no systematic investigation has been performed on the effect of stoichiometry on radiation damage in ZrCx.
In the present study, the effects of stoichiometry on the damage resistance and microstructure evolution of ZrCx under irradiation were investigated using a four MeV Au ion beam at a fluence of 2 × 10¹⁶ ions/cm² at room temperature. Grazing incidence X-ray diffraction (GIXRD) and transmission electron microscopy (TEM) were performed to study the radiation damage and microstructure evolution. The mechanism by which C vacancies influence irradiation defects is also discussed. A fundamental understanding of the microstructure evolution over a range of stoichiometries will provide a baseline for the application of ZrCx in Generation IV nuclear reactor systems.
Materials
Non-stoichiometric ZrCx ceramics were prepared by two-step reactive hot pressing in our previous study [11]. Commercially available powders of ZrC (purity >99.5 wt.%, particle size 1-5 µm, Changsha Weihui Materials Company, Changsha, China) and ZrH₂ (purity >99.6 wt.%, particle size 2-10 µm, Jinzhou Haixin Metal Materials Company, Jinzhou, China) were chosen as starting powders. Mixed powders with appropriate ratios were ball-milled in ethanol with ZrO₂ milling balls at a speed of 300 rpm for 24 h. A rotary evaporator was then used to dry the slurry, and the dried powders were sieved through a 200-mesh screen. The ZrCx ceramics were sintered by a two-step reactive sintering method, comprising a low-temperature reaction (1300 °C for 30 min) for the decomposition of ZrH₂ and outgassing of H₂, followed by high-temperature densification by hot pressing. The reactions of Equations (1) and (2), the decomposition of ZrH₂ and the subsequent reaction of the resulting Zr with ZrC, occur at relatively low temperature and on completion form the ZrCx phase with a composition parameter x of 0.7 to 1.0. The basic properties of the ZrCx ceramics used in this study are shown in Table 1.
Au Ion Irradiation
Specimens with dimensions of 3 × 4 × 5 mm³ were cut from the ceramics, and their surfaces were polished before irradiation. The irradiations of ZrCx were performed on the 5SDH-2 accelerator (Peking University, Beijing, China) using four MeV gold ions at a fluence of 2 × 10¹⁶ ions/cm² at room temperature, with the beam current density held below 1 µA·cm⁻² in order to avoid significant bulk heating. The chamber was maintained under vacuum at a pressure <10⁻³ Pa during irradiation, and the ion beam direction was set perpendicular to the irradiated surface. The dpa profile of the ceramics along the depth after irradiation was calculated with the Stopping and Range of Ions in Matter software, version 2013 (SRIM-2013), in full cascade mode, using displacement energies of 37 and 16 eV for Zr and C [22], respectively. The input parameters for the SRIM simulation are shown in Table 1.
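For orientation, the conversion from a SRIM damage-event output to dpa is the standard one, dpa = (vacancies per ion per Å) × fluence × 10⁸ Å/cm ÷ atomic density; in the sketch below only the fluence and the ZrC atomic density come from this study, while the 5.0 vac/(ion·Å) peak rate is an illustrative placeholder for the value read from the SRIM output.

```python
# Sketch: standard conversion from a SRIM damage-event rate to dpa.
# Only fluence and n_atoms come from this study; the 5.0 vac/(ion*Angstrom)
# peak rate is an illustrative placeholder for the SRIM output value.
fluence = 2e16        # ions/cm^2 (this study)
n_atoms = 7.69e22     # atoms/cm^3 for stoichiometric ZrC (see Section 3)

def dpa(vac_per_ion_angstrom):
    """dpa = (vacancies per ion per Angstrom) * fluence * 1e8 A/cm / atomic density."""
    return vac_per_ion_angstrom * fluence * 1e8 / n_atoms

print(f"peak dpa ~ {dpa(5.0):.0f}")   # ~130 dpa, the order quoted in the abstract
```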
Characterization
Grazing incidence X-ray diffraction (GIXRD; Empyrean, PANalytical Corp., Almelo, The Netherlands) using Cu Kα radiation was used to analyze the changes in crystal structure caused by irradiation. The diffractometer was operated with a glancing angle of 0.7° and a scanning speed of 1°/min.
Focused ion beam (FIB) lift-out transmission electron microscopy (TEM) samples were prepared using a HELIOS NanoLab 600i (FEI, Hillsboro, OR, USA). First, a platinum layer 0.5 µm thick was deposited at the position of interest by electron-beam-assisted deposition at 30 keV, followed by ion-beam-assisted deposition of platinum to a thickness of 1-2 µm, in order to reduce gallium ion contamination of the top face of the sample. A sheet with a thickness of 0.5 µm was then cut out with a beam current of 20 nA. Once cut to the desired thickness, the sheet was welded to a micro-manipulator and thinned to 300 nm with an ion beam at 30 kV and 100 pA. Next, an ion beam at 10 kV and 50 pA was used to reach a thickness of 150 nm, and finally a thickness of 80 nm was obtained with an ion beam at 5 kV and 10 pA. TEM (FEI Talos F200X, Eindhoven, The Netherlands) was used for more detailed analysis of the microstructure.
Irradiation Damage Simulation
The SRIM estimate of damage in ZrC is shown in Figure 1, which presents the distributions of Au ions, displaced atoms and vacancies in irradiated ZrC. It can be seen that the numbers of displaced Zr atoms and C atoms are much higher than the number of implanted Au ions. These displaced atoms consist of Zr atoms and C atoms that have vacated their lattice positions. It is worth noting that the peaks of displaced Zr atoms and C atoms are shallower than the concentration peak of the Au ions; near the end of the Au ion track, the ion energy is insufficient to generate a large amount of displacement damage. Figure 1b shows the distributions of Zr vacancies and C vacancies along the depth from the surface of ZrC caused by Au ion irradiation. The total number of Zr vacancies is slightly higher than that of C vacancies, and the vacancy concentrations reach their highest values in the depth range of 300-400 nm, where the damage is most severe.
The SRIM estimates of damage production and implanted Au distributions for ZrCx with different stoichiometries are shown in Figure 2. The distribution of Au ions along the depth from the surface of ZrCx follows a Gaussian distribution, and the penetration depth of the Au ions was less than 900 nm. Comparing the depth of radiation damage for ZrCx with different stoichiometries, the damaged depth gradually increases as the C/Zr ratio decreases: about 850 nm in irradiated ZrC, and slightly deeper, reaching 900 nm, in irradiated ZrC0.7. This is mainly because the atomic density of ZrCx gradually decreases as the C/Zr ratio decreases, from 7.69 × 10²² atoms/cm³ for ZrC to 6.48 × 10²² atoms/cm³ for ZrC0.7; the steric hindrance and energy loss of the Au ions are reduced at lower atomic density. Additionally, the peak radiation damage in ZrC was ~135 dpa, while that in ZrC0.7 was slightly lower, at 130 dpa. This is because the simulation of the irradiation process is based on the relative probabilities of Au ions colliding with Zr atoms and C atoms: the displacement energy of the C atoms given in the SRIM input is lower than that of the Zr atoms, and the Zr:C ratio in the ZrC0.7 sample is 10:7. Thus the simulation predicts a slightly lower dpa for ZrC0.7, as shown in Figure 2a. The C interstitial concentration along the depth in ZrCx, irradiated by a beam of 10,000 four MeV Au ions, is shown in Figure 3. It can be seen that, as the intrinsic C vacancies increase, the concentration of C interstitials generated by irradiation decreases. Overall, as the C/Zr ratio decreases, the probability of Au ions colliding with Zr atoms in ZrCx becomes higher, which slightly decreases the peak radiation damage.
Lattice Parameter Changes
The damage in ZrCx ceramics under Au ion irradiation is mainly due to the large numbers of displaced atoms and vacancies generated. Although most of the displaced atoms can return to lattice vacancies, the remaining displaced atoms and vacancies cause corresponding changes in the crystal structure and lattice parameter. Therefore, GIXRD can be used to evaluate the degree of radiation damage in ZrCx by calculating the change in lattice parameter before and after irradiation. Figure 4 shows the GIXRD patterns of ZrCx with different stoichiometries before and after irradiation. No new peaks appear in the GIXRD patterns of ZrCx after irradiation, indicating that no decomposition or amorphization of ZrCx occurred during irradiation.
It can be seen that the peaks of ZrCx are broadened, with weakened intensities, after irradiation. The weakening of the peak intensity indicates that the defects caused by Au ion irradiation damage the crystal structure. The broadening of the diffraction peaks may be caused by micro-strain, point defects and dislocations in the ZrCx lattice produced by irradiation [14,23-26].
In addition, the positions of the diffraction peaks of ZrCx changed after irradiation. In the magnified patterns, very small shifts of the high-angle peaks toward lower angles were observed in the GIXRD patterns, and the offsets become smaller as the C/Zr ratio decreases. The decrease in the 2θ angle indicates an increase in the corresponding d-spacing [27]; hence, the ZrCx lattice expanded after irradiation. The lattice parameters of ZrCx before and after irradiation were calculated from the diffraction patterns and are shown in Table 2. The lattice parameter of ZrC increased from 4.6815 ± 0.0013 Å to 4.6870 ± 0.0007 Å, while that of ZrC0.7 increased only from 4.6776 ± 0.0011 Å to 4.6785 ± 0.0024 Å. With decreasing C/Zr ratio, the lattice expansion of ZrCx after irradiation becomes smaller: the lattice expansion of ZrC is 0.117%, while the lattice swelling of ZrC0.7 is only 0.019%. According to the SRIM simulation results above, a large number of interstitial atoms are generated during Au ion irradiation. Defects formed by these interstitials cause lattice distortion and hence lattice swelling of ZrCx. As the concentration of intrinsic C vacancies in ZrCx increases, many of these vacancies can interact with the irradiation-generated interstitials, so that the interstitials return to intrinsic C vacancy sites, inhibiting the formation of irradiation defects and reducing the lattice distortion, which restrains the lattice expansion.
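The lattice parameters in Table 2 follow from the peak positions through the standard cubic Bragg relation a = λ√(h²+k²+l²)/(2 sin θ); a minimal sketch, assuming the Cu Kα1 wavelength and using the (111) reflection for illustration, shows the small low-angle shift that the measured expansion of ZrC implies:

```python
import math

LAMBDA = 1.5406   # Angstrom, Cu K-alpha1 (the GIXRD source named in Section 2)

def two_theta(a, h, k, l):
    """Predicted 2-theta (deg) of a cubic (hkl) reflection for lattice parameter a."""
    d = a / math.sqrt(h * h + k * k + l * l)          # interplanar spacing
    return 2.0 * math.degrees(math.asin(LAMBDA / (2.0 * d)))

print(f"(111) of unirradiated ZrC (a = 4.6815 A): 2-theta = {two_theta(4.6815, 1, 1, 1):.2f} deg")
print(f"(111) of irradiated  ZrC (a = 4.6870 A): 2-theta = {two_theta(4.6870, 1, 1, 1):.2f} deg")
# The irradiated peak sits at a slightly lower angle: the very small
# low-angle shift described above for the expanded lattice.
```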
Figure 5a shows a cross-section bright field (BF) TEM image of ZrC after irradiation, in which a red arrow highlights the implantation direction and white lines mark the surface and the depth of damage. The depth of the distinct damage layer is 900 nm, which matches the SRIM simulations. Figure 5d shows the selected area electron diffraction (SAED) pattern obtained along the [011] zone axis of the damaged area. The diffraction pattern indicates that the ZrC crystal structure is intact after irradiation and no amorphization has occurred. A two-beam condition was chosen for the TEM analysis because, under this condition, the diffraction vector and the defect image can be easily correlated to determine the defect type. From the BF image of the irradiated ZrC (Figure 5b), taken under a two-beam condition with g = 200 near the [011] zone axis, it can be seen that dislocation loops were generated by irradiation. Dislocation loops with a high density were also found in the BF image taken under the two-beam condition with g = 200 near the [001] zone axis, as shown in Figure 5e, indicating that high-density dislocation loops form in the area with the greatest damage (~400 nm from the surface) after Au ion irradiation; the average size of the dislocation loops is 18.1 ± 5.6 nm.
Further, the microstructure of the defects generated by irradiation in the area with the greatest damage was analyzed. Figure 6a shows a high-resolution (HR) TEM image of the interior of the damaged area obtained along the [011] zone axis in irradiated ZrC. The lattice fringes are distorted, which is attributed to the lattice stress caused by the irradiation damage, indicating that irradiated ZrC not only has a larger lattice constant but also a distorted lattice. Figure 6b shows the corresponding Fourier transform pattern, from which Fourier-filtered (111) lattice images were obtained by deconvolution.
ZrC0.9
In order to investigate the effect of C vacancies on the defects of ZrCx after irradiation, TEM analysis was carried out on cross sections of ZrC0.9, ZrC0.8 and ZrC0.7 along the depth direction. Figure 7a,d show the cross-section BF-TEM images and SAED patterns in the damaged area of ZrC0.9 after irradiation. The depth of the damaged region of ZrC0.9 is close to that of ZrC, about 900 nm, and the crystal structure of ZrC0.9 after irradiation is also intact. However, the microstructure of the defects in the near-surface area differs from that in the interior of the damaged area. In the BF-TEM image taken under a two-beam condition near the surface (Figure 7b), long dislocation lines were observed, and the amount of dislocation was relatively low. Figure 7e shows the defect morphology under the two-beam condition within the interior of the damaged layer, which is similar to the defect morphology in ZrC. More dislocation loops appear in the area with the greatest damage, and the average size of the loops is 11.3 ± 2.8 nm.
Figure 8a,d show the cross-section BF-TEM images and SAED patterns in the damaged area of ZrC0.8 after irradiation. Similar to ZrC and ZrC0.9, the crystal structure of ZrC0.8 is maintained after irradiation, and no amorphization occurs. The microstructures of the defects in the near-surface area and the damaged area are similar to those in ZrC0.9. The defects in the near-surface area are mainly composed of some long dislocation lines (Figure 8b), while a large amount of dislocation loops were observed within the interior of the damaged layer (Figure 8e), with an average size of 11.1 ± 2.6 nm, smaller than in ZrC and ZrC0.9. This is because the C vacancies can absorb the C interstitials generated by the irradiation, effectively suppressing the nucleation and growth of the dislocation loops; as the concentration of C vacancies increases, this suppression effect increases, resulting in a slight decrease in the size of the dislocation loops [23]. In addition, some clear-faceted cavities were seen in Figure 8a. The reason for these cavities is not certain; impurities introduced during the preparation of the FIB lift-out TEM samples may be responsible.
The defect structures in ZrC, ZrC0.9 and ZrC0.8 are thus similar after irradiation: the NaCl-type crystal structure is maintained, some long dislocation lines form near the surface, and, in the area with the greatest damage (depth of ~400 nm), large amounts of dislocation loops form, their size gradually decreasing from 18.1 ± 5.6 nm to 11.1 ± 2.6 nm as the C vacancy concentration increases.
However, for ZrC0.7, the defect structure after irradiation is different from those of ZrC, ZrC0.9 and ZrC0.8. It can be seen from the change of lattice parameter in Section 3.1 that the lattice swelling of ZrC0.7 is only 0.019%, much smaller than those of ZrC, ZrC0.9 and ZrC0.8, indicating that ZrC0.7 has better radiation resistance. According to Figure 9a,d, the crystal structure of ZrC0.7 was intact after irradiation and no amorphization occurred. Few dislocation loops were found in the BF images with g = 200 and g = 11−1 in the damaged region, as shown in Figure 9b,e, and only black-dot defects were found in the area with the greatest damage. This is because there are more intrinsic C vacancies in ZrC0.7: most of the interstitials generated by Au ion irradiation combine with vacancies after cascade cooling, and the presence of intrinsic C vacancies increases the probability of interstitials combining with vacancies, reducing the number of remaining interstitials and inhibiting the formation of dislocation loops. Point defects, such as a large number of vacancies and a small number of interstitials, appear as "black-dot" defects in the TEM images.
In summary, ZrC, ZrC0.9 and ZrC0.8 show similar defect microstructures after irradiation, whereas no obvious dislocation loops were found in ZrC0.7; in its area with the greatest damage, only "black-dot" defects were found. The existence of a large number of intrinsic C vacancies increases the probability that irradiation-generated interstitials recombine with vacancies, reducing the number of remaining interstitials and inhibiting the formation of dislocation loops.
Discussion
Ion beam irradiation is commonly used to simulate radiation effects in materials intended for advanced nuclear energy systems [28]. The ZrCx lattice expanded after Au ion irradiation, as was also found for ZrC under 2.6 MeV proton irradiation [14]. With decreasing C/Zr ratio, the lattice expansion of ZrCx after irradiation decreases; the effects of stoichiometric variation on the swelling of the ZrC lattice during irradiation are reported here. The defect structures in ZrC, ZrC0.9 and ZrC0.8 are similar after irradiation, and the NaCl-type crystal structure of ZrC was maintained. No irradiation-induced voids were observed, which is consistent with the results of Yang et al. [14], Gan et al. [13,29] and Gosset et al. [15].
Generally, ion irradiation affects the microstructure and properties of materials through the interstitials and vacancies it generates. Different kinds of defects first form by clustering of the interstitials and vacancies under certain conditions; these defects then affect the properties of the materials. In order to investigate the mechanism by which C vacancies influence the irradiation damage behavior of ZrCx ceramics, the following questions should be clarified: (1) What are the type, quantity and distribution of the interstitials and vacancies generated by Au ions in irradiated ZrC? (2) How do these interstitials and vacancies move after cascade cooling? How many interstitials and vacancies eventually remain, and in what way do the remaining interstitials and vacancies form defects? (3) How do the intrinsic C vacancies affect these two behaviors? A detailed analysis of these three questions follows.
(1) The type, quantity and distribution of interstitials and vacancies generated by Au ion irradiation can be clearly decoded by SRIM software simulation. Irradiation produces Zr interstitials, C interstitials, Zr vacancies, and C vacancies. These point defects are presented in the form of Frenkel pairs (FP). Figure 1 shows the distribution of the Zr interstitials, C interstitials, Zr vacancies and C vacancies induced by ion irradiation along the irradiation direction. It can be found
Discussion
Ion beam irradiation is commonly used to simulate radiation effects in materials for advanced nuclear energy systems [28]. The ZrCx lattice expanded after Au ion irradiation, as was also found in ZrC subjected to proton irradiation (2.6 MeV) [14]. With the decrease in C/Zr ratio, the lattice expansion of ZrCx after irradiation decreases; the effects of stoichiometric variation on the swelling of the ZrC lattice during irradiation are reported here. The defect structures in ZrC, ZrC0.9 and ZrC0.8 are similar after irradiation. The NaCl-type crystal structure of ZrC was maintained after irradiation. No irradiation-induced voids were observed, which is consistent with the results of Yang et al. [14], Gan et al. [13,29], and Gosset et al. [15].
Generally, ion irradiation affects the microstructure and properties of materials through the interstitials and vacancies it generates. First, different kinds of defects are formed by interstitials and vacancies clustering under certain conditions; these defects in turn affect the properties of the material. To investigate the mechanism by which C vacancies influence the irradiation damage behavior of ZrCx ceramics, the following questions should be clarified: (1) What are the kind, quantity and distribution of the interstitials and vacancies generated by Au ions in irradiated ZrC? (2) How do these interstitials and vacancies move after cascade cooling? How many will eventually be left, and in what way will the remaining interstitials and vacancies form defects? (3) How does the intrinsic C vacancy affect the above two behaviors? A detailed analysis of these three questions follows.
(1) The type, quantity and distribution of the interstitials and vacancies generated by Au ion irradiation can be decoded from SRIM simulations. Irradiation produces Zr interstitials, C interstitials, Zr vacancies, and C vacancies; these point defects are present in the form of Frenkel pairs (FP). Figure 1 shows the distribution of the Zr interstitials, C interstitials, Zr vacancies and C vacancies induced by ion irradiation along the irradiation direction. The number of Zr interstitials and Zr vacancies generated by irradiation is slightly higher than the number of C interstitials and C vacancies, and both reach peaks at a depth of ~350 nm. The distribution of interstitials and vacancies is consistent with the distribution of dpa along the depth.
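To make the link between SRIM vacancy output and a dpa-versus-depth profile concrete, a minimal Python sketch is given below. It is an illustration only, not part of the original study: the depth grid, the vacancy production rates, and the assumed ZrC atomic density are all placeholder values chosen to roughly reproduce the ~130 dpa peak at the dose used here.

```python
import numpy as np

# Illustrative SRIM-style output: depth (Angstrom) and total vacancies
# produced per ion per Angstrom (Zr + C); values are assumptions.
depth_A = np.array([500., 1500., 2500., 3500., 4500., 5500.])
vac_per_ion_A = np.array([1.2, 2.8, 4.1, 5.0, 3.2, 0.6])

fluence = 2e16          # ions/cm^2, the dose used in this work
atoms_per_cm3 = 7.7e22  # approximate atomic density of stoichiometric ZrC

# dpa = vacancies/(ion*Angstrom) * 1e8 (Angstrom/cm) * fluence / density
dpa = vac_per_ion_A * 1e8 * fluence / atoms_per_cm3

for d, v in zip(depth_A, dpa):
    print(f"depth {d / 10:6.0f} nm : {v:6.1f} dpa")
```

With these placeholder rates the peak evaluates to ~130 dpa near 350 nm, matching the order of magnitude quoted in the text.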
(2) How do the large numbers of Zr interstitials, C interstitials, Zr vacancies and C vacancies generated by irradiation move? There are two pathways: either the interstitials and vacancies migrate individually through the lattice, or the interstitials recombine with the vacancies, annihilating Frenkel pairs.
The first pathway is governed by the migration barriers of the interstitials and vacancies. Morgan et al. [22] used first-principles methods to calculate the migration barriers of interstitials and vacancies in ZrC, as shown in Table 3. The C vacancies and Zr vacancies have very high migration barriers of 4.41 eV and 5.44 eV, respectively, so they can hardly migrate at room temperature; the C and Zr vacancies generated by irradiation therefore find it difficult to aggregate and grow. This explains why no voids are found in the cross-section TEM images of ZrC after irradiation. For both C and Zr defects, interstitials have higher diffusivity than vacancies: the migration barrier of the C interstitial is the lowest, at 0.27 eV, and that of the Zr interstitial is 0.45 eV, indicating that the defects formed after irradiation are mainly determined by interstitials. The second pathway, recombination of interstitials with vacancies to annihilate Frenkel pairs, is governed by the recombination barrier of the FP. The recombination barrier of the Zr FP is about 0.32 eV [22], slightly lower than the migration barrier of the Zr interstitial (0.45 eV), indicating that most Zr interstitials tend to recombine with Zr vacancies after irradiation. The recombination barrier of the C FP in ZrC is as high as 1.66 eV, much higher than the migration barrier of the C interstitial (0.27 eV), indicating that most C interstitials find it difficult to recombine with C vacancies after irradiation and instead tend to migrate along certain crystal planes to form defects such as dislocations and dislocation loops. Accordingly, large numbers of dislocations and dislocation loops were observed in the TEM images of ZrC after irradiation, indicating that the dislocation loops formed in ZrC are carbon-cored interstitial-type loops.
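The qualitative claim that these barriers freeze out vacancy motion at room temperature can be checked with a simple Arrhenius estimate. The sketch below is an illustration, not part of the original analysis; the attempt frequency of 1e13 s^-1 is an assumed typical value.

```python
import numpy as np

K_B = 8.617e-5   # Boltzmann constant, eV/K
NU = 1e13        # assumed attempt frequency, s^-1
T = 300.0        # room temperature, K

barriers_eV = {
    "C interstitial": 0.27,
    "Zr interstitial": 0.45,
    "C vacancy": 4.41,
    "Zr vacancy": 5.44,
}

# Arrhenius jump rate: rate = nu * exp(-E_m / (k_B * T))
for name, e_m in barriers_eV.items():
    rate = NU * np.exp(-e_m / (K_B * T))
    print(f"{name:16s} barrier {e_m:4.2f} eV : jump rate ~ {rate:.2e} /s")
# Interstitials hop roughly 1e5 to 1e9 times per second at 300 K, while
# the vacancy rates are effectively zero: mobile interstitials, immobile
# vacancies, hence no void formation.
```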
(3) How does the intrinsic C vacancy affect the above two behaviors? The SRIM simulation results (Figure 3) show that as the intrinsic C vacancies increase, the concentration of C interstitials generated by irradiation decreases. In addition, Morgan et al. [22] found that the intrinsic C vacancies in ZrCx can significantly reduce the recombination barrier of the C FP. Therefore, compared with ZrC, C interstitials are more likely to recombine with C vacancies as the intrinsic C vacancies increase. Moreover, the total C vacancy concentration increases due to the presence of the intrinsic C vacancies in ZrCx, raising the probability of C interstitials recombining with C vacancies. For non-stoichiometric ZrCx, the decrease in the number of C interstitials generated by irradiation, the low recombination barrier of the C FP, and the high concentration of C vacancies together reduce the total number of C interstitials after cascade cooling, suppressing the formation and growth of dislocation loops. Thus, the dislocation loops of ZrC0.9 and ZrC0.8 in the TEM images after irradiation are smaller than those in ZrC. With a further increase in the intrinsic C vacancy concentration in ZrCx, the recombination barrier of the C FP can drop to 0.2 eV [22], lower than the migration barrier of the C interstitial (0.27 eV). Therefore, most C interstitials tend to recombine with C vacancies after irradiation, making it difficult for them to form dislocation loops along specific crystal planes. This explains why no irradiation-induced dislocation loops were observed in ZrC0.7, which has the highest intrinsic C vacancy concentration.
As the intrinsic vacancy concentration increases, the lattice expansion of ZrCx after irradiation decreases and the formation and growth of dislocation loops are suppressed. This enhanced tolerance of radiation damage makes non-stoichiometric ZrCx a promising candidate for TRISO-coated fuel particles, where it could replace, or be used in addition to, the currently used SiC.
Conclusions
In the present work, ZrCx ceramics with different stoichiometries were irradiated with a 4 MeV Au ion beam at room temperature at a dose of 2 × 10^16 ions/cm^2, corresponding to ~130 dpa. The ZrCx lattice expanded after irradiation, and with the decrease of the C/Zr ratio, the lattice expansion of ZrCx after irradiation decreases; the lattice swelling of ZrC0.7 was only 0.019%. The defect structures in ZrC, ZrC0.9 and ZrC0.8 are similar after irradiation. The NaCl-type crystal structure of ZrC was maintained after irradiation. Some long dislocation lines formed near the surface, while in the area with the greatest damage (depth of ~400 nm), large numbers of dislocation loops formed, and as the C vacancy concentration increased, the size of the dislocation loops gradually decreased. Few dislocation loops were found in ZrC0.7 after irradiation, and only black-dot defects were found in the area with the greatest damage. For non-stoichiometric ZrCx, as the intrinsic vacancies increased, the decrease in the number of C interstitials generated by irradiation, the low recombination barrier of C Frenkel pairs and the high concentration of C vacancies reduce the total number of C interstitials after cascade cooling. Therefore, the formation and growth of dislocation loops was suppressed, which is significant for enhancing the tolerance of radiation damage. | 10,270.4 | 2019-11-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Evaluating Learning Performance in Educational Settings Through Computationally Analyzing Biometric Data and Virtual Response Testing
Due to construction costs, the human effects of innovations in architectural design can be expensive to test. Post-occupancy studies provide valuable data about what did and did not work in the past, but they cannot provide direct feedback for new ideas that have not yet been attempted. This presents designers with something of a dilemma: how can we harness the best potential of new technology and design innovation, while avoiding costly and potentially harmful mistakes? The current research uses virtual immersion and biometric data to provide a new form of extremely rigorous human-response testing prior to construction. The researchers' hypothesis was that virtual test runs can help designers to identify potential problems and successes in their work prior to its being physically constructed. The pilot study aims to develop a digital pre-occupancy toolset to understand the impact of different interior design variables of a learning environment (independent variables) on learning performance (dependent variable). This project provides a practical toolset to test the potential human impacts of architectural design innovations. The research responds to a growing call in the field for evidence-based design and for an inexpensive means of evaluating the potential human effects of new designs. Our research will address this challenge by developing a prototype mobile brain-body imaging interface that can be used in conjunction with virtual immersion.
INTRODUCTION
The idea of studying behavioral patterns to investigate human responses to architectural design has been around for many years, but it is only recently that this approach has consolidated into the widely recognized paradigm known as evidence-based design (EBD). This approach to architectural design relies on the careful empirical study of human responses and outcomes to inform design decisions (Cama, 2009; Edelstein & Macagno, 2012; Hamilton & Watkins, 2009). Many previous investigations have provided evidence that EBD practices can successfully improve the overall perceived quality of the architectural environment as well as specific measures of building performance (Ulrich, 2001, 2006; Hamilton & Watkins, 2009; Sailer, 2009; Lawson, 2010). The EBD approach has become particularly influential in healthcare settings, where it has been associated with improvements in the quality of care, greater patient satisfaction, and a decrease in the number of medical errors (Ulrich et al., 2008).
Current technology encourages designers to introduce more innovation into their work. While this innovation often leads to exciting and effective results, it also takes us away from tried-and-true solutions, into relatively uncharted territory. Scholars have demonstrated that the characteristics of the built environment can have a significant effect on human well-being. Specific design components have been strongly correlated with health outcomes (Truong & Ma, 2006; Wheaton et al., 2015), as well as with human efficiency and productivity (Day, 2017). Renewed interest in human-centered design in recent decades has led researchers to document the contributions of architectural design for reducing stress, improving mood, and enhancing visual memory, among other benefits (Ulrich et al., 1991; Sallis et al., 2006). Numerous studies have investigated different architectural styles and design choices and how they affect human experiences (Choo et al., 2017; Vecchiato et al., 2015; Vartanian et al., 2013; Roe et al., 2013; Banaei et al., 2015; Shin et al., 2014; Küller et al., 2009).
Unfortunately, when innovative designs are created, it is difficult to accurately evaluate their full human effects, positive or negative, until after the buildings are constructed and put into use. This presents contemporary designers with a dilemma. How can we harness the best potential of the innovation allowed by today's technology, while avoiding costly and potentially harmful mistakes?
The goal of this research was to examine the effects of building design on human factors (stress, anxiety, visual memory, etc.) by measuring the responses of participants as they interact with different architectural designs using Virtual Reality technology. The researchers' hypothesis was that virtual "test runs" can help designers to identify potential problems and successes in their work prior to its being physically constructed.
The learning environment includes social, cultural, temporal, physical (built and natural), and sometimes virtual aspects (McGregor, 2004; OECD, 2014). Student performance has been shown to have a significant relationship to the quality of the learning environment (Chan & Richardson, 2005). Poor-quality environments can create barriers such as impaired concentration, boredom, and claustrophobia (Mendell & Heath, 2005), and thereby lead to poorer educational outcomes. A high-quality learning environment, in contrast, supports engagement and inquiry, and accounts for a diverse range of developmental needs, learning styles, and abilities (Martin, 2010). Despite the well-established link between learning environments and student outcomes, the specific elements within these environments that affect students have not been rigorously broken down and empirically investigated. This is especially true in relation to the architectural environment. Temple (2007) notes that, "Where connections between the built environment and educational activities are made, the basis for doing so tends to be casual observation and anecdotes rather than firm evidence." Further research is needed to help identify the individual elements of the physical environment that might be important from a design perspective in order to help support student achievement (Kaup et al., 2013; Barret et al., 2015). The work that has been done in this area suggests, at best, a number of general themes regarding the optimal design of learning spaces. Perhaps the most dominant theme is that these spaces need to be flexible, both pedagogically and physically, so that they can be adjusted to reflect the nuances of different knowledge areas and specializations, as well as different learning styles (Butin, 2000). This awareness reflects the growing understanding among teachers of the importance of active and collaborative learning, student-faculty interaction, enriching educational experiences, and opportunities for intellectual creativity. Along with this emerging new pedagogy comes an increased interest in transforming traditional classrooms into new learning environments that can more easily accommodate collaborative and active learning in a technology-rich setting (Brooks et al., 2012).
Other specific factors that have been associated with higher student performance in the existing literature include the incorporation of naturalness (in light, sound, temperature, air quality, and links to nature) (Crandell & Smaldino, 2000; Daisey et al., 2003; Wargocki & Wyon, 2007; Barret et al., 2015); learning environments that create a greater sense of individuality, ownership, and flexibility (Zeisel et al., 2003; Ulrich, 2004; Barret et al., 2015); and environments that provide greater stimulation and sensory impact (Kuller et al., 2009; Fisher et al., 2014; Barret et al., 2015). As can be seen in the dating of these citations, this is a relatively new area of study, and there is a lot of hope in the literature that future investigations can help to further isolate the relevant factors and contribute to learning outcomes by implementing these concepts and techniques.
EXPERIMENTAL DESIGN AND PROCEDURE
The researchers' primary objective was to create a standardized and intuitive toolset that can be used by designers to help evaluate their work. Electroencephalography (EEG) will be used, along with other noninvasive biophysical measurements and self-reporting, to objectively analyze the participants' conscious and subconscious responses to different building designs.
We collected brain activity and relative spatial location from the participants who elected to wear the EEG headset. We also collected voluntarily self-reported, non-identifiable information such as age, gender, race, and ethnicity; whether the participants consume caffeine, alcohol, or recreational drugs; whether they have, or have had in the past, a stroke, concussion, seizures, movement disorders, or other neurological or physical conditions; and the participant's current occupation.
Our intention with the various measurements was twofold: to quantify the human stress response and to assess performance on a number of cognitive tasks. Based on previous studies examining the first of these (Healey and Picard, 2005), we incorporated the three biometrics with the highest correlation to self-reported stress: electrocardiography (ECG), galvanic skin response (GSR), and electroencephalography (EEG). Additionally, an accelerometer and electrooculography (EOG) sensors were attached to or near the EEG cap to track head and eye movement in each environment. All data was collected at 500 Hz and synchronized using the 64-channel ActiCHamp module (Brain Products GmbH, Germany) with Ag/AgCl active electrodes. A total of 63 electrodes were used (57 for EEG arranged according to the international 10-20 placement system, 4 for EOG and 2 for ECG). The impedance of each electrode was kept below 50 kΩ, and often below 20 kΩ, at all times. This was ensured throughout the study with careful placement of the virtual reality headset. Figure 2 shows the electrode and equipment placement on a study participant.
The data was recorded using the BrainVision Recorder software (Brain Products GmbH, Germany) and synchronized to the participants' responses and the virtual reality environment using the Lab Streaming Layer program (Kothe, 2014). Prior to entering each new room iteration or segment of the experiment, participants were prompted to press a specific button, programmed to act as a marker on the recorded biometric data. Screen recordings were also collected throughout the study.
Following an introductory survey and neutral baseline recordings, the study was segmented into two main parts. Experiment "A", shown in blue in Figure 3, consisted of five memory tasks (the Benton Test, Visual Memory Test, Stroop Task, Digit Span Task and a mathematical problem-solving task) followed by self-reported stress and mental fatigue on a 10-point Likert scale. Each task was either consistently timed or had a pre-determined number of questions to ensure homogeneity between room conditions. Instructions were provided prior to beginning the study for participants unfamiliar with the tasks. Participants were asked to complete the same tasks in a real classroom, a VR representation of it, and in nine other classroom iterations. Experiment "B", shown in red in Figure 3, initially asked subjects to navigate along a preselected path through a cityscape featuring __ buildings with unique facades, after which they were asked __ questions regarding what they remembered of the path they took. This was repeated once more so that participants could navigate the cityscape knowing the type of memorization questions that would be asked of them. Finally, they were instructed to design the façade of the "ideal landmark in their favorite city" by modifying characteristics such as height, base geometry and twist. The ten classroom designs were selected variations of significant interior features such as color, height, width, roundness and the incorporation of natural elements. The first perfectly replicated the real classroom that participants experienced at the beginning of the study.
All analysis of the collected signals was conducted using the open-source EEGLAB software (Delorme and Makeig, 2004) and other MATLAB functions related to LSL. The H-∞ filtering program (Kilicarslan, 2016) was used to initially pre-process the EEG and EOG signals and eliminate ocular artifacts. The data was subsequently preprocessed following a modified PREP pipeline (Bigdely-Shamlo, 2015) and band-pass filtered between 0.1 and 100 Hz before processing according to independent component analysis and dipole fitting.
Each additional biometric signal collected was individually band-pass or high-pass filtered. From there, values such as heart rate, heart rate variability, average GSR power and average magnitude of acceleration were calculated for comparison.
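As an illustration of this per-sensor pipeline, the following Python sketch is our reconstruction, not the authors' MATLAB code: the filter order, the ECG sub-band, and the R-peak threshold are assumptions; only the 500 Hz sampling rate and the 0.1-100 Hz band come from the text.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, find_peaks

FS = 500  # sampling rate used in the study, Hz

def bandpass(x, low=0.1, high=100.0, order=4):
    """Zero-phase Butterworth band-pass over the stated 0.1-100 Hz band."""
    sos = butter(order, [low, high], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, x)

def heart_rate_bpm(ecg):
    """Mean heart rate from R-peak intervals (threshold is an assumption)."""
    filtered = bandpass(ecg, low=0.5, high=40.0)
    peaks, _ = find_peaks(filtered,
                          distance=int(0.4 * FS),          # >=0.4 s apart
                          height=2 * np.std(filtered))     # crude R threshold
    rr_s = np.diff(peaks) / FS                             # R-R intervals, s
    return 60.0 / rr_s.mean()

# Synthetic spiky signal standing in for a recorded ECG channel (~72 bpm):
t = np.arange(0, 10, 1 / FS)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 21 + 0.05 * np.random.randn(t.size)
print(f"estimated heart rate: {heart_rate_bpm(ecg):.0f} bpm")
```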
RESULTS
At this stage, we have completed a pilot study with eight individuals and are working on collecting the data for the main study. In this section we briefly explain our first data analysis comparing three learning environments. Our independent variables in the design of these rooms were the height of the classroom, the view to nature, and the room texture. We analyzed the effect of each of these variables on the learning performance of the participants during the scanning session separately. The SAM Test demonstrated that a change in the design element had a significant effect on learning performance, Z = -1.32, p < 0.05. Specifically, participants were more likely to have better learning performance if they had natural light with a view to nature compared to the room without windows, Z = -1.27, p = 0.021. Following the completion of EEG recording, participants were presented with all of the stimuli that they had viewed in the scanner once again, and asked to rate each stimulus on pleasantness (using a five-point scale with anchors "very unpleasant" and "very pleasant") and on learning (using a five-point scale with anchors "not learning-friendly at all" and "very learning-friendly").
A nonparametric partial correlation was computed to determine the relationship between design and learning performance whilst controlling for pleasure. There was a significant positive partial correlation between classrooms with a simpler environment compared to the one with full texture (p = 0.037). However, the classroom with a higher ceiling did not show an impact on theta activity.
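A nonparametric partial correlation of this kind can be reproduced in a few lines of Python. The sketch below is a generic reconstruction, not the authors' analysis script: it rank-transforms the variables and correlates the residuals after regressing out the covariate, which is equivalent to a Spearman partial correlation; the example data are synthetic.

```python
import numpy as np
from scipy import stats

def spearman_partial(x, y, covar):
    """Spearman partial correlation of x and y, controlling for covar."""
    rx, ry, rc = (stats.rankdata(v).astype(float) for v in (x, y, covar))

    def residualize(a, c):
        # residuals of a simple linear regression of a on c
        slope, intercept = np.polyfit(c, a, 1)
        return a - (slope * c + intercept)

    return stats.pearsonr(residualize(rx, rc), residualize(ry, rc))

# Hypothetical per-trial data: design condition score, learning
# performance, and self-reported pleasantness as the covariate.
rng = np.random.default_rng(0)
design = rng.normal(size=40)
pleasure = 0.5 * design + rng.normal(size=40)
learning = 0.6 * design + 0.3 * pleasure + rng.normal(size=40)

r, p = spearman_partial(design, learning, pleasure)
print(f"partial rho = {r:.2f}, p = {p:.3f}")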
IMPACT ON SOCIETY AND ON THE FUTURE OF DESIGN
This study has the potential to provide designers, educators, and psychologists with an important toolset for evaluating the relationship between architectural form and human experience. It can also provide valuable data to help neuroscientists understand cognitive reactions to spatial experience. Sociologists may be interested in using our data to evaluate relationships between demographic variables (ethnicity, nationality, socioeconomic background, etc.) and cognitive responses to architecture. Engineers may be interested in viewing our prototype as inspiration for the design of next-generation, context-aware, mobile brain-body imaging (MoBI) technology.
Last but not least, the construction of a broad, multi-modal amalgamated dataset based on comparative design studies using our system could contribute significantly to the optimization of architectural design and improvements in the human quality of our built environment. The ultimate benefit to the public will be in the form of improved health, creativity, productivity, and a more satisfying architectural experience that can come from better human-centered design (Kalantari, 2017). By including demographic variables in the analysis, designers can become more aware of the effects of the built environment on specific populations, including disabled individuals, women, and other minority groups. This project provides a practical toolset to test the potential human impacts of architectural design innovations. The research responds to a growing call in the field for evidence-based design and for an inexpensive means of evaluating the potential human effects of new designs. Our research will address this challenge by developing a prototype mobile brain-body imaging interface that can be used in conjunction with virtual immersion. This allows participants' conscious and unconscious reactions to new architectural designs to be evaluated before the buildings' physical construction.
CONCLUSION
To test the idea, we have conducted several pilot studies. In these experiments, we evaluated biometric data obtained from participants who "walked" through an architectural space in a Virtual Reality construct. We analyzed the data (which included participant demographic information) to determine if any broad and useful conclusions could be drawn about human responses to different building designs on the basis of a virtual experience. The results of the experiment indicated a significant relationship between different virtual architectural forms and measured stress levels.
Current information technology has allowed many fields to benefit from "big data" analysis in their optimization of resources. However, design fields are somewhat lacking in this area, due to the difficulty of obtaining quantitative data about human responses to design and the tremendous investment required to construct and test new architectural ideas. This study has the potential to provide designers, educators, and psychologists with an important toolset for evaluating the relationship between architectural form and human experience. The construction of a broad, amalgamated data-set based on these evaluations could contribute significantly to the optimization of design and the quality of our built environment. By including demographic variables in the analysis, designers can become more aware of the effects of the built environment on specific populations, including disabled individuals, women, and other minority groups.
Figure 2 :
Figure 2: The experimental setup for data recording: (a) EEG electrode locations, (b) EOG electrode locations, and (c) all sensor equipment as worn by a study participant. Source: authors.
Figure 3 :
Figure 3: Experimental timeline with distinct segments of the study indicated by color. Source: authors.
Figure 4 :
Figure 4: The research participants completed learning tasks in (a) the real classroom and (b) a virtual rendering of the classroom. Source: authors.
Figure 5 :
Figure 5: Complete data pipeline per sensor.
Figure 7 :
Figure 7: Data obtained from the pilot study participant for five conditions (indicated in columns): baseline eyes open, baseline eyes closed, real classroom, virtual classroom, and virtual classroom with added windows. The figure rows show the initial 5 s of data from selected EEG, EOG, EKG and head-acceleration channels.
Figure 8 :
Figure 8: Data obtained from the pilot study participant for five conditions (indicated in columns): baseline eyes open, baseline eyes closed, real classroom, virtual classroom, and virtual classroom with added windows. The figure rows show (top) total alpha (8-12 Hz) and theta (4-8 Hz) power in all EEG channels, and (bottom) raw and tonic GSR (skin conductivity) signals. | 3,775.8 | 2018-11-01T00:00:00.000 | [
"Education",
"Computer Science",
"Engineering"
] |
A machine learning tool for future prediction of the heat release capacity of in-situ flame retardant hybrid Mg(OH)2-epoxy nanocomposites
In this work
Introduction
Epoxy resins are widely used in several industrial applications, due to their peculiar properties in terms of thermal stability and chemical resistance [1,2]. Nevertheless, epoxy resins need the addition of suitable flame retardants to fulfill specific standards in the aerospace and automotive industry. The flame retardancy of epoxy resin can be enhanced by the chemical modification of either the polymer matrix or the curing agent, introducing functional moieties [3-5]. In this context, it is well known that metal hydroxides represent sustainable flame retardants able to work in the condensed phase. Metal hydroxides lower the heat transfer by endothermic decomposition and convert into refractive metal oxides acting as a thermal shield and oxygen barrier during combustion [6,7]. Among the available metal hydroxides, magnesium hydroxide (Mg(OH)2) has shown good potential for improving the fire behavior of epoxy resins [8,9]. To be effective, large amounts (7-20 wt%) of Mg(OH)2 are usually required [10-12], which represents a significant downside in terms of costs and mechanical properties of the final products. Recently, an in-situ sol-gel synthesis route was designed to obtain Mg(OH)2 nanocrystals (at about 5 wt% loading) in a bisphenol A diglycidyl ether (DGEBA)-based epoxy resin via an eco-friendly, solvent-free, one-pot process [13]. This methodology allows for the generation of fine and well-distributed nanocrystals, embedded in an organic-inorganic (hybrid) network, able to increase the thermal stability of the epoxy, leading to the formation of an abundant and quite stable char in an inert atmosphere [13,14]. By this strategy, in-situ hybrid epoxy nanocomposites showing good fire behavior can be prepared, even at low nanofiller loadings.
The development of new functional polymer-based materials with excellent fire performance often demands the investigation of several compositions by means of destructive tests. The possibility to predict the results of small-scale combustion tests (e.g., pyrolysis combustion flow calorimetry, PCFC) based on several physical and chemical properties of polymers would help reduce the research effort, especially during the design of a new material. In the literature, many research groups have used publicly available data about polymer properties to predict or optimize new polymer properties and fire parameters. For example, the molar group contribution method can predict the flammability parameters of many polymers [15,16]. However, this method is error-prone, due to the arbitrary assignment of chemical groups, and shows strong limitations in the case of new materials, for which molar contributions are not available [17,18]. Machine learning allows for overcoming the limits linked to arbitrary assignments or human failures, as it uses algorithms or computational methods to learn information directly from data without relying on a predetermined equation as a model [19-21]. Artificial neural network models involve the application of different types of algorithms to input data [22]. Among them, locally weighted regression is a non-parametric regression technique used to fit a surface or curve to a set of data points. Global regression models assume a single relationship between the response variable and the predictors, while local regression models adapt to the local behavior of the data, allowing for more flexible and localized modeling. A local regression model fits a regression function to a subset of the data for each target point in the dataset. The subset consists of the data points within the bandwidth of the target point. The weights assigned to each data point in this subset depend on their proximity to the target point, determined by the kernel function [23]. Overall, the local regression model gives more importance to nearby data points while downweighting or ignoring distant points: it therefore adapts to the local patterns in the data. Thanks to that, the model can capture even complex relationships that may vary across different regions of the predictor space. Local regression is extremely useful when dealing with nonlinear relationships or when the underlying relationship between the predictors and the response variable is expected to vary across different regions of the data; it provides a flexible and data-driven approach to modeling that can capture local behaviors and improve prediction accuracy [24]. Machine learning enables the prediction of the fire performance of new products and allows for assessing their suitability for a specific application [25,26]. Despite the potential of machine learning, overfitting can limit the accuracy of neural network models in predicting flammability parameters. Overfitting usually occurs when a model is extremely reliable on training data but fails when applied to never-seen (i.e., not used for training the model) data. There are several approaches to control overfitting and improve the accuracy of neural network models, for example increasing the size of the training dataset, applying early stopping, employing cross-validation, or simplifying the model architecture [27,28]. Parandekar et al. predicted the heat release capacity (HRC), the amount of char residue, and the total heat release (THR) of a set of different polymers using genetic function algorithms [29]. In particular, they correlated the flammability of the polymer to its chemical structure by a quantitative structure-property relationships methodology, finding a good correlation between the polymer repeat unit structure and the flammability parameters [29]. Lately, Asante-Okyere et al. proposed a generalized regression neural network and a supervised learning algorithm-based feed-forward back-propagation neural network to predict PCFC results, such as HRC, THR, the peak of the heat release rate, the heat release time, and the related temperatures of polymethyl methacrylate, using such parameters as heating rate and sample mass [30]. Both neural network models gave high correlation coefficients, despite some differences in performance during training or testing according to the target value. With a similar approach and starting from the same input data, Mensah et al. predicted PCFC results of extruded polystyrene using feed-forward back-propagation neural networks and the group method of data handling [31]. The comparison of the HRC and heat of combustion predictions revealed an excellent accuracy and repeatability of the latter method, with a mean deviation of about 4. Recently, Pomázi and Toldy developed an artificial neural network-based system to predict the THR, the peak of the heat release, the time to ignition, and the char residue from structural properties and results from flammability tests (i.e., limiting oxygen index and UL-94 vertical burning test) [27]. The average absolute deviation between predicted and validated data was below 10% in most cases. They also carried out a sensitivity analysis of the output parameters in order to rank the input parameters based on their impact on the output parameters [27].
In this work, for the first time, the fire behavior and glass transition temperature of an epoxy nanocomposite containing Mg(OH)2 nanocrystals, in-situ generated by sol-gel under mild operative conditions, without any use of surfactants and following a one-pot procedure, were investigated. The fire performance was evaluated by means of forced-combustion tests and pyrolysis combustion flow calorimetry measurements to collect the main parameters (e.g., HRC, THR, char residue) and shed light on the flame retardant action of Mg(OH)2 nanocrystals during the combustion of the nanocomposite. Also, an artificial neural network-based system was developed to predict the HRC value of the prepared in-situ hybrid Mg(OH)2-epoxy nanocomposite from the physicochemical properties of polymers and their PCFC parameters. A machine learning approach was used to estimate the HRC of a novel material, and a sensitivity analysis was performed to prove the accuracy of the implemented algorithm. Finally, the cross-validation method was employed during the hyperparameter tuning phase (with the "X-Partitioner" node) to avoid overfitting and find the best hyperparameters for each subset of the input data.
Synthesis of Epoxy/Mg(OH)2 nanocomposite
The in-situ synthesis of the hybrid Mg(OH)2-epoxy nanocomposite was performed according to the procedure reported elsewhere [13], whose main steps are briefly displayed in Fig. 1: (i) 20 g of DGEBA and 5.9 g of magnesium ethoxide were mixed overnight at 80 °C; (ii) g of DGEBA and 3.5 g of APTS were stirred at 80 °C for 2 h, then this mixture was added, under stirring, to the first one at 80 °C for 30 min; (iii) ethanol (1.12 mL), distilled water (3.37 mL), and TEOS (TEOS/APTES molar ratio as low as 1.25) were added to the main batch at 80 °C under reflux for 90 min. Finally, the reaction vessel was opened and left at 80 °C for 30 min to completely remove water and ethanol. At room temperature, ~10.4 g of hardener was added to the mixture and mixed for 5 min. Before pouring into a Teflon mold, the resulting mixtures underwent degassing under vacuum. Samples were cured at 30 °C for 24 h, then post-cured at 80 °C for 4 h. The loading of magnesium hydroxide was evaluated from the stoichiometry to be around 5.2 wt%.
The in-situ hybrid Mg(OH) 2 -epoxy nanocomposite will be coded as EPO-Mg throughout the whole text, while the acronym EPO will be used for the unmodified epoxy resin.
Characterization
Differential scanning calorimetry (DSC) measurements were performed under a N2 flow (50 mL/min) using a DSC 214 Polyma instrument (NETZSCH-Gerätebau GmbH, Selb, Germany). The measurements were carried out according to the following cycles: 1st heating from 20 up to 300 °C at 10 °C/min, then cooling down to 20 °C at -10 °C/min, and finally 2nd heating from 20 up to 300 °C at 10 °C/min. The glass transition temperature was evaluated on the 2nd heating curve.
The fire behavior of the prepared samples was deeply investigated through forced-combustion tests. In particular, a cone calorimeter (CC), a Noselab instrument (Nova Milanese, Italy), working with a 35 kW/m^2 irradiative heat flux and placing the samples (50 × 50 × 2 mm^3) in horizontal configuration was used, following the ISO 5660 standard. Several parameters (namely, time to ignition (TTI, s), peak of heat release rate (pHRR, kW/m^2), total heat release (THR, MJ/m^2), total smoke release (TSR, m^2/m^2), specific extinction area (SEA, m^2/kg), and the residues) were measured. With respect to pyrolysis combustion flow calorimetry, CC involves the use of a spark to trigger the flaming combustion of the volatiles generated by heat radiation, as required by the ISO 5660 standard. A pyrolysis combustion flow calorimeter (PCFC, Fire Testing Technology Instrument, London, UK), following the ASTM D7309 standard, was employed to assess pHRR, THR, and HRC. In the pyrolysis zone, samples of about 7 mg were heated from 150 to 750 °C at 1 K/s. Three tests were carried out on each material system and the results averaged.
Machine learning predictive analysis
KNIME version 4.0.2 (available free of charge at https://www.knime.com/download-previous-versions) was used as open-source data science software. The estimation of the HRC of the in-situ hybrid Mg(OH)2-epoxy nanocomposite (EPO-Mg) was performed by a supervised machine learning approach.
Glass transition temperature of Epoxy/Mg(OH)2 nanocomposite
The procedure leading to the formation of twinned Mg(OH)2-based nanocrystals with a pseudo-hexagonal symmetry in the hybrid epoxy-silane matrix is summarized in Fig. 1. Fig. S1 shows that the in-situ generation of Mg(OH)2 nanocrystals leads to a glass transition temperature of 82 °C, a lower value (~14%) compared to that of the unmodified epoxy resin. This finding may be due to the presence of crystalline domains (Fig. 1), which disturb the polymer chains during the establishment of inter-chain interactions in the curing process [35,36]. The higher glass transition of EPO with respect to EPO-Mg may also be ascribed to more dangling segments in the structure of the hybrid nanocomposite, resulting in a slight network-loosening effect [36,37].
Fire behavior of the in-situ hybrid Mg(OH)2-epoxy nanocomposites
HRR curves for the epoxy system and its hybrid counterpart, obtained by cone calorimetry using a 35 kW/m^2 irradiative heat flux, together with fire and smoke parameters, are reported in Fig. 2a, Table 1 and Table 2, respectively. First of all, it is worth noting that the nanofiller is responsible for an anticipation of the TTI by about 15 s (Table 1). This finding is attributed to two main effects, i.e.: (i) the low amount (~5 wt%) of magnesium hydroxide nanocrystals, which is not enough to exert an effective thermal protection during the first combustion stages, and (ii) the basic character of Mg(OH)2, which speeds up the kinetics of the pyrolysis reactions occurring under the exposure to the irradiative heat flux [7,11,38,39]. Besides, the presence of the nanofiller significantly improves some of the thermal and smoke parameters (Table 2), notwithstanding that its concentration is very low (i.e., 5 wt%, see Section 2.2). The residue of the hybrid system at the end of the cone tests is almost doubled as compared to the pristine epoxy (Table 1 and Fig. 3), hence indicating that the ceramization effect induced by Mg(OH)2 is very effective in protecting the underlying polymer network from the irradiative heat flux, slowing down the heat and mass transfer from the surroundings to the sample and vice versa. In particular, compared to pristine EPO, THR, pHRR, and HRR decrease by about 10, 37, and 29%, respectively, when the nanofiller is in-situ formed in the epoxy system (Fig. 2a and Table 1). Similarly, TSR and SEA values show a decrease of about 22 and 5% (Table 2), respectively, hence indicating that the nanofiller is quite effective as a smoke suppressant, despite its very low loading in the epoxy system. Finally, the CO/CO2 ratio remarkably decreases (Table 2) in the presence of Mg(OH)2, hence indicating a certain efficiency of combustion that does not inhibit the conversion of CO to CO2 [11-13]. As a consequence, it can be argued that the flame retardant action of the nanofiller is more pronounced in the condensed phase, rather than in the gas phase. The flame retardancy index (FRI) is a dimensionless parameter that has been extensively used to compare the fire performance of flame retardant polymer-based systems with that of their unmodified counterparts [40-42]. Thanks to the FRI, it is possible to rank the material and evaluate its fire response. As reported in Table 1, the FRI value (1.2) of EPO-Mg is very low and only allows for a classification of the material as "poor". However, this result agrees with what was observed for in-situ silica-epoxy systems [3,43], for which a condensed phase action was also the main flame retardant mechanism. Based on that, the chemical composition of EPO-Mg could be modified by the addition of phosphorus (P)-based flame retardants and other synergists. This may significantly improve the fire response of EPO-Mg, resulting in an increase of the FRI value. Besides, the incorporation of P-based compounds and nitrogen-containing species (e.g., melamine) may also positively affect the glass transition temperature of the nanocomposite through the establishment of proper interactions in the polymer network, as already observed in previous studies [44,45].
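For readers reproducing the FRI calculation, the sketch below uses the definition commonly given in the literature cited above, FRI = [THR x (pHRR/TTI)] of the neat polymer divided by the same product for the flame-retarded system. The cone values are placeholders (chosen only to be consistent with the percentage changes quoted in the text), since Table 1 is not reproduced here.

```python
def flame_retardancy_index(neat, fr):
    """FRI = [THR * (pHRR / TTI)]_neat / [THR * (pHRR / TTI)]_FR."""
    num = neat["THR"] * neat["pHRR"] / neat["TTI"]
    den = fr["THR"] * fr["pHRR"] / fr["TTI"]
    return num / den

# Placeholder cone-calorimetry values (THR in MJ/m^2, pHRR in kW/m^2,
# TTI in s); the real numbers are in Table 1 of the paper.
epo = {"THR": 100.0, "pHRR": 1000.0, "TTI": 60.0}
epo_mg = {"THR": 90.0, "pHRR": 630.0, "TTI": 45.0}   # -10% THR, -37% pHRR, -15 s TTI

print(f"FRI = {flame_retardancy_index(epo, epo_mg):.2f}")
```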
Table 3 and Fig. 2b show the results of pyrolysis combustion flow calorimetry (PCFC) tests. The PCFC apparatus allows for the evaluation of different parameters concerning the fire behavior of composites [46,47]. With respect to the cone calorimeter, the PCFC does not simulate a real fire scenario, as the combustion of the investigated samples is not performed in the presence of air and does not involve the use of a spark ignition for the flame and a heat radiation source. PCFC is a non-flaming test, in which pyrolysis and combustion of the volatiles take place in two separate chambers [46]. Indeed, PCFC measurements involve a controlled pyrolysis of the samples followed by the oxidation of the volatile products. This configuration makes the PCFC very helpful to better study the gas phase action of flame retardants [46,48,49]. The results provided by PCFC measurements additionally confirm the flame retardant action exerted by magnesium hydroxide in the condensed phase during combustion. Similar to other metal hydroxides, magnesium hydroxide can reduce the heat transfer to the polymer bulk by a heat-sink effect, as its endothermic decomposition produces a refractive oxide and water. The oxide acts as a thermal shield, in cooperation with the silica nanostructures, while the water dilutes the concentration of the flammable gases in the gas phase [3,50]. The thermal shield lowers not only the heat exchange but also the transfer of flammable gases and oxygen at the boundary phase during combustion. The combination of all these effects results in the remarkable decrease of the THR, HRC, and pHRR values of the nanocomposite with respect to pristine EPO (Table 3 and Fig. 2b).
The protection of the ceramic shield leads to a notable increase in the residue at the end of PCFC (Table 3), which agrees with the cone calorimetry results. Some additional aspects are important to mention, which further explain the lower values of THR, HRC, and pHRR measured by PCFC tests. Fig. 2b shows that the HRR curve of EPO is narrow, owing to a large amount of heat released in a short time. On the other side, the HRR curve of EPO-Mg appears flattened and broad, due to the strong condensed phase action of Mg(OH)2 nanocrystals from the first decomposition steps, leading to a slow heat release over a longer period of time [51].
Prediction of the heat release capacity of in-situ hybrid Mg(OH)2-epoxy nanocomposites by machine learning
Machine learning techniques allow for the formulation of complex algorithms to describe phenomena that cannot be modeled through a traditional approach. Human beings learn rules of general validity from nature, and this learning usually occurs through an iterative process that slowly increases our knowledge. Taking inspiration from this process, machine learning is a learning algorithm: it provides specific operations to the machine, which the machine then uses to handle future experiences. The machine learns from data sets, which are fed into a generic algorithm programmed to perform a particular function. Artificial neural networks (ANNs) consist of a set of algorithms for classification and regression, which have been widely used to solve different problems [52,53]. Like the biological brain, neural networks are based on large groups of neurons connected by axons. The artificial neurons form an interconnected network of individual neural units. The connection between units can be reinforced or inhibited through a combination of the input values and an activation function, which returns the output of the neuron [19,54].
ANNs are essentially nonlinear mathematical functions able to transform a set of independent variables x = (x_1, ..., x_n), i.e., the network inputs, into dependent variables y = (y_1, ..., y_k), i.e., the network outputs. The obtained results depend on a set of values w = (w_1, ..., w_n), which are called weights. Eq. (1) represents the relationship between the output and the inputs of a single neuron:

$$y = f\left(\sum_{i=1}^{n} w_i x_i + b\right) \qquad (1)$$

where f is the activation function and b is the bias. During the training phase, the neuron stores the learned information in the weights and biases, to be used in congruent situations. The activation function, f, is generally a threshold function that activates only neurons showing signals compatible with the threshold; the signal is then transferred to the next neuron or neurons. Sigmoid, nonlinear stepped, or logistic functions are some examples of activation functions [55]. An iterative procedure adjusts the weights during the training phase. This procedure is computationally demanding and requires a certain number of input-target pairs, which are called training sets. Indeed, during training, the values of the weights that minimize a specific error function are sought [56]. An ANN is usually composed of three parts, containing distinct numbers of neurons: (i) an input layer, (ii) one or more hidden layers, and (iii) an output layer. Through the neurons belonging to the internal layers, the input signals move from the input layer to the output layer, as displayed in Fig. 4.
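A minimal numerical rendering of Eq. (1) helps fix the notation. The sketch below is illustrative only: it implements a single sigmoid neuron and a small layer in numpy, with arbitrary example weights.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b, f=sigmoid):
    """Eq. (1): y = f(sum_i w_i * x_i + b)."""
    return f(np.dot(w, x) + b)

def layer(x, W, b, f=sigmoid):
    """A layer is Eq. (1) applied row-wise: y = f(W x + b)."""
    return f(W @ x + b)

x = np.array([0.2, -1.0, 0.5])            # network inputs
w = np.array([0.4, 0.1, -0.7])            # weights learned in training
print(neuron(x, w, b=0.05))               # single-neuron output

W = np.array([[0.4, 0.1, -0.7],
              [-0.2, 0.3, 0.8]])
print(layer(x, W, b=np.array([0.05, -0.1])))   # two-neuron hidden layer
```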
The heat release capacity (HRC) depends on the fire behaviour and the thermal stability of a specific material, revealing its propensity to degrade under combustion. HRC can be used to classify the flammability of materials and can be evaluated from PCFC results (see Section 3.1) or from additive molar group contributions [15]. As an alternative to these methodologies, some recent research works in the literature have demonstrated that artificial neural network (ANN) models represent a valuable approach for the prediction of the HRC of several materials [27,57]. ANN models can be very useful in case the PCFC apparatus is not available or low processing times (i.e., a reduced number of measurements) are required to investigate the combustion properties of a specific material. Herein, by the use of KNIME, an open-source software for creating data science, and a supervised machine learning approach [58], we developed an artificial neural network-based model to predict the HRC value of EPO-Mg from its PCFC parameters and chemical-physical properties (input data set, Table S1). The input parameters for the model (columns of Table S1) consist of the chemical-physical properties, HRC, THR, and char yield of several polymers, including EPO and EPO-Mg, whose experimental values were found in the literature [26]. A statistical analysis was performed on the input dataset to estimate the mean and variance of each parameter. The results of this analysis, collected in Table S2, clearly show the heterogeneity among the selected classes of polymers. The physico-chemical properties of EPO and EPO-Mg were assumed to be the same and equal to those of an epoxy resin cured with an aliphatic amine hardener (i.e., EPA in Ref. [26]). For the hybrid Mg(OH)2-epoxy nanocomposite (EPO-Mg) and the pristine resin (EPO), the values of Tg, HRC, THR, and char yield were experimentally measured in this research work by DSC and PCFC tests, respectively. As shown in Section 3.1, the HRC value of EPO-Mg gathered from PCFC measurements was found to be around 281 J/g-K and will be used, together with the HRC values of the other polymers (Table S1), to evaluate the reliability of the developed ANN model through a sensitivity analysis. The validation of the artificial neural network-based system relies on the verification of the predictive capacity of the model exploiting never-used input data. To avoid overfitting, the number of input parameters was chosen considering the number of available observations.
We split the input data set into two subsets: the training set and the test set. The training set (Table S3) was employed for training the model, while the test set (Table S4) was used for evaluating the model's forecasting capability. A measure of the model's performance was carried out by assessing the accuracy on the test data. The input data set (Table S1) was randomly split as follows: 70% training data and 30% test data. The connection weights are adjusted, based on the error committed in producing the output, using the training data set, which contains 34 observations. On the other hand, the test data set (Table S4), containing 15 observations, represents instances available for testing and can be used to validate the algorithm and verify the network's forecasting capacity when it receives input data never seen before (HRC column in Table S3), which will be referred to as "new data" in the following.
The simulation model developed in this research work is a fully connected feed-forward artificial neural network based on a multilayer perceptron [59,60]. Fig. 5 shows the architecture of the model based on neural networks and the structure of each layer. The structure consists of an input layer with twelve variables (molecular weight, M_w; Van der Waals volume, V_van-der-Waals; molar volume, V_m; density, ρ; solubility parameter, δ_p; molar cohesive energy, e_coh; glass transition temperature, T_g; molar heat capacity, C_n; index of refraction and entanglement, N; entanglement molecular weight, M_w; THR and char yield from PCFC measurements), as shown in Table S1, one hidden layer with six neurons, and an output layer with a single neuron that returns the predicted heat release capacity of the material.
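A sketch of how this 12-6-1 architecture could be set up with scikit-learn is given below. It is our approximation of the KNIME workflow, not the authors' exact nodes: the optimizer settings are placeholders, and random data stands in for the Table S1 descriptor matrix.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: one row per polymer with the twelve descriptors listed above;
# y: measured HRC (J/g-K). Random data stands in for Table S1 here.
rng = np.random.default_rng(1)
X = rng.normal(size=(34, 12))
y = rng.normal(loc=400, scale=150, size=34)

model = make_pipeline(
    StandardScaler(),                       # Z-score, as in the paper
    MLPRegressor(hidden_layer_sizes=(6,),   # one hidden layer, six neurons
                 activation="logistic",     # sigmoid-type activation
                 max_iter=5000,
                 random_state=0),
)
model.fit(X, y)
print(model.predict(X[:1]))                 # predicted HRC for one sample
```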
To predict the HRC value of EPO-Mg, our artificial neural model performs a locally weighted regression with the use of a trained multilayer perceptron classifier, which performs the regression, and a k-nearest neighbors algorithm (K-NN) [61]. We employed this supervised machine learning algorithm to assign weights to the training data according to their distance from the new data. The weight assigned to each training data point is inversely proportional to its distance from the new data point (Fig. 6). Thus, when the classifier performs the regression, it gives more consideration to "local" data (i.e., the training data in the K regions, which are the data nearest to the new ones), leading to more accurate results compared with a standard regression approach.
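The locally weighted step can be approximated with scikit-learn's distance-weighted k-nearest-neighbors regressor, as sketched below. This is again an approximation of the KNIME workflow rather than its exact implementation; K = 9 anticipates the value found in the tuning phase described later, and the data are synthetic stand-ins for the 34-row training and 15-row test sets.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(2)
X_train, y_train = rng.normal(size=(34, 8)), rng.normal(size=34)
X_test, y_test = rng.normal(size=(15, 8)), rng.normal(size=15)

# weights="distance" makes nearby training points dominate the
# prediction, mirroring the locally weighted regression described above.
knn = KNeighborsRegressor(n_neighbors=9, weights="distance")
knn.fit(X_train, y_train)

pred = knn.predict(X_test)
mae = mean_absolute_error(y_test, pred)
rmse = np.sqrt(mean_squared_error(y_test, pred))
print(f"MAE = {mae:.2f}, RMSE = {rmse:.2f}")
```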
Generally speaking, the training of a multilayer perceptron model involves two main steps: (i) the tuning of the hyperparameters and (ii) the validation of the model trained with the hyperparameters by using a subset of the input data [62,63]. In addition to these steps, to obtain a more accurate prediction of the test data set and thus also of the HRC value of EPO-Mg, we performed a "cleaning" of the input data for the algorithm. This data cleaning phase involves two pre-processing procedures consisting of correlation filtering and Z-score normalization. The correlation filtering allows for a reduction of the redundant columns of the input data of Table S1, which do not provide useful or necessary information for the algorithm and its accuracy. The correlation threshold was chosen as 70%: therefore, all the columns with a correlation value greater than this were not considered in the input data set for our model. Subsequently, the Z-score normalization of the input data was carried out by applying Eq. (2):

$$Z = \frac{x - \mu}{\sigma} \qquad (2)$$

where Z is the final normalized value, x is the original value, μ is the average value, and σ is the standard deviation. The Z-score normalization is very useful to handle outliers and has been shown in the literature to increase the accuracy of regression models [64]. After the cleaning process, the input data set appears as reported in Table S5, which is reduced to eight columns showing the values of M_w, V_van-der-Waals, δ_p, T_g, N, the entanglement molecular weight, THR, and char yield for the remaining polymers. The normalized input data set was used to predict the HRC value of EPO-Mg.

Fig. 5. Artificial neural network model architecture composed of three layers. M_w, molecular weight (g/mol); V_van-der-Waals, Van der Waals volume (mL/mol); V_m, molar volume (mL/mol); ρ, density (g/mL); δ_p, solubility parameter (MPa^1/2); e_coh, molar cohesive energy (J/mol); T_g, glass transition temperature (K); C_n, molar heat capacity (J/mol-K); N, index of refraction and entanglement; M_w, entanglement molecular weight (g/mol); THR, total heat release (kJ/g); HRC, heat release capacity (J/g-K).
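Both pre-processing steps translate to a few lines of pandas. The sketch below is illustrative (the 0.70 threshold comes from the text; the toy data are ours): it drops one column from each pair correlated above the threshold and then applies the Z-score of Eq. (2) column-wise.

```python
import numpy as np
import pandas as pd

def drop_correlated(df: pd.DataFrame, threshold: float = 0.70) -> pd.DataFrame:
    """Drop one column of every pair with |Pearson r| above the threshold."""
    corr = df.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return df.drop(columns=to_drop)

def zscore(df: pd.DataFrame) -> pd.DataFrame:
    """Eq. (2): Z = (x - mu) / sigma, applied column-wise."""
    return (df - df.mean()) / df.std()

# Toy stand-in for the Table S1 descriptor matrix:
data = pd.DataFrame({"Mw": [1, 2, 3, 4],
                     "Vm": [1.1, 2.0, 2.9, 4.2],   # nearly collinear with Mw
                     "Tg": [350, 420, 360, 410]})
cleaned = zscore(drop_correlated(data))
print(cleaned.columns.tolist())    # Vm is dropped, Mw and Tg are kept
```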
After the cleaning process, we performed a tuning phase mainly concerning the search for the hyperparameter values able to minimize both the root mean square error (RMSE) and the mean absolute error (MAE) of the algorithm [65,66]. The minimization of both errors involved the use of a "Parameter Optimization Loop" (POL). In the present research study, the POL was exploited for finding the best combination of hyperparameters (such as the "K" parameter of the K-NN algorithm and the internal hyperparameters of the ANN) that minimizes the global error when performing locally weighted regressions on each subset of the input data (extracted through the "X-Partitioner" node) [22]. As mentioned, two types of hyperparameters were tuned: "K" (i.e., the number of neighbors), which belongs to the K-NN algorithm, and the internal hyperparameters (e.g., the learning rate, the momentum, and the number of epochs) of the multilayer perceptron [67]. K equal to 9 was found to be the best value for our K-NN algorithm, as it gives the lowest RMSE and MAE. After this tuning phase, the HRC values for the test data set were predicted (Table S4), and the values of MAE and RMSE were evaluated at around 145.6 and 186.1, respectively. MAE and RMSE provide an insight into the average distance between the experimental values and the ones predicted by the model [65,66]. These values of MAE and RMSE confirm that our algorithm has a good predictive capability, as the predicted HRC values are close to the experimental ones. The trained algorithm provides a predicted HRC value for EPO-Mg of around 273 J/g-K (Table S6), which is quite similar to the experimental value (281 J/g-K), within the range of discrepancy (Table 3). The whole KNIME workflow employed in this research work to predict the HRC value is reported in Fig. S2.
Conclusions
In this work, we evaluated the fire behavior of a hybrid Mg(OH)2-epoxy nanocomposite and used the experimental data to model its heat release capacity. The epoxy nanocomposite exhibited a lower glass transition temperature than the unmodified resin, probably due to the presence of nanocrystals, which disturb the establishment of inter-chain interactions. Interestingly, despite the very low loading (~5 wt%) of the in-situ generated nanofiller, the Mg(OH)2 nanocrystals were able to significantly lower the fire and smoke parameters measured during cone calorimetry and pyrolysis combustion flow calorimetry tests, hence enhancing the overall flame retardant behavior of the epoxy network. Then, a machine learning approach was designed for building a model, based on simple data (i.e., training data), able to make predictions without being explicitly programmed to perform that function. In particular, the developed artificial neural network-based system was able to effectively predict the heat release capacity of the prepared Mg(OH)2-epoxy nanocomposite, with low error values (MAE and RMSE equal to 145.6 and 186.1, respectively). This research demonstrates that machine learning may be considered a valuable tool for predicting the fire performance of novel flame retardant polymer-based nanocomposites and for better designing their future functionalities.
Declaration of competing interest
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Aurelio Bifulco reports that financial support was provided by the Italian Ministry of Education and Research. Aurelio Bifulco reports a relationship with the University of Naples Federico II that includes employment.
Fig. 1. Overall procedure for the synthesis of hybrid Mg(OH)2-epoxy flame retardant nanocomposites. DGEBA: bisphenol A diglycidyl ether; APTS: 3-aminopropyltriethoxysilane; TEOS: tetraethyl orthosilicate. On the left, twinned Mg(OH)2 nanocrystals with a pseudo-hexagonal morphology and a few multisheet-silica nanoparticles are represented in blue and yellow, respectively. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)

Fig. 3. Photographs of the char residues obtained after cone calorimetry tests for (a) EPO and (b) its hybrid nanocomposite (EPO-Mg).

Fig. 4. Schematic representation of a generic artificial neural network with node-related weighted connections.

Fig. 6. Operating principles of a K-NN algorithm. Red triangles represent the training data nearest to the new, "never seen before" input data (green circle). These training data are called "local data" and are confined in specific K-regions. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)

CRediT authorship contribution statement. Aurelio Bifulco and Angelo Casciello: Conceptualization, Methodology, Formal analysis, Investigation, Validation, Writing - Original Draft. Giulio Malucelli: Conceptualization, Methodology, Supervision, Resources, Writing - Review and Editing. Sabyasachi Gaan and Stanislao Forte: Resources, Writing - Review and Editing. Antonio Aronne and Claudio Imparato: Methodology, Validation, Writing - Review and Editing. All authors commented on the final manuscript of this study.
Table 1. Results from cone calorimetry tests for unfilled epoxy and its hybrid nanocomposite.

Table 2. Smoke parameters from cone calorimetry tests for unfilled epoxy and its hybrid nanocomposite.

Table 3. Results from pyrolysis combustion flow calorimetry for unfilled epoxy and its hybrid nanocomposite.
"Materials Science",
"Engineering",
"Computer Science"
] |
Putative enhancer sites in the bovine genome are enriched with variants affecting complex traits
Background Enhancers are non-coding DNA sequences, which when they are bound by specific proteins increase the level of gene transcription. Enhancers activate unique gene expression patterns within cells of different types or under different conditions. Enhancers are key contributors to gene regulation, and causative variants that affect quantitative traits in humans and mice have been located in enhancer regions. However, in the bovine genome, enhancers as well as other regulatory elements are not yet well defined. In this paper, we sought to improve the annotation of bovine enhancer regions by using publicly available mammalian enhancer information. To test if the identified putative bovine enhancer regions are enriched with functional variants that affect milk production traits, we performed genome-wide association studies using imputed whole-genome sequence data followed by meta-analysis and enrichment analysis. Results We produced a library of candidate bovine enhancer regions by using publicly available bovine ChIP-Seq enhancer data in combination with enhancer data that were identified based on sequence homology with human and mouse enhancer databases. We found that imputed whole-genome sequence variants associated with milk production traits in 16,581 dairy cattle were enriched with enhancer regions that were marked by bovine-liver H3K4me3 and H3K27ac histone modifications from both permutation tests and gene set enrichment analysis. Enhancer regions that were identified based on sequence homology with human and mouse enhancer regions were not as strongly enriched with trait-associated sequence variants as the bovine ChIP-Seq candidate enhancer regions. The bovine ChIP-Seq enriched enhancer regions were located near genes and quantitative trait loci that are associated with pregnancy, growth, disease resistance, meat quality and quantity, and milk quality and quantity traits in dairy and beef cattle. Conclusions Our results suggest that sequence variants within enhancer regions that are located in bovine non-coding genomic regions contribute to the variation in complex traits. The level of enrichment was higher in bovine-specific enhancer regions that were identified by detecting histone modifications H3K4me3 and H3K27ac in bovine liver tissues than in enhancer regions identified by sequence homology with human and mouse data. These results highlight the need to use bovine-specific experimental data for the identification of enhancer regions. Electronic supplementary material The online version of this article (doi:10.1186/s12711-017-0331-4) contains supplementary material, which is available to authorized users.
Background
Genomic selection is a powerful tool that has rapidly accelerated genetic gains in the dairy industry [1].
Genomic estimated breeding values (GEBV) for ranking selection candidates are calculated as the sum of the individual effects of genome-wide single nucleotide polymorphisms (SNPs). Genomic prediction for a given trait of interest would be most accurate if all causative variants that affect the trait were known and used in the prediction. For most complex traits, such as milk production in dairy cattle, very few causal variants are known [2], and therefore it is unlikely that the full set of causative variants is contained within the SNP panels used for routine evaluation. The task of identifying causative variants for complex traits is challenging, since it is likely that a very large number of causative mutations with small effects contribute to the total genomic variation of the trait [3].
Recent research has indicated that much of the variation that affects complex traits lies in the non-coding genome [2], particularly transcriptional regulatory elements. Enhancers, which are also called locus control regions (LCR) or upstream activating sequences (UAS) [4], are non-coding DNA sequences, which when they are bound by specific proteins, enhance the transcriptional level of a related gene or set of genes [4]. To date, the identification of genomic regulatory elements including enhancers has followed two main approaches. Firstly, evolutionarily conserved non-coding sequences between mammalian species or higher vertebrates [5,6] have been used to identify the more conserved developmental enhancers [7,8]. Secondly, a more recent approach that uses chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-Seq) can detect candidate enhancers on a genome-wide scale. This approach uses antibodies to snapshot transcriptional proteins that are bound to DNA sequences in vivo, and has revealed a much larger number of candidate enhancers [4,9] than the previous approach, the majority of which were detected only in a specific physiological context [10]. Examples of biological signals that allow the identification of enhancers are mono-methylation of lysine 4 on histone H3 (H3K4me1) [11][12][13], p300-CBP coactivator protein family [14][15][16], tri-methylation of lysine 4 on histone H3 (H3K4me3) and acetylated lysine 27 on histone H3 (H3K27ac) [17][18][19]. The histone mark H3K4me3 displays a bimodal distribution that flanks the transcription start sites (TSS) of active or to be expressed genes in eukaryotes [20]. It is a prevalent histone mark for promoters [21,22] and is also found in the coding regions of genes [21,22], and occasionally it marks active enhancers [13,20,23,24]. The histone mark H3K27ac distinguishes active enhancers from poised enhancers at a tissue-specific level and in a developmental-specific manner [12,25]. It also marks active promoters [12] and displays broader profiles than H3K4me3, which is in line with its association with open chromatin [12,13]. The number of histone marks and co-occupation of other cellular elements collaboratively define the transcriptional state of a genomic region [20].
The aim of this study was to identify bovine enhancer regions in silico based on sequence homology with functional annotation data in other species in addition to bovine ChIP-Seq data. We evaluated the influence of mutations in enhancer regions on complex production traits by performing a multi-breed genome-wide association study (GWAS) with imputed whole-genome sequence data in 16,581 cattle followed by meta-analysis and enrichment analysis.
Mammalian enhancer sets
We used four public mammalian enhancer datasets to search for bovine putative enhancers, i.e. VISTA [26], FANTOM5 [27], dbSUPER [28] and the Villar et al. [29] study. The VISTA enhancer browser [26] selects evolutionarily ultra-conserved sequences between vertebrates and validates enhancer activities in transgenic mouse reporter assays [6]. The functional annotation of the mammalian genome 5 project (FANTOM5) [27] provides a repository of active enhancers from various human and mouse tissues. FANTOM5 enhancers are defined by bidirectional transcription signals at the 5′ end of the transcription start site (TSS) using single-molecule HeliScope cap analysis of gene expression (CAGE) [30]. The database dbSUPER collects data on super-enhancers from various human or mouse tissues across multiple studies [28]. A super-enhancer (also known as a stretch enhancer) is a group of active enhancers that are densely clustered in a 10 to 30-kb region and are highly associated with cell identity genes and disease-associated genomic variations [31]. We combined these sets of homologous enhancers with predicted bovine enhancers from Villar et al. [29], who used ChIP-Seq to detect, in the bovine genome, binding sites for H3K4me3 and H3K27ac [17-19].

Genotypes

The animals were genotyped with real or imputed high-density (800 K) genotypes [32], and most of the cows were from the 10,000 Holstein Cow Genomes Project and Jernomics Project [33]. Quality control and imputation were performed as described in [34], with an additional filter to retain only the SNPs that overlapped with sequence variants discovered in the 1000 Bull Genomes Project (run 4). The genotypes of all animals were then imputed to whole-genome sequence (WGS) using FImpute [35] with a reference population of 1147 individuals with whole-genome sequences from the 1000 Bull Genomes Project (run 4). After imputation, 28,899,038 WGS variants were available. All genomic loci were mapped to the bovine genome assembly UMD3.1 (bostau6) [36].
Phenotypes
Phenotypes for the genotyped animals were available for milk production traits including fat yield (FY), milk yield (MY) and protein yield (PY) from the national dairy database operated by DataGene (Melbourne, Australia). The phenotypes used in the analyses were trait deviations (TD) for cows and daughter trait deviations (DTD) for bulls. TD were calculated based on cows' lactation records (three lactations on average) and corrected for known fixed effects as per DataGene routine evaluations from the April 2013 official breeding value run. DTD were generated from nationwide progeny test data collected on many bulls' daughters, and were corrected for known fixed effects such as herd, year and season. The animals used in our study were the same as or overlapped with those in previous publications [34,37,38].
Mapping bovine candidate enhancers
The human and mouse enhancer regions from VISTA, FANTOM5 and dbSUPER were mapped to the bovine reference genome assembly UMD3.1.1 via the command line applications Nucleotide Basic Local Alignment Search Tool (BLASTn) [43] (default settings, except that the e-value was set to 4 × 10^−17) and UCSC Batch Coordinate Conversion (liftOver) [40] (default settings), respectively. The BLASTn approach measures local sequence similarity to identify which query segments can be matched to different parts of the target genome [43]. The liftOver approach measures global sequence similarity, where the query sequence is optimised to the best matching location in the target genome, although the best matching location may be stretched out over a much longer region than the query sequence [40]. The BLASTn software returned specific genomic coordinates for mapped query segments, whereas the liftOver command application returned a mapped file for all the genomic coordinates that were found in the target genome, and an unmapped file for all the query sequences that were partially or fully unmapped. We considered all the returned queries with full or partial hits as mapped input sequences in BLASTn, and all the queries that were not marked as fully unmapped were considered as mapped input sequences in liftOver. The liftOver outputs were combined with the BLASTn results. All regions from the combined set that overlapped by more than one bp were merged into a longer, non-overlapping genomic interval. The bovine enhancer data from the ChIP-Seq H3K4me3 and H3K27ac signals [29] were each directly merged into a non-overlapping set.
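The merging step can be sketched in plain Python as follows; the coordinates are hypothetical, and touching-or-overlapping regions are fused here, which matches the more-than-one-bp rule up to the boundary case.

def merge_intervals(regions):
    """Merge (chrom, start, end) regions that overlap into longer, non-overlapping intervals."""
    merged = []
    for chrom, start, end in sorted(regions):
        if merged and merged[-1][0] == chrom and start <= merged[-1][2]:
            merged[-1][2] = max(merged[-1][2], end)  # extend the previous interval
        else:
            merged.append([chrom, start, end])
    return [tuple(r) for r in merged]

# hypothetical BLASTn + liftOver hits on bovine chromosome 14
hits = [("chr14", 1795000, 1797500), ("chr14", 1797000, 1799000),
        ("chr14", 2500000, 2501000)]
print(merge_intervals(hits))  # the first two hits fuse into one interval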
Genome-wide association study
A multi-breed genome-wide association study (GWAS) was performed to detect imputed WGS variants that were associated with FY, MY and PY. Following the approach described by [37], Holstein and Jersey data were combined, but the analyses were separated by gender, because phenotype measurements in bulls and cows have different degrees of uncertainty [34]. The efficient mixed-model association expedited (EMMAX) analysis software package [44] was used to fit the 28,899,038 WGS variants one by one in the linear mixed model

Y = Wω + Xβ + Zμ + e,

where Y is a vector of phenotypes (DTD for bulls and TD for cows); W is the design matrix that allocates phenotypes to fixed effects accounting for the overall mean and breeds; ω is a vector of fixed effect solutions; X is a vector of animal genotypes; β is a vector of genotype effects; Z is a matrix that allocates phenotypic records to animals; μ is a vector of polygenic breeding values fitted as a random effect and assumed to follow a normal distribution N(0, Gσ²_g), where σ²_g is the genetic variance of the trait and G is the genomic relationship matrix calculated from the 800 K genotypes as in [45]; and e is a vector of residual errors distributed N(0, Iσ²_e), where σ²_e is the error variance. The polygenic breeding values were included in the model to avoid false positive SNP effects due to population structure and sub-structure [44].
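The per-SNP test can be illustrated with a simplified generalized-least-squares sketch (NumPy); this is not the EMMAX implementation (EMMAX estimates the variance components once and reuses them, mimicked here by passing fixed sg2 and se2), and the toy genotypes, relationship matrix and design matrix are placeholders.

import numpy as np

def gls_snp_scan(y, X_snps, G, sg2, se2, W):
    """Fit each SNP in y = W*omega + x*beta + u + e by whitening with the
    assumed phenotypic covariance V = sg2*G + se2*I, then ordinary least squares."""
    V = sg2 * G + se2 * np.eye(len(y))
    L = np.linalg.cholesky(V)
    yt = np.linalg.solve(L, y)                  # whitened data, unit-variance noise
    Wt = np.linalg.solve(L, W)
    betas, ses = [], []
    for x in X_snps.T:                          # one genotype column at a time
        A = np.column_stack([Wt, np.linalg.solve(L, x)])
        coef, res, *_ = np.linalg.lstsq(A, yt, rcond=None)
        sigma2 = float(res[0]) / (len(y) - A.shape[1]) if res.size else 0.0
        se_b = np.sqrt(sigma2 * np.linalg.inv(A.T @ A)[-1, -1])
        betas.append(coef[-1]); ses.append(se_b)
    return np.array(betas), np.array(ses)

rng = np.random.default_rng(0)
n = 50
G = np.eye(n)                                   # placeholder relationship matrix
y = rng.normal(size=n)
snps = rng.integers(0, 3, size=(n, 3)).astype(float)
W = np.ones((n, 1))                             # overall mean (plus breed, in the paper)
b, s = gls_snp_scan(y, snps, G, sg2=0.5, se2=0.5, W=W)
print(b.round(3), s.round(3))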
Meta-analysis
The multi-breed GWAS results from bulls and cows were combined using an inverse-variance weighted meta-analysis within a fixed effect model, as described by [46].
We did not perform a joint analysis, since the inclusion of different accuracies for the phenotypes of bulls and cows was not possible in EMMAX. For the inverse-variance weighted meta-analysis, the following quantities were calculated, where throughout i indicates the SNP at position i and j indicates the gender cohort:

1. The standard error of the SNP effect:

se_ij = |β_ij| / |Q(p_ij/2)|, (1)

where se is the standard error of the SNP effect, β is the SNP effect output from EMMAX, Q is the quantile function of the standard normal distribution and p is the GWAS P value output from EMMAX.

2. The inverse-variance weight for each SNP:

w_ij = 1 / se_ij², (2)

where w is the inverse-variance weight and se is the standard error of the SNP effect calculated from Eq. (1).

3. The inverse-variance weighted effect for each SNP:

β̃_ij = w_ij β_ij, (3)

where β̃ is the weighted effect and β is the SNP effect output from EMMAX.

4. The SNP effect from the meta-analysis that combines gender cohorts:

β̄_i = Σ_j β̃_ij / Σ_j w_ij, (4)

where β̄ is the SNP effect from the meta-analysis, β̃ is the weighted effect calculated from Eq. (3), and w is the weight calculated from Eq. (2).

5. The variance of the SNP effect from the meta-analysis that combines gender cohorts:

ṽ_i = n / Σ_j w_ij, (5)

where ṽ is the variance of the SNP effect from the meta-analysis, n is the number of cohorts being combined in the meta-analysis (here, n = 2 because the bull and cow cohorts were combined), and w is the weight calculated from Eq. (2).

6. The P value from the meta-analysis that combines gender cohorts:

p_i = 2[1 − F(|β̄_i| / √ṽ_i)], (6)

where p is the P value output from the meta-analysis, F is the cumulative distribution function of the standard normal distribution, β̄ is the SNP effect from the meta-analysis calculated from Eq. (4), and ṽ is the variance of the SNP effect from the meta-analysis calculated from Eq. (5).
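A compact Python sketch of this fixed-effect meta-analysis, assuming SciPy; the per-cohort values are hypothetical. Note that the combined-effect variance below uses the textbook fixed-effect form 1/Σw rather than the n/Σw form of Eq. (5) reconstructed above, which is specific to this study's formulation.

import numpy as np
from scipy.stats import norm

def meta_analyse(betas, pvals):
    """Inverse-variance fixed-effect meta-analysis of one SNP across cohorts
    (here, bulls and cows), starting from per-cohort effects and P values."""
    betas, pvals = np.asarray(betas, float), np.asarray(pvals, float)
    se = np.abs(betas) / np.abs(norm.ppf(pvals / 2))   # Eq. (1): SE from beta and P
    w = 1.0 / se**2                                    # Eq. (2): inverse-variance weights
    beta_meta = np.sum(w * betas) / np.sum(w)          # Eqs. (3)-(4): weighted mean effect
    var_meta = 1.0 / np.sum(w)                         # standard fixed-effect variance
    z = beta_meta / np.sqrt(var_meta)
    p_meta = 2 * (1 - norm.cdf(abs(z)))                # Eq. (6): two-sided P value
    return beta_meta, var_meta, p_meta

print(meta_analyse([0.12, 0.15], [1e-6, 3e-4]))        # hypothetical bull and cow results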
Variants with no effect or with a P value of 1 were removed from the downstream analysis. Of the 28,899,038 imputed WGS variants input for each trait, 23,462,606 variants remained after the meta-analysis for the FY cohort, 23,462,606 for the MY cohort and 23,470,573 for the PY cohort. Significant variants from the meta-analysis were selected using the same threshold as in the GWAS (P ≤ 10^−8).
Enrichment analysis
The bovine candidate enhancers were categorised into five enhancer sets based on their input databases: VISTA, FANTOM5, dbSUPER, Villar H3K4me3 or Villar H3K27ac. Two enrichment analyses, i.e. a permutation test and a gene set enrichment analysis (GSEA), were performed to examine whether any of the bovine candidate enhancer sets were enriched with variants associated with FY, MY or PY. The permutation test compared the number of significant SNPs in an enhancer set with a null distribution sampled from the rest of the genome. However, the need for a predefined threshold of statistical significance in the permutation tests may result in not detecting relevant biological differences that are modest relative to the noise inherent to the data [47]. This insensitivity of the permutation test was partly overcome by GSEA, which considered the distribution of all effects and tested whether the SNPs in an enhancer set were responsible for the enrichment signal, without applying any significance threshold [47].
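In outline, the permutation test can be sketched as follows (NumPy; the significance and set-membership flags are simulated placeholders, and the rank-based P value mirrors the R/10,001 rule described in the next paragraph).

import numpy as np

def permutation_enrichment(is_sig, in_set, n_perm=10000, seed=1):
    """Fold change and rank-based P value for the number of significant SNPs
    in an annotation set, versus random draws of the same size from all SNPs."""
    rng = np.random.default_rng(seed)
    n_set = int(in_set.sum())
    n_s = int((is_sig & in_set).sum())             # observed significant SNPs in the set
    draws = np.array([is_sig[rng.choice(len(is_sig), n_set, replace=False)].sum()
                      for _ in range(n_perm)])
    fold = n_s / max(draws.mean(), 1e-12)
    rank = int((draws >= n_s).sum())               # how often chance does at least as well
    p = "<0.0001" if rank == 0 else rank / (n_perm + 1)
    return fold, p

# hypothetical flags: significance at P <= 1e-8 and membership of an enhancer set
sig = np.random.default_rng(0).random(100000) < 0.001
enh = np.random.default_rng(2).random(100000) < 0.01
print(permutation_enrichment(sig, enh, n_perm=1000))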
The permutation test was run for 10,000 random repeats to test whether the number of significant SNPs in an enhancer set was significantly larger than that obtained by random chance. The number of SNPs in an enhancer set, the number of significant SNPs in the set, and the number of significant SNPs in a random draw of the same size are denoted as N_E, n_s and m_s, respectively. In each permutation, a significant SNP was determined by a global P value cut-off of P ≤ 10^−8. The fold change of the enrichment was defined as the ratio of n_s to the mean of all m_s in the random samples. The ranking position of n_s within the distribution of all m_s over all random samples, denoted as R, was determined, and a P value to test the significance of the ranking was computed. If n_s was the largest among all m_s, the P value was set to <0.0001; otherwise it was R/10,001. Our permutation tests resulted in 15 independent analyses (3 phenotypes × 5 enhancer databases). The GSEA statistic was the cumulative sum of the effects of SNPs in putative enhancers, computed from the sorted list of all SNP effects. Here, the effect was assessed by −log10(P value). At each point in the sorted list, the GSEA test statistic ES was computed as

ES(j) = P_hit(j) − P_miss(j),

where j is the position of the effect of an enhancer SNP in the sorted list of all SNP effects, and P_hit and P_miss are respectively the cumulative probability of observing all enhancer SNPs and all non-enhancer SNPs up to position j; thus ES denotes the level of enrichment of enhancer SNPs up to position j. The position at which ES reaches its maximum deviation from 0, ES_max, defines the strength of the enrichment signal in the enhancer set. All enhancer SNPs that are identified before ES reaches ES_max are assigned to the candidate core enhancer set.
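A sketch of this unweighted running-sum statistic (NumPy), with hypothetical −log10(P) scores; ES_max is taken at the maximum absolute deviation from 0, and the core set collects the enhancer SNPs encountered before that point.

import numpy as np

def gsea_es(neglog_p, in_set):
    """Running enrichment score over SNPs sorted by -log10(P), descending.
    Returns ES at its maximum deviation and the indices of the core SNPs."""
    order = np.argsort(neglog_p)[::-1]
    member = in_set[order]
    p_hit = np.cumsum(member) / member.sum()        # cumulative fraction of set SNPs seen
    p_miss = np.cumsum(~member) / (~member).sum()   # cumulative fraction of other SNPs
    es = p_hit - p_miss
    i_max = int(np.argmax(np.abs(es)))
    core = order[:i_max + 1][member[:i_max + 1]]    # set SNPs seen before ES_max
    return es[i_max], core

scores = np.random.default_rng(0).exponential(1.0, 5000)   # hypothetical -log10(P) values
flags = np.random.default_rng(1).random(5000) < 0.02
print(gsea_es(scores, flags)[0])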
The significance of each GSEA was determined in a similar way as for the permutation test described above. We randomly shuffled the SNPs within the sorted list, while retaining the sorted positions of the −log10(P values), and recalculated the ES_max value. The shuffle was repeated 10,000 times and 10,000 ES_NULL values were obtained. A GSEA result was considered significant if the ES_max value was larger than all ES_NULL values. Our GSEA resulted in 15 independent analyses (3 phenotypes × 5 enhancer databases), but the sets of core enhancer SNPs were taken only from the significant GSEA cohorts.
Mapping bovine candidate enhancers
Two aligners, BLASTn and liftOver, were used to map human and mouse enhancers onto the bovine genome (Table 1). All sets of bovine putative enhancer regions covered bovine chromosomes 1 to 30. The bovine reference genome assemblies bostau6 and bostau8 do not include chromosome Y. The mapping rate was defined as the ratio between the number of query sequences found in the bovine genome and the number of query sequences input for the search. Cross-species mapping rates were, in decreasing order, 96% for VISTA, 92% for dbSUPER and 87% for FANTOM5. The number of overlaps between the BLASTn and liftOver results was small for FANTOM5 (<10%), moderate for dbSUPER (16%) and high for VISTA (71%). Over 93 and over 95% of the dbSUPER hits were within 10 and 30 kb of each other, respectively. As expected, homologous enhancer sequences were on average shorter than their respective query sequences (Table 1). A pairwise comparison was performed to examine the degree of overlap between the sets of bovine putative enhancer regions (Fig. 1). Villar H3K27ac and dbSUPER were the two major enhancer sets, because the Villar H3K27ac set covered 82% of the Villar H3K4me3 bovine genomic intervals, and the dbSUPER set covered 71% of the VISTA and 52% of the FANTOM5 bovine genomic intervals (Table 2; Fig. 1). However, the Villar H3K27ac and dbSUPER sets differed substantially (less than 5% overlap; Table 2; Fig. 1).
Given that enhancers are highly tissue-specific, we compared only liver-specific enhancers from homologous enhancer sets and Villar ChIP-Seq enhancer set. Only eight VISTA enhancers were liver-specific, which generated 236 bovine putative liver enhancers. We could not determine from which tissue FANTOM5 sequences originated. No liver data was available in the dbSUPER database. The bovine putative VISTA-liver enhancers overlapped very little with the bovine-liver H3K27ac (27%) and did not overlap at all with the bovine-liver H3K4me3 enhancers (Fig. 2).
Genome-wide association study
The number of significant variants (P ≤ 10^−8) for each trait is given in Table 3. Bulls and cows demonstrated similar GWAS profiles for the respective phenotype cohorts (Fig. 3).
Meta-analysis
The meta-analysis recovered 92% of the significant variants from the GWAS, and revealed additional variants that were not significant in the separate bull and cow GWAS (Table 3) and [see Additional file 1: Figure S1]. Significant variants were clustered on chromosomes 5, 14 and 27 for FY, chromosomes 5, 6, 14, 15 and 20 for MY, and chromosomes 5, 6, 11, 12, 14 and 16 for PY (Fig. 3). The degree of overlap between the significant variants in the different bovine enhancer sets is shown in Table 4 and Fig. 4. The significant variants in the Villar H3K27ac and dbSUPER enhancer sets differed substantially, with less than 2% of the dbSUPER and less than 0.1% of the H3K27ac significant variants being shared (Table 4; Fig. 4).
Enrichment analysis
A permutation test with 10,000 repeats was performed to examine whether the enhancer sets were enriched with genome-wide significant variants. Only the Villar H3K4me3 and H3K27ac enhancers demonstrated significant enrichment across all phenotypes (P ≤ 10^−8), whereas the homology-based enhancers did not show such a high level of enrichment with significant variants associated with milk production traits (Table 5; Fig. 5). Since dbSUPER comprises clusters of enhancers, we expanded the length of the dbSUPER putative bovine enhancers, such that any sequences that were within 30 kb of each other were merged into a single longer enhancer sequence. The permutation test was then applied to the expanded dbSUPER enhancer sequences, but the enrichment signal remained low.
Only the bovine-specific Villar H3K4me3 and H3K27ac enhancer sets demonstrated high levels of enrichment in GSEA, whereas the homology-based enhancer sets all showed low levels of enrichment. Around 29% of the SNPs in the Villar H3K4me3 enhancer set and 35% of the SNPs in the H3K27ac enhancer set accounted for the enrichment signals in milk production traits [see Additional file 4: Table S1]. These GSEA core enhancer SNPs were located across all the chromosomes, regardless of the phenotype cohorts or histone modification signals. The core H3K4me3 SNPs were located, in decreasing order of frequency, within intronic, upstream, intergenic, 5′-UTR, downstream, 3′-UTR, splicing, non-coding exonic and stop regions (Figs. 6, 7), whereas the core H3K27ac SNPs followed a slightly different order, i.e. intergenic, intronic, upstream, downstream, 5′-UTR, 3′-UTR, splicing, non-coding exonic and stop regions (Figs. 6, 7).

Table 2 Degree of overlap between the sets of bovine enhancers analysed. Each value in the table represents the ratio, expressed as a percentage, of the total overlapping base pairs between the two enhancer sets listed in a row and column, relative to the total number of base pairs in the enhancer set listed in the corresponding row. a VISTA is a database of evolutionarily ultra-conserved sequences between vertebrates. b FANTOM5 is a database of active enhancers from various human and mouse tissues. c dbSUPER is a database of super-enhancers from various human or mouse tissues across multiple studies. d H3K27ac represents the dataset from the Villar et al. [29] study, which used ChIP-Seq profiling to detect the regions of the bovine genome that carried the histone modification signal H3K27ac in liver tissues from four bulls. e H3K4me3 represents the dataset from the Villar et al. [29] study, which used ChIP-Seq profiling to detect the regions of the bovine genome that carried the histone modification signal H3K4me3 in liver tissues from four bulls.

To demonstrate the power of GSEA over the permutation test, we examined the relationship between the P value threshold and SNP location. We found that the SNPs located close to a gene tended to be more significant than their counterparts in intergenic regions. Most H3K27ac-specific SNPs were intergenic, whereas H3K4me3-specific SNPs were located in the vicinity of transcription start sites (TSS) [see Additional file 2: Figure S2 and Additional file 3: Figure S3]. As a result, the H3K4me3-specific SNPs tended to show a higher level of enrichment in the permutation test [see Additional file 4: Table S1]. However, the GSEA analysis revealed that more than 82% of the core SNPs responsible for the GSEA signal in the H3K4me3 set were also in the H3K27ac set, whereas more than 74% of the core SNPs in the H3K27ac set were not in the H3K4me3 set [see Additional file 5: Table S2]. This means that the H3K27ac-specific SNPs contributed some additional enrichment signal, although their P values did not pass the P ≤ 10^−8 threshold.
Discussion
The first goal of this study was to identify and improve the annotation of enhancer regions in the bovine genome. To create a library of bovine enhancers, we used publicly available human and mouse enhancer databases from VISTA, FANTOM5 and dbSUPER, along with the bovine enhancer data that were detected by ChIP-Seq in the Villar et al. [29] study. VISTA contains ultra-conserved developmental enhancer sequences, of which more than 96% were mapped to the bovine genome (Table 1). For dbSUPER, more than 92% of the sequences were mapped to the bovine genome (Table 1), probably because it contains long genomic sequences from clusters of closely located enhancers, which increases their chances of being mapped. The FANTOM5 data comprise very short sequences that were mapped very sparsely to the bovine genome when searched by sequence similarity in BLASTn (9.15%; Table 1), but were well recovered by liftOver, which uses information from whole-genome comparisons to tolerate more frequent changes between query and target sequences (88.77%; Table 1). We exploited homologous mammalian enhancer data to identify bovine enhancers, and the results are in agreement with previous findings [25,29,48-50] showing that enhancer sequences, particularly short and function-specific enhancers, are poorly conserved across species.
The second goal of this study was to validate our candidate bovine enhancer sites. We used a multi-breed GWAS followed by a meta-analysis and enrichment analysis to examine whether significant variants associated with milk production traits from the meta-analyses were enriched in the bovine putative enhancer sets. The genome-wide significant variants that were detected by this procedure are located in genes that affect milk production traits in cattle, in novel candidate genes, and in our candidate bovine enhancer sets. Both the permutation test and GSEA showed that only the Villar H3K4me3 and Villar H3K27ac predicted enhancer regions were significantly enriched with SNPs that are associated with the complex traits analysed here. The Villar H3K4me3 and H3K27ac enhancer sets were respectively 2.0 to 3.0-fold and 1.3 to 1.5-fold more enriched with variants that affect milk production traits than the rest of the genome (Table 5). Furthermore, the results of the permutation test and GSEA showed that the enriched H3K4me3 SNPs had significant effects within narrow genomic intervals close to genes. In addition, we observed that, in general, the H3K27ac enhancer regions encompassed the H3K4me3 enhancer regions, but that most of the signals in the H3K27ac enhancer regions were located far from genes and had small but significant effects. This finding is in line with the existing literature, which reports that the H3K4me3 enhancer regions display sharper peaks around TSS [51], that the H3K27ac enhancer regions cover broader domains that are roughly equally distributed between intergenic and intronic regions [12], and that the proportion of SNPs at TSS reaching a significance level of −log10(P value) higher than 10 is 50 to 100 times greater than that of SNPs in intergenic regions [52]. Our analysis did not show enrichment for any production trait in any of the homology-based enhancer sets from VISTA, FANTOM5 and dbSUPER. There are two possible reasons for this finding. First, none of the VISTA, FANTOM5 and dbSUPER enhancer sets were sampled from a tissue that is directly linked to milk production (an example of such a tissue is the lactating mammary gland). Therefore, the homology-based enhancers that are relevant to milk production may not be present in our downloaded databases and therefore could not be considered in this study. Second, although VISTA, FANTOM5 and dbSUPER may contain sets from tissues that are involved in the physiological processes that are fundamental for the regulation of milk production, the procedure to map these sequences to the bovine genome is based on the identification of sequences conserved with human and mouse, and as a result, the bovine-specific mutations within the homology-based enhancers cannot be captured [53]. Our results support the hypothesis of a rapid evolution of enhancer sequences, since the bovine-specific liver enhancer regions differed substantially from all homology-based liver enhancer regions (Fig. 2), which suggests that bovine-specific enhancers are more likely to be enriched with causative mutations that affect complex traits, in this case milk production. Our results, combined with the above reasons, highlight the complexity of the genomic regulatory machinery and the importance of analysing enhancers specific to the species under investigation [4].

Fig. 4 Degree of overlap between significant imputed WGS SNPs (P ≤ 10^−8) in bovine enhancer sets.
The success of this study, which was based on regulatory landscape data from one tissue type (liver) and two epigenetic marks (H3K4me3 and H3K27ac), indicates that our results might be even more convincing if data from more tissue types were available.
On chromosome 14, the observed enrichment signal in enhancer regions could be due to SNPs in linkage disequilibrium (LD) with the well-known mutation in the DGAT1 gene [54]. To account for LD confounding around the DGAT1 mutation, we re-ran our GWAS on chromosome 14 while correcting for the effect of the DGAT1 gene, by including the causative mutation in the model as a fixed effect. The correlations of the SNP effects (P values) between before and after the correction were 85% (59%), which showed that there were other significant SNPs on chromosome 14 apart from the DGAT1 mutation. After correction, no significant SNPs remained in the VISTA and FANTOM5 enhancer sets for any of the milk production traits, but 34 to 67% of the significant SNPs remained in the Villar H3K4me3, Villar H3K27ac and dbSUPER enhancer sets [see Additional file 6: Table S3]. The SNPs that remained significant after the correction in the putative enhancer sets were located in regions up to 10 Mb around the DGAT1 gene. In addition, while the Villar H3K4me3 and dbSUPER enhancer sets had no corrected significant variants within the DGAT1 gene, the Villar H3K27ac enhancer set included one such significant variant (Chr14:1797137 in the FY and MY cohorts) in the first intron of the DGAT1 gene. Several candidate regulatory variants that affect the expression of MGST1 have been reported to be responsible for the QTL effect on chromosome 5 for milk production traits [38,55]. We found that they were within or close to the Villar H3K4me3 and H3K27ac enhancer regions, which provides evidence that the causal mutation is in fact a regulatory variant [see Additional file 7: Table S4 and Additional file 8: Figure S4].

Table 5 Enrichment of significant enhancer SNPs (P ≤ 10^−8) for milk production traits in the permutation tests. a Fold change is the ratio between the actual number of significant SNPs in an enhancer set and the mean number of all significant SNPs in the 10,000 random samples. b Ranking position of the actual number of significant SNPs in an enhancer set within the distribution of all the numbers of significant SNPs for the 10,000 random samples; if the actual number of significant SNPs was the largest among all the numbers of the 10,000 random significant SNPs, the rank was set to <0.0001; otherwise it was denoted as the ranking position of the actual number of significant SNPs among the numbers of random significant SNPs.
Several studies have reported that the variant Chr6:88741762 is significantly associated with milk production traits [38]. This variant was significant in both our MY and PY cohorts, within the H3K27ac set, and is located 2470 bp upstream of the GC gene. An RNA-Seq analysis [56] showed that GC was most highly expressed in the liver and over-expressed in the mammary gland, and that there was a strong allele-specific expression in liver compared to 17 other bovine tissues [see Additional file 8: Figure S4].
Conclusions
This study used mammalian enhancer prediction data and bovine trait associations to provide a functional variance analysis of candidate bovine enhancer regions. Overall, our findings agree with previous research showing that enhancer sequences are species-specific and rarely conserved across species. We conclude that bovine-specific histone data, such as H3K4me3 and H3K27ac, are essential for the successful functional annotation of bovine enhancer regions. Although the amount of bovine enhancer information is limited, we have successfully identified many genomic regions as potential enhancers and demonstrated that variation in these regions is associated with variation in animal production traits. Future studies will benefit from the combination of information from topological association domains, expression quantitative trait loci and bovine ChIP-Seq data, such as that generated by the Functional Annotation of Animal Genomes (FAANG) project.

Fig. 5 The histograms represent the number of significant variants in random samplings; if an analysis was significant, the vertical line lies to the right of the histogram and is clearly separated from it.
"Biology",
"Agricultural And Food Sciences"
] |
Optimal control for cancer treatment mathematical model using Atangana–Baleanu–Caputo fractional derivative
In this work, optimal control for a fractional-order nonlinear mathematical model of cancer treatment is presented. The suggested model is determined by a system of eighteen fractional differential equations. The fractional derivative is defined in the Atangana–Baleanu Caputo sense. Necessary conditions for the control problem are derived. Two control variables are suggested to minimize the number of cancer cells. Two numerical methods are used for simulating the proposed optimal system. The methods are the iterative optimal control method and the nonstandard two-step Lagrange interpolation method. In order to validate the theoretical results, numerical simulations and comparative studies are given.
In [21] an interesting mathematical model for cancer treatment is presented. This model is governed by a system of eighteen differential equations. The first aim of this paper is to develop this model in order to control the cancer cells. In [22], optimal control of a fractional-order delay model for cancer treatment is presented. Here the fractional-order derivative is defined in the Caputo sense.
Applications of fractional calculus have increased in the last few decades, after centuries of small advancements. Examples can be found in a variety of scientific areas: engineering, biology and epidemiology, amongst others [23-37]. In most cases, fractional-order differential equation (FODE) models seem more consistent with the real phenomena than integer-order models. This is due to the fact that fractional derivatives and integrals enable the description of the memory and hereditary properties inherent in various materials and processes that exist in most biological systems.
In [14-16,38], some fractional optimal control problems (FOCPs) have been introduced. Sweilam and Al-Mekhlafi studied the optimal control of several biological models in [22,30,39-42]. In [2], Torres et al. introduced and analyzed a multiobjective formulation of an optimal control problem, where the two conflicting objectives are the minimization of the number of HIV-infected individuals with AIDS clinical symptoms and co-infected with AIDS and active TB, and the costs related to the prevention and treatment of HIV and/or TB measures. More recently, a modified Caputo fractional derivative was defined in the Atangana-Baleanu-Caputo (ABC) sense by introducing a generalized Mittag-Leffler function as the nonlocal and non-singular kernel [43]. These new types of derivatives have been used in the modeling of real-life applications in different fields [44,45]. In [46-49], necessary optimality conditions for FOCPs are obtained in the Riemann-Liouville sense and studied numerically by a finite difference method. In [50], a spectral method is developed for a distributed-order fractional optimal control problem. In [51], Baleanu et al. used a central difference scheme for solving FOCPs.
In this paper, we introduce a fractional mathematical model without singular kernel for a cancer treatment model with modified parameters [52]. The minimization of the tumor cells in the FOCP for the proposed model is the aim of this article. Two numerical techniques are introduced to study the nonlinear FOCP: the iterative optimal control method (IOCM) [22,30,42] and the nonstandard two-step Lagrange interpolation method (NS2LIM), which is presented here as an adaptation of the two-step Lagrange interpolation method. Numerical simulations are given. To the best of our knowledge, fractional optimal control without singular kernel for cancer treatment based on the synergy between anti-angiogenic and immune cell therapies has never been explored before. This paper is organized as follows: the fractional-order model with two controls is given in Sect. 2; in Sect. 3, the optimality conditions are derived; in Sect. 4, numerical methods for FOCPs are presented; in Sect. 5, numerical experiments and simulations are presented; finally, the conclusions are given in Sect. 6.
The model problem
In the following, the cancer treatment fractional model, based on the synergy between immune cell therapies and an anti-angiogenic method, is presented with modified parameters. It is important to notice that all the parameters here depend on the fractional order α, as an extension of the integer-order model given in [21]. The model consists of eighteen time-dependent variables. Two control variables, u_M(t) and u_A(t), are given for measuring the immunotherapy and the anti-angiogenic therapy, respectively. The variables can be identified as follows: • T(t): Number of cancer cells.
• U(t): Number of mature unlicensed dendritic cells.
• D(t): Number of mature licensed dendritic cells.
• A_E(t): Number of activating/proliferating effector memory CD8+ T cells.
• E(t): Number of activated effector memory CD8+ T cells.
• A_H(t): Number of activating/proliferating memory helper CD4+ T cells.
• H(t): Number of activated memory helper CD4+ T cells.
• A_R(t): Number of activating/proliferating regulatory T cells.
• R(t): Number of activated regulatory T cells.
The parameters of the model are described in [21,53,54]. The new system is described by eighteen fractional-order differential equations, Eqs. (1)-(18), in which the ABC fractional derivative of order α of each of the eighteen state variables is set equal to its corresponding right-hand side, obtained from the integer-order model of [21] with the two controls u_M(t) and u_A(t) included. The parameters ω_1^α and ω_2^α are the weight factors.
The FOCPs
Consider the state system (1)-(18) in R^18, and let Ω be the admissible control set. The objective functional is defined as

J(u_M, u_A) = ∫_0^{T_f} [A T(t) + B_1 u_M^2(t) + C u_A^2(t)] dt, (19)

where A is the weight constant of the cancer cell numbers, B_1 is the weight constant of the immunotherapy and C is the weight constant of the anti-angiogenic therapy. Now, the aim is to minimize the objective functional (19):

min_{(u_M, u_A) ∈ Ω} J(u_M, u_A), (20)

subject to the constraints given by the ABC state system (1)-(18), with the given initial conditions for the eighteen state variables. The modified objective functional is defined as follows [30]:

J̄ = ∫_0^{T_f} [H(x, u_M, u_A, λ) − Σ_{j=1}^{18} λ_j ABC_0 D_t^α x_j(t)] dt, (21)

where the Hamiltonian is given as

H = A T + B_1 u_M^2 + C u_A^2 + Σ_{j=1}^{18} λ_j g_j(x, u_M, u_A), (22)

with x = (T, U, D, A_E, E, A_H, H, A_R, R, ...) denoting the vector of the eighteen state variables, g_j the right-hand side of the j-th state equation, and λ_j, j = 1, 2, 3, ..., 18, the Lagrange multipliers. From (21) and (22), the necessary optimality conditions, including the co-state equations in the ABC sense, are obtained.
Existence of an optimal control pair
The existence of the optimal control pair can be directly obtained using the results in Fleming and Rishel [55] and Lukes [56]; more precisely, we have the following theorem.
Theorem 3.2 There exists an optimal control pair (u_M*, u_A*) ∈ Ω that minimizes the objective functional (19).
Proof To prove the existence of an optimal control, we use the result in [56]. Note that the control and the state variables are nonnegative. In this minimizing problem, the necessary convexity of the objective functional in u_A and u_M is satisfied. The set of all the control variables (u_M, u_A) ∈ Ω is also convex and closed by definition. The optimal system is bounded, which gives the compactness needed for the existence of the optimal control. In addition, the integrand in functional (19), A T + B_1 u_M^2 + C u_A^2, is convex on the control set Ω. Also, we can claim that there exist a constant μ > 1 and positive numbers c_1, c_2 such that

A T + B_1 u_M^2 + C u_A^2 ≥ c_1 (|u_M|^2 + |u_A|^2)^{μ/2} − c_2,

because the state variables are bounded; this completes the proof of the existence of an optimal control.
Nonstandard two-step Lagrange interpolation method
For simplicity, consider the FODEs in the following general form:

ABC_0 D_t^α y(t) = f(t, y(t)). (68)

The Atangana-Baleanu fractional-order derivative in the Caputo sense is given as follows [43]:

ABC_a D_t^α y(t) = (M(α)/(1 − α)) ∫_a^t y′(x) E_α(−α (t − x)^α/(1 − α)) dx, (69)

where M(α) = 1 − α + α/Γ(α) is the normalization function and E_α is the Mittag-Leffler function.
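For illustration, the Mittag-Leffler function and the non-singular ABC kernel in (69) can be evaluated with a truncated series in Python; the truncation length is an assumption, and this naive series is adequate for moderate arguments only (dedicated algorithms are needed for large |z|).

import math

def mittag_leffler(z, alpha, n_terms=80):
    """Truncated series E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1)."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(n_terms))

def abc_kernel(t, x, alpha):
    """Non-singular kernel E_alpha(-alpha*(t - x)^alpha / (1 - alpha)) of the ABC derivative."""
    return mittag_leffler(-alpha * (t - x)**alpha / (1 - alpha), alpha)

print(abc_kernel(0.5, 0.3, 0.96))   # kernel value for a sample (t, x) pair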
Thanks to the fundamental theorem of fractional calculus applied to (69), we have

y(t) − y(0) = ((1 − α)/M(α)) f(t, y(t)) + (α/(M(α)Γ(α))) ∫_0^t f(s, y(s)) (t − s)^{α−1} ds, (70)

and at t = t_{n+1},

y(t_{n+1}) = y(0) + ((1 − α)/M(α)) f(t_n, y(t_n)) + (α/(M(α)Γ(α))) Σ_{k=0}^{n} ∫_{t_k}^{t_{k+1}} f(s, y(s)) (t_{n+1} − s)^{α−1} ds.

The two-step Lagrange interpolation of f(s, y(s)) on the interval [t_k, t_{k+1}] is given as follows:

P_k(s) ≅ (f(t_k, y_k)/h)(s − t_{k−1}) − (f(t_{k−1}, y_{k−1})/h)(s − t_k). (71)

Equation (71) is substituted into (70) and, performing the same steps as in [57], we obtain

y_{n+1} = y_0 + ((1 − α)/M(α)) f(t_n, y_n) + (α/M(α)) Σ_{k=0}^{n} [ (h^α f(t_k, y_k)/Γ(α + 2)) ((n + 1 − k)^α (n − k + 2 + α) − (n − k)^α (n − k + 2 + 2α)) − (h^α f(t_{k−1}, y_{k−1})/Γ(α + 2)) ((n + 1 − k)^{α+1} − (n − k)^α (n − k + 1 + α)) ]. (72)

To obtain high stability [58], we use a simple modification of (72): the step size h is replaced by a denominator function φ(h) such that φ(h) = h + O(h²), 0 < φ(h) ≤ 1. For more details on NSFDMs see [40,59-62]. The nonstandard two-step Lagrange interpolation method (NS2LIM) is then given by Eq. (72) with the step size h replaced everywhere by φ(h). (73)
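A minimal scalar Python sketch of the scheme (72)-(73) as reconstructed above; the convention f(t_{-1}, y_{-1}) = f(t_0, y_0) for the first sum term and the choice φ(h) = 1 − e^{−h} are assumptions for illustration, and the toy test equation is not the eighteen-equation cancer model.

import math

def ns2lim(f, y0, alpha, T, n_steps, phi=lambda h: 1 - math.exp(-h)):
    """Nonstandard two-step Lagrange interpolation scheme for ABC_D^alpha y = f(t, y):
    the corrector (72) with the step h replaced by the denominator function phi(h)."""
    h = T / n_steps
    ph = phi(h)
    M = 1 - alpha + alpha / math.gamma(alpha)      # normalization M(alpha)
    g = math.gamma(alpha + 2)
    t = [k * h for k in range(n_steps + 1)]
    y = [y0]
    for n in range(n_steps):
        acc = 0.0
        for k in range(n + 1):
            a1 = ((n + 1 - k)**alpha * (n - k + 2 + alpha)
                  - (n - k)**alpha * (n - k + 2 + 2 * alpha))
            a2 = ((n + 1 - k)**(alpha + 1)
                  - (n - k)**alpha * (n - k + 1 + alpha))
            fk = f(t[k], y[k])
            fkm = f(t[max(k - 1, 0)], y[max(k - 1, 0)])  # assumed convention for k = 0
            acc += ph**alpha * (fk * a1 - fkm * a2) / g
        y.append(y0 + (1 - alpha) / M * f(t[n], y[n]) + alpha / M * acc)
    return t, y

# toy scalar test problem: ABC derivative of y equals -y
t, y = ns2lim(lambda t, y: -y, 1.0, 0.96, 2.0, 50)
print(y[-1])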
Numerical results
In the following, NS2LIM is applied to solve the optimality system consisting of the state equations (50)-(67) and the co-state equations (27)-(44), with the parameter values of [21] and the power α, 0 < α ≤ 1. The state equations are first solved by the proposed method; then the co-state equations (27)-(44) are solved by using a nonstandard finite difference method with a backward step in time. Figure 1 shows the approximate solutions at α = 0.96 of the state variables without controls. Figure 2 shows the behavior of the approximate solutions E(t), I(t) and T(t) with and without controls using NS2LIM. We note that, in the controlled case, the increase of E(t) and Y(t) leads to a decrease in the number of cancer cells T(t). Figure 3 shows the approximate solutions of the state variables T, U, E, Y, S and R in the controlled case, with B_1 = 100 and C = 1000, at different values of α using NS2LIM. It is clear that the best result is obtained at α = 0.98, because the number of cancer cells is minimal there. These results are further quantified in Table 1, which shows the comparison between the values of the objective functional using NS2LIM with and without controls at T_f = 100 and different values of α and φ(h). We note that the best result is obtained at φ(h) = 0.025(1 − e^{−h}).
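The forward-backward structure of such an iteration (states forward, co-states backward with the transversality condition, relaxed control update) can be outlined generically as follows; the single-state dynamics, adjoint and control rules below are placeholders chosen so that the sketch is self-contained, not the model of Sect. 2 or the authors' exact IOCM.

import numpy as np

def forward_backward_sweep(state_rhs, costate_rhs, control_update,
                           x0, lamT, n_steps, T, tol=1e-6, max_iter=200):
    """Generic sweep: integrate states forward and co-states backward with a
    simple Euler step, then relax the controls until they stop changing."""
    h = T / n_steps
    u = np.zeros(n_steps + 1)
    for _ in range(max_iter):
        x = np.empty(n_steps + 1); x[0] = x0
        for n in range(n_steps):                      # forward pass
            x[n + 1] = x[n] + h * state_rhs(x[n], u[n])
        lam = np.empty(n_steps + 1); lam[-1] = lamT   # transversality condition
        for n in range(n_steps, 0, -1):               # backward pass
            lam[n - 1] = lam[n] - h * costate_rhs(x[n], lam[n], u[n])
        u_new = 0.5 * (u + control_update(x, lam))    # relaxed control update
        if np.max(np.abs(u_new - u)) < tol:
            return x, lam, u_new
        u = u_new
    return x, lam, u

# placeholder problem: min integral(x^2 + u^2) with dx/dt = u, x(0) = 1
x, lam, u = forward_backward_sweep(
    lambda x, u: u,
    lambda x, lam, u: -2 * x,            # lam' = -dH/dx for H = x^2 + u^2 + lam*u
    lambda x, lam: np.clip(-lam / 2, -1, 1),
    x0=1.0, lamT=0.0, n_steps=200, T=5.0)
print(u[:3])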
The values of the objective functional (19) obtained by the IOCM [22,30,42] and by NS2LIM at different values of α are shown in Table 2. We note that the NS2LIM results are better than the IOCM results. We used Matlab on a computer with Windows 7 Home Premium, 4 GB of RAM and a 64-bit operating system.
Conclusions
In this paper, numerical solutions for a fractional-order optimal control problem with generalized Mittag-Leffler kernel for cancer treatment, based on the synergy between anti-angiogenic and immune cell therapies, are presented. The necessary optimality conditions are derived, where two controls u_A(t) and u_M(t) are added to reduce the number of cancer cells. NS2LIM is developed to study the model problem. We present some simulations that support our theoretical findings and show the effectiveness of the model. Comparative studies with the IOCM are implemented; it is found that the values of the objective functional obtained by NS2LIM are better than those obtained by the IOCM. Moreover, NS2LIM can be applied to solve the fractional optimal control problem simply and effectively.
"Mathematics",
"Medicine"
] |
Pancreatic cancer-educated macrophages protect cancer cells from complement-dependent cytotoxicity by up-regulation of CD59
Tumor-associated macrophages (TAMs) are versatile immune cells that promote a variety of malignant behaviors of pancreatic cancer. CD59 is a GPI-anchored membrane protein that prevents complement activation by inhibiting the formation of the membrane attack complex, which may protect cancer cells from complement-dependent cytotoxicity (CDC). The interactions between CD59, TAMs and pancreatic cancer remain largely unknown. A tissue microarray of pancreatic cancer patients was used to evaluate the interrelationship of CD59 and TAMs and their survival impacts were analyzed. In a coculture system, THP-1 cells were used as a model to study the function of TAMs and the roles of pancreatic cancer-educated macrophages in regulating the expression of CD59 in pancreatic cancer cells were demonstrated by real-time PCR, western blot and immunofluorescence staining. The effects of macrophages on regulating CDC in pancreatic cancer cells were demonstrated by an in vitro study. To explore the potential mechanisms, RNA sequencing of pancreatic cancer cells with or without co-culture of THP-1 macrophages was performed, and the results showed that the IL-6R/STAT3 signaling pathway might participate in the regulation, which was further demonstrated by target-siRNA transfection, antibody neutralization and STAT3 inhibitors. Our data revealed that the infiltration of TAMs and the expression of CD59 of pancreatic cancer were paralleled, and higher infiltration of TAMs and higher expression of CD59 predicted worse survival of pancreatic cancer patients. Pancreatic cancer-educated macrophages could protect cancer cells from CDC by up-regulating CD59 via the IL-6R/STAT3 signaling pathway. These findings uncovered the novel mechanisms between TAMs and CD59, and contribute to providing a new promising target for the immunotherapy of pancreatic cancer.
Introduction
Pancreatic cancer has a poor prognosis and a rising incidence 1,2 . Pancreatic ductal adenocarcinoma (PDAC), is the most common type and accounts for 90% of all pancreatic cancer cases 3 . Immune checkpoint inhibitors, including therapies against cytotoxic T-lymphocyte-associated protein 4 (CTLA-4) and programmed cell death protein 1 (PD-1), have played limited roles in treating pancreatic cancer 4,5 . Therefore, new immunotherapeutic strategies for pancreatic cancer are urgently needed.
The complement system is a crucial part of the immune system and protects the host from pathogenic microorganisms and damaged cells 6 . The benefit of mAb-based immunotherapies, such as rituximab and ofatumumab, is attributed partially to their ability to evoke complement-dependent cytotoxicity (CDC) that eliminates tumor cells 6,7 . CD59 is a glycosylphosphatidylinositol (GPI)-anchored membrane protein that regulates complement activation by preventing C9 from polymerizing and forming the membrane attack complex (MAC) 8 . CD59 was reported to be highly expressed in patients with pancreatic cancer 9,10 . However, the role of CD59 in the prognosis of pancreatic cancer has not been reported. Tumor-associated macrophages (TAMs) have been demonstrated to play an important role in the processes of tumor carcinogenesis, including the escape of cancer cells from the tumor into the circulation, the suppression of antitumor immune functions and drug resistance 11 . Studies of the immunosuppressive functions of TAMs have mainly focused on tumor-promoting cytokines and their suppressive effects on T cell function. The effects of TAMs on the functions of the complement system have rarely been reported.
In this study, the interactions and mechanisms of TAMs, CD59 and pancreatic cancer were studied, to uncover new immunotherapeutic targets for pancreatic cancer.
Material and methods
PDAC sample collection and tissue microarray construction PDAC tumor tissues and corresponding adjacent nontumor tissues after radical resection with an R0 tumoral margin were collected from 74 patients aged 34-85 using the following inclusion criteria: (1) all of the patients had complete clinico-pathological information and a followup visit; (2) the tumor tissues were histologically proven to be ductal adenocarcinoma; (3) both the paired tumor and nontumor tissues were obtained; and (4) all of the patients did not undergo neoadjuvant chemotherapy. Tumor staging was based on the 8th edition of the TNM system designed by the American Joint Committee on Cancer (AJCC). All samples were used to construct tissue microarrays. None of the patients died before the followup visit. While 52 of 74 cases died, the remaining 22 cases were still alive until the end of the follow-up period, which ranged from 5 to 87 months after resection. This study was approved by the Ethics Committee of Peking Union Medical College Hospital, and all the patients signed the informed consent form. We used SurvExpress 12 to evaluate the relationship between CD59 expression and cancer risks in the TCGA Pancreatic Carcinoma Dataset. Through the SurvExpress program, the CD59 expression levels of patients in the TCGA dataset, which were divided into "Low Risk" and "High Risk" groups according to the prognostic index, were analyzed (http:// bioinformatica.mty.itesm.mx:8080/Biomatec/SurvivaX. jsp). We also explored the correlation between CD59 expression and overall survival (OS) in pancreatic cancer patients by the Kaplan-Meier plotter (http://kmplot.com), a widely used online database 13 .
Immunohistochemistry (IHC) assay and evaluation
The pancreatic cancer tissue microarray described above was used to evaluate the expression of CD59 and CD163 in tumor and peritumor tissues. The sections were deparaffinized in xylene, rehydrated according to standard histopathological procedures and stained with hematoxylin and eosin. The slides were then incubated with 1:200 dilutions of the CD59 antibody (HPA026494, Sigma-Aldrich) and the CD163 antibody (ab182422, Abcam). The stained tissues were scored by two pathologists blinded to the patient information. The expression level of CD59 was evaluated by the H-score 14 , a widely used evaluation criterion for immunostaining that involves multiplying a proportion score and an intensity score 15-17 . The H-score value with the largest Youden's index within the receiver operating characteristic (ROC) curve was selected as the cut-off. The specimens with an H-score above or equal to the cut-off value were defined as those with high expression of CD59, whereas the others were regarded as having low CD59 expression. CD163 is a commonly accepted marker for tumor-promoting TAMs 11 . The CD163-positive macrophages in the stained sections were counted using ImageScope software, and their density was estimated (per square millimeter) at a higher magnification (×200). The cut-off value was selected by the ROC curve as mentioned above.
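The H-score and the ROC-based cut-off selection can be sketched in Python as follows (assuming scikit-learn); the staining proportions, H-scores and outcomes are hypothetical.

import numpy as np
from sklearn.metrics import roc_curve

def h_score(proportions, intensities):
    """H-score: sum over intensity levels of (% cells at level) x (level 0-3),
    giving a value in [0, 300]."""
    return float(np.dot(proportions, intensities))

# hypothetical case: 20% negative, 30% weak, 30% moderate, 20% strong staining
print(h_score([20, 30, 30, 20], [0, 1, 2, 3]))       # -> 150.0

def youden_cutoff(scores, died):
    """Cut-off with the largest Youden's J (sensitivity + specificity - 1)."""
    fpr, tpr, thresholds = roc_curve(died, scores)
    return thresholds[np.argmax(tpr - fpr)]

rng = np.random.default_rng(0)
scores = rng.uniform(0, 300, 74)                     # hypothetical H-scores
died = (scores + rng.normal(0, 60, 74)) > 150        # hypothetical outcomes
print(youden_cutoff(scores, died))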
To determine whether CD59 expression was related to the cross-talk between cancer cells and TAMs, we created a transwell coculture system. Human monocyte THP-1 cells were used as a model to study the function of TAMs, according to previously reported protocols 18 . THP-1 cells (2.5 × 10^5/mL) were induced into naive macrophages (Mφ group) using phorbol-12-myristate-13-acetate (PMA; 100 ng/mL, Sigma-Aldrich, USA) for 24 h. Pancreatic cancer cells (2.0 × 10^5/mL) were seeded into 6-well plates (Corning, NY, USA). After 24 h, naive Mφ cells were seeded into the upper compartment of a transwell with a 0.4-μm pore size (Corning, NY, USA) with DMEM or RPMI-1640 medium supplemented with 10% FBS. Then, the upper and lower compartments were combined and cultured for 48 h in a humidified chamber at 37°C. The corresponding cancer cells were seeded into the upper compartments as a control. Similarly, to evaluate the macrophage features, Mφ cells were seeded into the lower compartment of the transwell, and the cancer cells or the corresponding Mφ cells were seeded into the upper compartment.
Immunofluorescence staining
A total of 1.5 × 10^5/mL pancreatic cancer cells were seeded into 6-well plates on 12-mm coverslips. After 24 h of culturing, the upper compartment of a transwell with a 0.4-μm pore size, covered by 1.5 × 10^5/mL pancreatic cancer cells or by the corresponding Mφ cells, was put into the plates. After 48 h, the cells on the coverslips were fixed with 4% paraformaldehyde and permeabilized with 0.1% saponin. Nonspecific staining was blocked by incubation with 5% goat serum/PBS for 1 h. Subsequently, the cells were incubated at room temperature for 1 h with the anti-CD59 antibody (HPA026494, Sigma-Aldrich) at a dilution of 1:200, and then incubated with a goat anti-rabbit Alexa Fluor 488 IgG (H + L) secondary antibody (Invitrogen, Cat # A-11034) for 1 h. Nuclei were stained with DAPI, and the coverslips were placed face down onto a drop of anti-fading mounting medium on a microscope slide. Images were captured via a confocal laser scanning microscope (Nikon A1R) at 600× magnification. Each experiment was performed in triplicate.
Flow cytometry analysis (FCM)
CD59 expression and CD163 expression on the cell membrane were detected by flow cytometry as described previously 20,21 . The single-cell suspension was collected in ice-cold PBS and incubated with an FITC-conjugated anti-human CD59 antibody or a PerCP-conjugated anti-human CD163 antibody (Biolegend, USA) in the dark for 40 min. Next, the stained cells were resuspended in PBS with 2% paraformaldehyde and stored at 4°C prior to flow cytometric analysis (Accuri C6, BD, USA). For each analysis, an isotype-matched monoclonal antibody was used as a negative control. Each experiment was carried out in triplicate.
CDC and apoptosis assays
Pancreatic cancer cells were prepared as previously described. For the CDC assay, 2.0 × 10⁵/ml pancreatic cancer cells were seeded into 6-well plates, and the plates were incubated for 24 h at 37°C and 5% CO₂. Then, fresh human serum, diluted 2:5 in assay medium, was added and incubated for 24 h; heat-inactivated serum was used as a control. Complement-mediated cell death of tumor cells was quantified by staining with annexin V/propidium iodide using an FITC Annexin V Apoptosis Detection Kit (Neobioscience, Shenzhen, China) according to the manufacturer's instructions. All experiments were performed in triplicate.
Enzyme-linked immunosorbent assay (ELISA)
The levels of IL-6 in the coculture media were determined using commercially available IL-6 ELISA kits from R&D Systems (Noves, USA) according to the manufacturer's protocols. All samples were measured in triplicate with the provided immunoassay standard as a positive control. The ELISA plates were read with a microplate reader at a wavelength of 450 nm; absorption was corrected by subtracting the background measured at 570 nm. A standard curve was created according to the manufacturer's protocols, and sample concentrations were calculated from it.
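The read-out step can be sketched as below; the four-parameter logistic standard-curve model and every number in it are assumptions made for illustration, not values from the kit used here.

```python
# Minimal sketch of the plate read-out: subtract the 570 nm background reads
# from the 450 nm reads, fit a 4PL standard curve, interpolate samples.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero dose, d: plateau, c: inflection point, b: slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([3.13, 6.25, 12.5, 25.0, 50.0, 100.0, 200.0])      # pg/ml (hypothetical)
std_od = np.array([0.08, 0.15, 0.27, 0.52, 0.95, 1.60, 2.30]) - 0.04   # A450 - A570

popt, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 50.0, 2.6], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    """Invert the fitted 4PL curve to recover a concentration from a corrected OD."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(f"sample IL-6 ~ {od_to_conc(0.80 - 0.04, *popt):.1f} pg/ml")
```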
RNA sequencing (RNA-seq)
Total RNA of AsPC-1 and THP-1 macrophage-treated AsPC-1 was freshly extracted, and sequencing libraries were generated using the NEBNext Ultra™ RNA Library Prep Kit for Illumina (NEB, USA) following the manufacturer's protocol. RNA purity was verified using a NanoPhotometer spectrophotometer (IMPLEN, CA, USA), and RNA integrity was assessed using the RNA Nano 6000 Assay Kit of the Bioanalyzer 2100 system (Agilent Technologies, CA, USA). Gene expression profiling and data analysis were conducted by the Beijing Novogene Experimental Department. All data were analyzed according to the manufacturer's protocol. Differentially expressed genes were then identified according to fold change; the threshold for up- and downregulated genes was a fold change greater than 2.

Fig. 1 b The ROC curve of CD59 for OS in pancreatic cancer tissues. c Comparison of the H-scores of CD59 between tumor and nontumor tissues (p = 0.027, Mann-Whitney U test). d The influence of tumoral CD59 expression on overall survival (p = 0.025, log-rank test). e Multivariate Cox regression analysis showed that CD59 expression was an independent prognostic marker in the age subgroup (age < 60 y, HR = 2.611, p = 0.012). f Comparison of CD59 expression between patients in the "High Risk" group and in the "Low Risk" group through the SurvExpress program (p < 0.001). g The prognostic value of CD59 mRNA expression in the Kaplan-Meier plotter dataset (HR = 2.31, p < 0.001)
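A minimal sketch of the fold-change screen used in the RNA-seq analysis above, assuming normalized expression values are already available; the gene names and counts are invented.

```python
# Toy version of the fold-change filter: genes with an absolute fold change
# > 2 (|log2FC| > 1) between coculture and control are flagged.
import numpy as np

genes = np.array(["IL6R", "CD59", "GAPDH", "ACTB"])
control = np.array([120.0, 300.0, 5000.0, 8000.0])  # AsPC-1 alone (normalized counts)
cocult = np.array([510.0, 640.0, 5100.0, 7900.0])   # AsPC-1 + THP-1 macrophages

log2_fc = np.log2(cocult / control)
is_de = np.abs(log2_fc) > 1.0                       # fold change > 2 in either direction

for gene, fc, de in zip(genes, log2_fc, is_de):
    print(f"{gene}: log2FC = {fc:+.2f}{'  <- differentially expressed' if de else ''}")
```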
Statistical analysis
The H-scores of CD59 staining and TAM infiltration in tumor and peritumor tissues were compared using the Mann-Whitney U test. IBM SPSS Statistics software version 21.0 and GraphPad Prism software version 5.0 were used for statistical analysis and for drawing the graphs. Overall survival was analyzed using the Kaplan-Meier product-limit method, and the significance of the variables was assessed with the log-rank test. The Fisher exact test was used to analyze associations between two variables, and the Pearson chi-square test was used to analyze associations among more than two variables. Multivariable analysis and analysis of continuous and ordinal variables were performed using the Cox proportional hazards regression method. A two-tailed p-value < 0.05 was considered significant.
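The test battery above could be reproduced along the following lines, assuming the scipy and lifelines packages; the miniature patient table and its column names are fabricated placeholders for the real data.

```python
# Hedged sketch of the survival analyses described above.
import pandas as pd
from scipy.stats import mannwhitneyu
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "os_months": [5, 12, 30, 8, 22, 40, 3, 18],
    "event":     [1, 0, 0, 1, 1, 0, 1, 0],   # 1 = death observed
    "cd59_high": [1, 1, 0, 1, 0, 0, 1, 0],
    "age":       [55, 62, 48, 70, 58, 45, 66, 52],
})

# Mann-Whitney U test on H-scores (toy tumor vs peritumor vectors)
_, p_mw = mannwhitneyu([18, 25, 40, 90], [10, 12, 20, 30], alternative="two-sided")

# log-rank test for OS stratified by CD59 expression level
hi, lo = df[df.cd59_high == 1], df[df.cd59_high == 0]
lr = logrank_test(hi.os_months, lo.os_months, hi.event, lo.event)

# multivariable Cox proportional hazards regression
cph = CoxPHFitter().fit(df, duration_col="os_months", event_col="event")
print(p_mw, lr.p_value, dict(cph.hazard_ratios_))
```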
Results
CD59 overexpression in pancreatic cancer tissues was associated with high histological grade and a poor prognosis of pancreatic cancer patients

CD59 expression levels, quantified as H-scores, differed between pancreatic cancer tissues and adjacent nontumor tissues (Fig. 1a). Figure 1a shows representative images of different staining intensities of CD59 in tumor tissues and adjacent tissue. The median H-score of CD59 was 18.15 (range, 0-190), and cut-off values for the clinicopathologic variables were derived from the corresponding ROC curves (Fig. 1b). The H-score of CD59 was significantly higher in pancreatic cancer tissues than in nontumor tissues (p = 0.027, Mann-Whitney U test, Fig. 1c). As shown in Table 1, the expression of CD59 in the tumor tissue was significantly associated with the histological grade (p = 0.034). No significant association was detected between CD59 expression and the other clinicopathological features. The effect of CD59 expression on the OS of the patients was assessed using the Kaplan-Meier method and log-rank test. The univariate analysis showed that worse overall patient survival was significantly associated with high CD59 expression in the tumor tissues (p = 0.025, Fig. 1d and Table 2), the N stage (p = 0.018) and the histological grade (p = 0.010) (Table 2). Multivariate Cox regression analysis showed that the histological grade and the N stage were independent prognostic markers (all p < 0.05) (Table 2) and that CD59 expression was an independent prognostic marker in the age subgroup (age < 60 y, HR = 2.611, p = 0.012, Fig. 1e and Table 3). Through the SurvExpress program, patients in the "High Risk" group presented a significantly higher expression level of CD59 than those in the "Low Risk" group (p < 0.001, Fig. 1f). According to the Kaplan-Meier plotter, the prognosis of the high CD59 mRNA expression group was dramatically poorer than that of the low expression group (HR = 2.31, p < 0.001, Fig. 1g). Therefore, overexpression of CD59 may be a biomarker indicating worse survival for pancreatic cancer patients.
TAM infiltration and CD59 expression were positively correlated in pancreatic cancer tissues

TAMs are a major constituent of the tumor immunosuppressive microenvironment and are known to stimulate key steps in tumor progression 10,15 . Figure 1a shows representative images of serial section staining of CD163 and CD59 in tumor and adjacent nontumor tissues. The total number of intratumoral TAMs was significantly higher than that in nontumor tissues (p = 0.018, Mann-Whitney U test, Fig. 2a). The cut-off value for intratumoral TAMs was selected from the ROC curve as described above, and the median number of TAMs was 82.5 (Fig. 2b). TAM infiltration in the tumor was positively correlated with CD59 expression (R² = 0.724, Fig. 2c). According to the classification of CD59 expression levels in tumor tissues, the number of TAMs infiltrating tumor tissues with high CD59 expression was significantly higher than that in tissues with low CD59 expression (p < 0.001, Fig. 2d). Furthermore, worse OS was significantly associated with high TAM infiltration in the tumor tissues (Kaplan-Meier method and log-rank test, p = 0.034, Fig. 2e). These data suggested that CD59 expression was proportionally correlated with TAM infiltration in pancreatic cancer tissues and that there might be crosstalk and cooperation between TAMs and CD59 expression in pancreatic cancer cells.
THP-1 macrophages upregulated CD59 expression on cancer cells and protected cells from CDC in vitro
To evaluate the effects of TAMs on CD59 expression in cancer cells, we examined the expression levels of CD59 protein in 7 human pancreatic cancer cell lines (BxPC-3, MiaPaCa-2, T3M4, PANC-1, AsPC-1, Su86.86, and CFPAC-1; Fig. 3a) and selected AsPC-1 as the high-expression group; BxPC-3 and MiaPaCa-2 were selected as the medium- and low-expression groups for further study, respectively. We examined the effects of THP-1 macrophages on CD59 expression in these three cell lines by western blot and FCM. CD59 expression in the three cell lines was elevated in the coculture group with THP-1 macrophages compared with that in the control group (Fig. 3b, d). Since the CD59 expression level in MiaPaCa-2 was much lower than that in AsPC-1 and BxPC-3, the CD59 band of MiaPaCa-2 was almost invisible when detected together with the others (data not shown); therefore, the western blots of the three groups were detected individually, and the results were clear (Fig. 3b). Accordingly, AsPC-1 and BxPC-3 were chosen for additional experiments. The effect of TAMs on CD59 expression in these two cell lines was also confirmed by immunofluorescence staining (Fig. 3d). Activated THP-1 macrophages and pancreatic cancer cells were cocultured in the 0.4-μm transwell system, and the Mφ macrophages cocultured with pancreatic cancer cells acquired a CD163⁺ M2 phenotype (Fig. 4a), similar to the phenotype of TAMs in the tumor tissue. We also detected representative markers of THP-1 macrophages (CD163/Arg-1/IFN-γ/iNOS; Fig. 4b, c) induced by the pancreatic cancer cells at the mRNA and protein levels, in accordance with the characteristics of TAMs. To dissect the function of CD59 in protecting pancreatic cancer cells against CDC, we used siRNAs to inhibit CD59 expression in AsPC-1 and BxPC-3, which was verified by western blot and FCM (Fig. 4d, e). To determine whether the increase in CD59 expression induced by THP-1 macrophages could inhibit CDC, we conducted CDC and apoptosis assays among the si-CD59, coculture and control groups. We used fresh human serum, diluted 2:5 in the assay medium, to provide the complement system, with heat-inactivated serum as the control. After reducing CD59 expression in pancreatic cancer cells, the survival rate of the cells decreased in the presence of the complement system (Fig. 4f, g). After coculture with THP-1 macrophages, the survival rate of the cells markedly increased compared with that of the control group (Fig. 4h; left panel: AsPC-1, right panel: BxPC-3; ***p < 0.001). Therefore, pancreatic cancer-educated macrophages can upregulate CD59 expression on cancer cells and protect cancer cells from CDC in vitro.
Pancreatic cancer-educated macrophages induced the upregulation of CD59 in pancreatic cancer cells via the IL-6R/STAT3 pathway

Gene expression profiling was performed in AsPC-1 cocultured with THP-1 macrophages and in AsPC-1 alone (Fig. 5a). Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG) and Reactome pathway analyses of differentially expressed genes (p < 0.05) were performed (Fig. 5b, c). We found that the "cytokine-cytokine receptor interaction" category ranked first in the GO enrichment analysis of all the differentially expressed genes; IL-6R was significantly elevated, and this result was confirmed by western blot (Fig. 5d). As mentioned before, CD59 expression on AsPC-1 and BxPC-3 was significantly elevated in the coculture group with macrophages compared with that in the control group, resulting in rapid phosphorylation of STAT3 (Fig. 5e). Considering that STAT3 is the major tumor-promoting effector of IL-6 22,23 , we focused on the STAT3 signal transduction pathway in macrophage-mediated differentiation of pancreatic cancer cells. We hypothesized that IL-6, secreted by TAMs, might induce the upregulation of CD59 in pancreatic cancer cells via STAT3 activation. IL-6 expression in pancreatic cancer-educated macrophages was significantly increased, as shown by qRT-PCR and western blot (Fig. 5f), and IL-6 secreted by the macrophages cocultured with cancer cells increased markedly, as detected by ELISA (Fig. 5g). Then, the pancreatic cancer cells were incubated with various concentrations of IL-6 (0, 0.01, 1, 10 and 100 nM), and the CD59 expression and phosphorylation levels of STAT3 in the cancer cells were significantly elevated to levels comparable to those in cells stimulated with THP-1 macrophages (Fig. 5h). To gain further insight into the importance of macrophage-derived IL-6 as a key upstream factor driving CD59 upregulation in pancreatic cancer cells, we neutralized IL-6 in the supernatants of the coculture system with a commercially available blocking antibody. Neutralizing IL-6 within the coculture system reduced the ability of macrophages to upregulate CD59 expression in pancreatic cancer cells, although the effect was modest (Fig. 6a). We then knocked down IL-6 with siRNA in the macrophages and used the macrophage-conditioned media to stimulate cancer cells (Fig. 6b). Knockdown of IL-6 in macrophages reduced CD59 expression much more than the IL-6 blocking antibody did (Fig. 6c). Subsequently, we transfected STAT3 siRNAs and siNC into pancreatic cancer cells. siSTAT3 pancreatic cancer cells cocultured with THP-1 macrophages expressed lower amounts of CD59 than the siNC group (Fig. 6d).

Fig. 2 TAM infiltration and CD59 expression were positively correlated in pancreatic cancer tissues. a There was more TAM infiltration in tumor tissues than in nontumor tissues (p = 0.018, Mann-Whitney U test). b The ROC curve of intratumoral TAMs for OS in pancreatic cancer tissues. c TAMs that infiltrated the tumor and CD59 expression were positively correlated (R² = 0.724). d The number of TAMs that infiltrated the tumor tissues with high levels of CD59 expression was significantly higher than that in tissues with low CD59 expression (p < 0.001, Mann-Whitney U test). e The influence of tumoral TAM infiltration on OS (p = 0.034, log-rank test)
Then, we used AG490 to deplete STAT3 protein in pancreatic cancer cells as previously described 24 , and AG490 significantly decreased the expression of CD59 in cocultured pancreatic cancer cells in a concentration-dependent manner (Fig. 6e). The effects on CD59 expression under representative conditions were also detected by FCM (Fig. 6f). The CDC and apoptosis assays were performed in representative groups, and the protection conferred by CD59 was increased when recombinant IL-6 was used and reversed when IL-6 was knocked down (Fig. 6g, h). The survival rates in the si-STAT3 and AG490 groups were much lower, indicating that the STAT3 signaling pathway could affect the behavior of cancer cells in multiple ways. The effects of the IL-6 antibody on CDC were not prominent, which might be due to degradation or low activity of the antibody in the coculture media. These data demonstrated that M2 macrophages up-regulated CD59 expression in cancer cells via the IL-6R/STAT3 signaling pathway.
In summary, these results revealed a novel interaction between TAMs and CD59 expression in pancreatic cancer cells and demonstrated that pancreatic cancer-educated macrophages can protect pancreatic cancer cells from CDC by regulating CD59 (Fig. 7). These findings point to new therapeutic strategies for pancreatic cancer immunotherapy.
Discussion
Pancreatic cancer is extremely malignant, and its 5-year overall survival rate is <10%. Biomedical research on pancreatic cancer has shifted from the study of the biological behavior and regulatory mechanisms of the tumor cells themselves to the study of the tumor microenvironment [25][26][27] . Cancer cells can be protected from CDC by high expression of CD59, and the function of immune cells in the tumor microenvironment can also be affected by CD59 8 . TAMs have been demonstrated to play an important role in tumor progression 28 . In this study, we showed that the expression levels of CD59 in clinical specimens were positively correlated with a worse prognosis for pancreatic cancer patients and that CD59 expression was positively correlated with TAM infiltration in pancreatic cancer tissues. We also demonstrated that IL-6 derived from pancreatic cancer-educated macrophages played vital roles in regulating the expression of CD59 in pancreatic cancer. This research explores new and promising therapeutic strategies for immunotherapy of pancreatic cancer.
High CD59 expression is generally associated with a worse prognosis in human cancers, including colorectal 29 , prostatic 30 , ovarian 31 , and lung 32 cancers. Contradictory results have also been reported in several types of human cancers, such as breast cancer 33,34 . The role of CD59 in the progression and prognosis of pancreatic cancer has not yet been reported. Our study demonstrated that CD59 was overexpressed in pancreatic cancer tissues and that a high expression level was associated with a high histological grade and a worse prognosis. These results were also supported by online databases. CD59 expression in pancreatic cancer tissues was much higher than that in the surrounding tissues. Multivariate Cox regression analysis also showed that CD59 expression was an independent prognostic marker in the age subgroup (age < 60 y). It has been reported that CD59 inactivation or deficiency is associated with the development of atherosclerosis 35 and the complications of diabetes 36 , and the incidence of these diseases increases significantly with age. This may indicate that, in elderly cancer patients, the benefits of low CD59 levels against cancer might counteract the effects of atherosclerosis or diabetes; however, this still requires further exploration. We also observed that CD59 expression was positively correlated with CD163⁺ M2-type macrophage infiltration in pancreatic cancer tissues. In the in vitro experiments, we found that pancreatic cancer cells were able to induce monocytes to differentiate into M2-type macrophages, which could upregulate CD59 expression in cancer cells and protect cells from CDC. It is worth noting that CD59 expression levels vary among different types of cancer cells. The change in CD59 expression after macrophage coculture was minimal for MiaPaCa-2 cells compared with AsPC-1 and BxPC-3 cells. Apart from CD59, CD46 and CD55 can also enable tumor cells to evade CDC and function as complement regulators. The minimal response of MiaPaCa-2 may be because these cells mainly express CD46 or CD55 in addition to CD59, and future experiments should address this. When the CDC assay was performed, the percentage of dead cancer cells was not as high as expected, which might be due to the protective function of CD46 or CD55 on cancer cells. Future experiments should investigate the effects of macrophages on CD46 or CD55 expression in cancer cells and the interactions among these complement restriction factors. Using gene expression profiling and bioinformatics analysis, we identified "cytokine-cytokine receptor interaction" as the main mechanism. Additional experiments proved that IL-6 knockdown or inhibition of STAT3 phosphorylation could reverse macrophage-induced CD59 upregulation. Here, we suggest that pancreatic cancer-educated macrophages induce the upregulation of CD59 in an IL-6R/STAT3-dependent manner in pancreatic cancer cells. In the tumor microenvironment, the IL-6R/STAT3 signaling pathway acts to promote tumor growth and progression 37 , and elevated circulating levels of IL-6 have been reported in patients with pancreatic 38 , breast 39 , colorectal 40 , and ovarian 41 cancer types, among others. In this study, we observed that pancreatic cancer cells could induce macrophages to exhibit a tumor-promoting phenotype, and the tumor-promoting macrophages in turn induced CD59 expression in cancer cells by secreting IL-6 and by inducing IL-6R expression in cancer cells. IL-6 binds to IL-6R and activates STAT3, leading to the transcription of STAT3 target genes. This provides a novel mechanism for the immunosuppressive function of TAMs, namely regulation of the complement system. Although some recent studies have addressed the regulation of TAMs by the complement components C5a and C1q 42,43 , the mechanism of the mutual regulation between TAMs and the complement system has not been thoroughly investigated. In our study, THP-1 was used as an in vitro model to investigate the function of macrophages. Future experiments should be performed with monocyte-derived macrophages or with TAMs isolated from PDAC patients, and with additional cancer cell lines, to validate and confirm these findings. CD59 is highly expressed on cancer cells to regulate complement activation and is strongly associated with chemotherapy resistance and radioresistance 8,44 . Therapeutic strategies that regulate the function of CD59 in the tumor immune microenvironment may be a promising approach for tumor immunotherapy. Considering the wide distribution of CD59 on somatic cells, simply blocking the function of CD59 may induce side effects. Thus, a better understanding of the immunosuppressive networks regulating CD59 is needed. The IL-6/STAT3 pathway is hyperactivated in many types of cancers, and treatments that target it have been widely studied, including directly targeting IL-6 with antibodies, such as siltuximab; targeting IL-6R with antibodies, such as tocilizumab; and using STAT3 inhibitors, such as peptidomimetics 37 . Targeting this signaling axis has been shown to be beneficial in the treatment of certain cancers. In this study, we demonstrated that TAMs induced the upregulation of CD59 in an IL-6R/STAT3-dependent manner and that IL-6 or STAT3 knockdown and the STAT3 pathway inhibitor AG490 could reverse the increase in CD59 expression caused by TAMs.

Fig. 5 b, c GO and KEGG pathway analyses of differentially expressed genes. d IL-6R was extremely elevated in the cocultured group compared with that in the control group. e AsPC-1 and BxPC-3 cells were cultured with THP-1 macrophages and analyzed for the level of total STAT3 or phosphorylated STAT3 (p-STAT3). f, g IL-6 expression in macrophages cocultured with AsPC-1 and BxPC-3 cells was detected by qRT-PCR, western blot and ELISA. h The CD59 expression and phosphorylation of STAT3 in pancreatic cancer cells incubated with various concentrations of recombinant IL-6 (0, 0.01, 1, 10, and 100 nM) were detected by western blot

Fig. 6 The effects on CD59 expression of IL-6 antibody, si-IL-6, siSTAT3 and the STAT3 inhibitor. a IL-6 neutralization in the coculture system by IL-6 antibody inhibited CD59 expression in response to THP-1 coculture to a certain degree, but not significantly. b IL-6 siRNAs downregulated the IL-6 expression of THP-1. c After IL-6 knockdown in macrophages, CD59 upregulation was reversed in the coculture system. d siSTAT3 pancreatic cancer cells cocultured with THP-1 macrophages expressed lower amounts of CD59 than the siNC group. e The STAT3 inhibitor AG490 significantly decreased the expression of CD59 in THP-1-cocultured pancreatic cancer cells in a concentration-dependent manner. f The CD59 levels of the different representative groups were detected by FCM. g, h CDC and apoptosis assays between the representative groups in the conditioned media (fresh human serum, diluted 2:5)
Combined therapy targeting TAMs, CD59 and IL-6 may be a promising direction for cancer treatment. Recent studies have suggested that bispecific and multispecific antibodies might provide new options for cancer therapy. Evidence has shown that bispecific antibodies targeting CD59 and CD20 could increase the efficacy of immunotherapy in lymphocytic leukemia 20 . In recent studies, heterodimeric coiled coils were used as a tool to form polymers containing a variety of peptides 45,46 . Multispecific antibodies targeting TAMs, CD59, IL-6 and targetable cancer mutations might be a new promising strategy for cancer immunotherapy.
Conclusions
In summary, crosstalk between macrophages and pancreatic cancer cells, through the upregulation of CD59 in an IL-6R/STAT3-dependent manner, protects pancreatic cancer from CDC. This mechanism provides promising insight for the exploration of novel therapeutic strategies for pancreatic cancer immunotherapy.

Fig. 7 Schematic diagram of this study: TAMs protect pancreatic cancer from CDC by upregulating CD59 in a paracrine IL-6R/STAT3 manner
"Biology",
"Medicine"
] |
The Polyherbal Wattana Formula Displays Anti-Amyloidogenic Properties by Increasing α-Secretase Activities
Alzheimer’s disease is characterized by the deposition of insoluble amyloid-β (Aβ) peptides produced from the β-amyloid precursor protein (βAPP). Because α-secretase cleavage by ADAM10 and ADAM17 takes place in the middle of Aβ, its activation is considered a promising anti-AD therapeutic strategy. Here we establish that the polyherbal Wattana formula (WNF) stimulates sAPPα production in cells of neuronal and non-neuronal origins through an increase of both ADAM10 and ADAM17 catalytic activities, with no modification of BACE1 activity and expression. This effect is blocked by specific inhibition or genetic depletion of these disintegrins, and we show that WNF up-regulates ADAM10 transcription and ADAM17 maturation. In addition, WNF reduces Aβ40 and Aβ42 generation in human cell lines. Altogether, WNF presents all the characteristics of a potent preventive anti-Alzheimer formula. Importantly, this natural recipe, currently prescribed to patients for the treatment of other symptoms without any reported side effects, can be tested immediately in further clinical studies.
Introduction
Alzheimer's disease (AD) is a progressive and as yet incurable neurodegenerative disorder affecting the elderly. This syndrome, at its early stage, is characterized by mild memory loss before evolving into a severe decline of cognitive functions, ultimately leading to dementia and death. At the molecular level, proteolysis of the β-amyloid precursor protein (βAPP) by enzymes called "secretases" is a central event, since it determines both the production rate and the nature of the amyloid peptide (Aβ) [1], the main component of the extracellular senile plaques.

... were from Vivantis (Selangor Darul Ehsan, Malaysia). PDBu, poly-D-lysine, GI254023X and dimethyl sulfoxide were from Sigma (St Louis, MO, USA). Skim milk powder was from HiMedia (Mumbai, India). Ammonium persulphate was from GE Healthcare (Piscataway, NJ, USA). The chemiluminescence HRP substrate was from Millipore (Bedford, MA, USA). SDS was from Amresco (Solon, OH, USA). O-Phenanthroline and TAPI-0 were from Calbiochem (San Diego, CA, USA).
Preparation and analysis of WNF
The herbal components were collected from the wild in the Central and Northeastern parts of Thailand through suppliers who collected and sold the crude herbs to the Center of Applied Thai Traditional Medicine, where the origin of each herb was recorded. All production procedures were supported by the Manufacturing Unit of Herbal Medicines and Products, Center of Applied Thai Traditional Medicine (CATTM), Faculty of Medicine Siriraj Hospital (Bangkok), and were operated under Good Manufacturing Practice (GMP) certification. Briefly, individual herbs were first authenticated by experts, including certified pharmacognosists of the CATTM. All raw materials were washed with de-ionized water (DI), dried in a hot-air oven and then ground, sieved and packed in laminated vacuum packaging bags. The polyherbal formula powder was then obtained by extraction of equal amounts of each herb (weight/weight) with an 80% ethanol solution at a final concentration of 100mg/ml, filtered through cotton wool and subsequently centrifuged at 10,000×g for 10 min. The supernatant was evaporated and lyophilized to obtain a freeze-dried powder, which was stored in amber bottles at 25˚C in desiccators. The physical properties and the heavy metal and microbial contamination of the formula were assessed before any experiment was performed. In addition, the chemical composition of the formula was verified using Thin Layer Chromatography (TLC) and Ultra Performance Liquid Chromatography (UPLC) as previously described [11]. WNF was freshly prepared before experimental use on cell lines as a 10mg/ml stock solution in 50% DMSO.
β-secretase fluorimetric assay on cell homogenates

HEK293 cells stably overexpressing 1D4-BACE1 were cultured in 35mm dishes until they reached 80% confluence, treated without (control) or with WNF (100μg/ml) for 16 hours at 37˚C in DMEM containing 1% FBS, and assayed for their β-secretase activity as previously described [16]. Briefly, cells were collected, lysed with 10mM Tris pH 7.5, homogenized and kept on ice. Samples were assayed for their protein contents with the Bradford method and all adjusted to a concentration of 3μg/μl. Thirty μg of each sample (10μl) diluted in 10mM sodium acetate buffer pH 4.5 was incubated for 30 min at 37˚C in black 96-well plates (in a final volume of 100μl) in the absence (triplicate) or presence (triplicate) of the β-secretase-specific inhibitor JMV1197. Then, the β-secretase-specific JMV2236 substrate (10μM) was added to all samples, and the β-secretase-specific activity corresponds to the JMV1197-sensitive fluorescence recorded at each time point at 320nm and 420nm excitation and emission wavelengths, respectively.
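The inhibitor-sensitive activity described here amounts to a difference of kinetic slopes; a sketch with invented fluorescence traces follows.

```python
# Sketch of the activity calculation implied above: the specific
# (JMV1197-sensitive) activity is the slope of the fluorescence trace
# without inhibitor minus the slope with inhibitor. Traces are invented.
import numpy as np

t_min = np.arange(0, 31, 5, dtype=float)                       # time points (min)
rfu_total = np.array([0, 180, 355, 540, 710, 880, 1060])       # substrate alone
rfu_inhibited = np.array([0, 40, 85, 120, 165, 205, 240])      # + JMV1197

slope_total = np.polyfit(t_min, rfu_total, 1)[0]               # RFU/min
slope_inhibited = np.polyfit(t_min, rfu_inhibited, 1)[0]
specific_activity = slope_total - slope_inhibited              # inhibitor-sensitive part

protein_ug = 30.0                                              # protein per well
print(f"specific activity ~ {specific_activity / protein_ug:.2f} RFU/min/ug protein")
```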
Real-time quantitative polymerase chain reaction (q-PCR)
Following treatments (36 hours), total RNA was extracted and purified with the PureLink RNA mini kit (Ambion, Life Technologies, Austin, TX, USA). Real-time PCR was performed with 100ng of total RNA using the QuantiFast SYBR Green RT-PCR kit (Qiagen, Singapore), a real-time detection system (Eppendorf Mastercycler ep RealPlex) and the SYBR Green detection protocol. The 2× QuantiFast SYBR Green RT-PCR master mix, QuantiFast RT mix, QuantiTect Primer Assay and template RNA were mixed, and the reaction volume was adjusted to 25μl using RNase-free water. The specific primers were designed and purchased from Qiagen. Each primer is a 10× QuantiTect Primer Assay containing a mix of forward and reverse primers for specific targets: Hs_ADAM10_1_SG (QT00032641, human ADAM10), Hs_ADAM17_1_SG (QT00055580, human ADAM17) and Hs_GAPDH_1_SG (QT00079247, human GAPDH).
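The quantification model is not stated in the text; assuming the widely used comparative-Ct method with GAPDH as the reference gene, the relative expression would be computed as follows (all Ct values invented).

```python
# Comparative-Ct (2^-ddCt) relative expression, normalized to GAPDH.
ct = {  # mean Ct: (control, WNF-treated)
    "ADAM10": (24.8, 23.6),
    "ADAM17": (25.1, 25.0),
    "GAPDH":  (18.0, 18.1),
}

for target in ("ADAM10", "ADAM17"):
    d_ct_ctrl = ct[target][0] - ct["GAPDH"][0]   # delta-Ct, control
    d_ct_wnf = ct[target][1] - ct["GAPDH"][1]    # delta-Ct, treated
    fold = 2.0 ** -(d_ct_wnf - d_ct_ctrl)        # 2^-ddCt relative expression
    print(f"{target}: fold change vs control = {fold:.2f}")
```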
Sucrose gradient subcellular fractionation
Wild-type HEK293 cells cultured in 100mm dishes were incubated for 16 hours without (control) or with WNF (100μg/ml) in DMEM/1% FBS. Cells were then homogenized with a Dounce homogenizer in 0.25M sucrose prepared in 10mM Tris-HCl (pH 7.4) and containing 1mM Mg(AcO)₂. Equal amounts of protein were loaded at the top of a step gradient and centrifuged, and fractions (1ml) were collected from top to bottom of each gradient. Proteins in the fractions were precipitated overnight at 4˚C with methanol (4 volumes), and calnexin, Golgi 58K protein, ADAM10 and ADAM17 immunoreactivities were analyzed by western blot as described above.
Measurement of human Aβ production
HEK293 and SH-SY5Y neuroblastoma cells were transiently transfected with the human wild-type βAPP751 cDNA for 36 hours and incubated without (control) or with various WNF concentrations during the last 16 hours (in 1ml of DMEM/1% FBS). Aβ40 and Aβ42 levels were then detected in the secretion media (50μl) using sandwich ELISA kits detecting human Aβ40 (khb3482) and human Aβ42 (khb3442), respectively (Invitrogen), following the manufacturer's recommendations. Aβ levels (pg/ml) were obtained by reading absorbance at 450nm with a spectrophotometer, and values were then normalized with βAPP and β-actin.
Statistical analysis
Statistical analyses were performed with the Prism software (GraphPad, San Diego, USA) using the unpaired t test for pairwise comparisons. All results are expressed as means ± SEM, and p values equal to or less than 0.05 were considered significant.
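For concreteness, a minimal version of this treatment (normalization to percentage of control, mean ± SEM, unpaired t test) with fabricated replicate values:

```python
# Minimal sketch of the statistical treatment described above.
import numpy as np
from scipy.stats import sem, ttest_ind

control = np.array([100.0, 96.0, 105.0, 99.0])    # % of control, by definition
treated = np.array([168.0, 155.0, 181.0, 160.0])  # WNF-treated, % of control

t_stat, p_value = ttest_ind(control, treated)     # unpaired, two-tailed t test
print(f"treated = {treated.mean():.0f} +/- {sem(treated):.1f}% of control, p = {p_value:.4f}")
```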
Results
We first evaluated the effect of WNF on the non-amyloidogenic α-secretase processing of βAPP in cultured human HEK293 cells overexpressing βAPP751 and showed that concentrations from 1 up to 100μg/ml dose-dependently increase the secretion of sAPPα as well as the production of the α-secretase-derived C-terminal counterpart (C83 fragment) without modifying βAPP immunoreactivity (Fig 1A), thereby indicating that WNF most likely up-regulates the α-secretase processing of βAPP rather than βAPP expression. Considering the well-established roles of ADAM10 and ADAM17 in the constitutive and PKC-regulated α-secretase processing of βAPP, respectively [25], we tested the effects of the ADAM10-specific inhibitor GI254023X as well as the ADAM17-specific inhibitor TAPI-O on the WNF-induced sAPPα production. The results showed that GI254023X and TAPI-O respectively prevent the constitutive (Fig 1B) and PDBu-stimulated (Fig 1C) WNF-dependent sAPPα secretion in βAPP-overexpressing HEK293 cells. Because these experiments were conducted in HEK293 cells artificially overexpressing high amounts of the βAPP protein, we then wanted to determine whether WNF was able to generate similar effects in the same cell line producing endogenous levels of βAPP. Indeed, WNF dose-dependently and significantly promotes sAPPα release without interfering with βAPP protein levels in wild-type HEK293 cells (Fig 1D).

Fig 1 Wild-type HEK293 cells were treated as in (A) and allowed to secrete for 5 hours before sAPPα (medium, n = 4) as well as βAPP (n = 4) and β-actin (lysate) were analyzed by Western blot. *p<0.05; **p<0.02; ***p<0.001; #p<0.0001; ns, non-statistically different. Immunoblots illustrate representative gels and histograms correspond to the statistical analysis for all experiments. All bars correspond to the densitometry analyses (βAPP and C83 being normalized with β-actin), are expressed as a percentage of control (white bars, non-treated cells) taken as 100 and are the means ± SE of the indicated number of independent experiments. Black lines in (B) and (C) indicate when lanes from the same original gels were spliced for better clarity.
We then validated these data in cells of neuronal origin and carried out similar experiments with cultured mouse N2a neuroblastoma cells stably overexpressing the neuron-specific βAPP695 isoform. As observed in HEK293 cells, WNF stimulates the secretion of sAPPα in a dose-dependent manner without altering βAPP levels (Fig 2A). We then wanted to ascertain that the formula conveys similar effects on the endogenous α-secretase processing of βAPP and established that WNF significantly increases endogenous sAPPα secretion in a dose-dependent manner in human SH-SY5Y neuroblastoma cells (Fig 2B). We then used a specific and reliable α-secretase-specific fluorimetric assay [24] to investigate the effect of WNF (100μg/ml) on ADAM10 and ADAM17 catalytic activities. We observed that WNF triggers a significant increase of both the constitutive (ADAM10-expressing HEK293 cells) (Fig 3A) and PKC-regulated (PDBu-treated ADAM17-expressing cells) (Fig 3B) o-phenanthroline-sensitive JMV2770-hydrolyzing activities. Similar experiments performed using the GI254023X (ADAM10-specific) and TAPI-O (ADAM17-specific) inhibitors indicated that WNF significantly triggers both GI254023X- (Fig 3C) and TAPI-O-sensitive (Fig 3D) JMV2770 degradation and confirmed that WNF indeed targets ADAM10 and ADAM17. We also evaluated the impact of WNF treatment on the amyloidogenic BACE1 catalytic activity by means of a specific fluorimetric assay [16]. As shown in Fig 3E, WNF applied at a concentration of 100μg/ml did not affect the JMV1197-sensitive JMV2236 degradation in BACE1-overexpressing HEK293 cells. Moreover, WNF treatments did not modify endogenous BACE1 immunoreactivity in wild-type HEK293 cells (Fig 3F).
We next examined the ability of WNF to stimulate sAPPα production in the previously well-characterized MEFs derived from wild-type, ADAM10−/− and ADAM17−/− animals [20][21][22]. Because the anti-sAPPα antibody DE2B4 is human-specific, we transiently transfected human βAPP751. As shown in Fig 4A and 4B, the three cell lines efficiently overexpressed human βAPP and secreted detectable amounts of human sAPPα at 36 hours post-transfection. Importantly, WNF (100μg/ml) could promote the constitutive (Fig 4A, upper left panel) and PKC-regulated (Fig 4B, upper left panel) release of sAPPα in the wild-type MEFs, but not in ADAM10 and ADAM17 knockout cells, respectively (Fig 4A and 4B, upper right panels). Moreover, the positive effects of WNF on both the constitutive (Fig 4C, left panel) and PDBu-induced (Fig 4D, left panel) JMV2770-hydrolyzing activities observed in wild-type MEFs were fully prevented by ADAM10 (Fig 4C, right panel) and ADAM17 depletion (Fig 4D, right panel), respectively. Altogether, these data firmly established ADAM10 and ADAM17 as genuine mediators of the WNF-dependent α-secretase activation.
To determine whether WNF up-regulates ADAM10 and ADAM17 expression, we first measured the impact of WNF treatment on ADAM10 and ADAM17 protein levels in HEK293 cells by western blot. Surprisingly, WNF (100μg/ml) significantly increased endogenous ADAM10 (Fig 5A, left panels) but not ADAM17 immunoreactivity (Fig 5A, right panels). We next performed quantitative real-time PCR experiments and established that WNF treatment leads to a significant augmentation of ADAM10 but not ADAM17 mRNA levels (Fig 5B). Because WNF apparently triggers ADAM17 maturation, as illustrated by an increase of active ADAM17 (lower band, Fig 5A, upper right panel), velocity sedimentation of ADAM10 and ADAM17 in sucrose step gradients was performed. Partial characterization of the fractions, using anti-Golgi 58K protein and anti-calnexin antibodies as markers of the Golgi apparatus and the endoplasmic reticulum (ER), indicated that these organelles reside in fractions 1-3 and fractions 11-12, respectively (Fig 5C, panels e and f). The results indicated that whereas the distribution profile of endogenous ADAM10 was similar in control and WNF-treated cells (Fig 5C, panels a and b), the maturation of endogenous ADAM17 was strongly enhanced by WNF, increasing ADAM17 maturation in the Golgi/trans-Golgi network (TGN) (compare mature ADAM17 immunoreactivity in fractions 1-3 for panels c and d in Fig 5C) as well as the level of active ADAM17 in the ER/plasma membrane fractions (Fig 5C, lanes 11 and 12 in panels c and d).
Finally, because α-secretase cleaves βAPP in the middle of the Aβ sequence, we hypothesized that WNF treatment could affect amyloid peptide generation in non-neuronal and neuronal human cells transiently overexpressing human βAPP. First, thirty-six hours post-transfection, both HEK293 and SH-SY5Y cells expressed high βAPP levels compared to pcDNA3-transfected cells (Fig 6A and 6B, upper panels). Second, and remarkably, WNF applied at a dose of 100μg/ml significantly reduced the secretion of Aβ40 as well as of the toxic and aggregation-prone Aβ42 peptide in HEK293 cells (Fig 6A) and in the SH-SY5Y neuroblastoma cell line (Fig 6B).
Discussion
Together with Aβ-targeting vaccination, the pharmacological inhibition of the two Aβ-forming enzymes, the β- and γ-secretases, stood during the past decades as the principal and most relevant therapeutic track aimed at preventing, slowing down or curing Alzheimer's disease. However, because BACE1 and γ-secretase cleave, in addition to βAPP, a constantly growing number of other substrates with important physiological functions, this strategy may engender severe deleterious side effects. Given this, a more recently developed alternative consists in the activation of the α-secretases ADAM10 and ADAM17. The principal advantage of such an approach as an anti-AD therapy resides in the fact that stimulation of this cleavage is expected not only to preclude Aβ production but also to support neurotrophism, neuroprotection and neurogenesis through an increased secretion of the βAPP-derived sAPPα metabolite. However, because ADAM10 and/or ADAM17 have more than 80 other substrates, the cleavage of which can yield pathological situations such as cancer and chronic inflammation [9], one should remain very cautious regarding acute pharmacological activation of the α-secretases and rather envision stimulating these proteases via the mild, safe and regular consumption of natural compounds, which would reduce the amyloid load on a long-term basis and could thereby represent a valuable therapeutic alternative for AD treatment [26].
In line with such an approach, the present study investigated the effect of the polyherbal Wattana formula on the non-amyloidogenic processing of βAPP in vitro in various cell lines, by means of complementary techniques aimed at measuring sAPPα secretion, Aβ production, and α-secretase catalytic activities, expression and subcellular distribution. We first established that WNF stimulates the constitutive and the PKC-regulated α-secretase activities in a dose-dependent manner. Importantly, WNF not only stimulates sAPPα secretion in βAPP-overexpressing non-neuronal HEK293 and neuroblastoma N2a cells, thereby indicating its ubiquitous action, but also behaves as a potent enhancer of endogenous sAPPα production in the human SH-SY5Y neuroblastoma cell line. In addition, using pharmacological inhibition and genetic depletion approaches, we formally identified ADAM10 and ADAM17 as the targeted proteases, thereby establishing WNF as a potent α-secretase enhancer and a possible anti-AD agent. It is important to underline here that, as far as the amyloid cascade hypothesis is considered to be at the center of gravity of the pathology, natural α-secretase activators in general, and WNF in particular, are expected to bring preventive rather than curative benefits, since they theoretically impair all the downstream events, such as Aβ oligomerization and fibrillogenesis as well as the subsequent cognitive impairments associated with the disease.
Our observations that WNF does not interfere with the amyloidogenic β-secretase activity but significantly lowers both Aβ40 and Aβ42 production underline that a therapeutic strategy leading to α-secretase activation without modification of β-secretase activity is expected to be sufficient to impair Aβ generation. This is in good agreement with the observation that ADAM10 overexpression alone reduces both Aβ40 and Aβ42 levels in vivo in the brain of a transgenic mouse model of AD [27].
Fig 5 ... GAPDH mRNA levels were determined by real-time PCR. Black bars in histograms correspond to the densitometric analyses normalized with β-actin (A) or mRNA levels normalized with GAPDH (B), are expressed as a percentage of control (non-treated cells, white bars) and represent the means ± SE of 11 to 17 independent determinations. *p<0.0005; **p<0.0001; ns, non-statistically different. (C) HEK293 cells were incubated for 16 hours without (control) or with WNF (100μg/ml) and homogenized, and the subcellular distributions of endogenous ADAM10 (panels a and b) and ADAM17 (panels c and d) as well as the ER marker calnexin (panel e) and the Golgi marker Golgi 58K protein (panel f) were analyzed by western blot after sucrose gradient fractionation. TGN, trans-Golgi network; ER/PM, endoplasmic reticulum/plasma membrane.

From a mechanistic point of view, it is of particular interest to underline that ADAM10 and ADAM17, although presenting a similar general structure, are up-regulated by WNF via two distinct mechanisms. Thus, WNF induces an elevation of ADAM10 immunoreactivity and mRNA levels, thereby demonstrating an effect at the transcriptional level. However, the same treatment has no impact on ADAM17 transcription but rather promotes its maturation/activation, as shown by the marked increase of active ADAM17 and the concomitant decrease of the pro-enzyme (Fig 4C, bottom panels). Because WNF is a mixture of 15 medicinal plants, the most probable explanation is that ADAM10 and ADAM17 are targeted by distinct WNF components, the identification of which remains to be established.
In this respect, some of the molecules identified as part of the WNF by ultra-performance liquid chromatography [14] were indeed recently shown to be beneficial regarding AD pathology. Firstly, the active alkaloid and acetylcholinesterase inhibitor piperine can protect against neurodegeneration and cognitive impairment in a rat model of AD [28]. Secondly, the antioxidant and anti-inflammatory polyphenol gallic acid is able to decrease Aβ toxicity [29], to reduce amyloid fibril formation [30] and to attenuate neuronal damage by preventing Aβ oligomerization [31]. Thirdly, the antioxidant and NFκB inhibitor p-coumaric acid as well as the phenolic compound caffeic acid protect against Aβ25-35-induced neurotoxicity respectively in vitro in PC12 cells [32] and in vivo in rats [33]. Fourthly, ferulic acid, a phenol that is closely related to curcumin and has antioxidant properties, induces a resistance to Aβ42 toxicity in adult mice [34] and reduces amyloid deposition in a mouse model of AD [35]. However, contrary to our findings, all these described effects most likely occur at a late post-Aβ production step rather than at earlier βAPP processing stages. Nevertheless, two recent publications have evidenced that some of these compounds can indeed target βAPP-cleaving secretases. Hence, ferulic acid can reverse the behavioral deficits of the PSAPP transgenic mouse model of AD through a slight reduction of the β-secretase BACE1 stability and activity [36] that we could not detect in the whole extract. Moreover, octyl gallate, the ester of 1-octanol and gallic acid, has been shown to inhibit Aβ generation and to increase sAPPα secretion in vitro and in vivo via an increase of estrogen receptor-mediated ADAM10 activity [37] and could therefore support the WNF-dependent increase of ADAM10 activity/expression observed in the present study.
Supporting the promising anti-AD therapeutic use of medicinal plants, several other plant extracts have been recently reported to convey beneficial effects in vivo in animal models of AD ( [41] for review). Firstly, extracts prepared from Centella asiatica (L.) Urb. (Umbelliferae) were shown to ameliorate cognitive performances and to decrease Aβ levels, oxidative stress and senile plaques formation [42,43]. Secondly, extract from Bacopa monnieri (L.) Wettst. (Brahmi) displays neuroprotective effects, reduces Aβ production and can improve cognitive functions [44,45]. Finally, extracts from Withania somnifera (L.) Dunal diminish behavioral deficits, Aβ production and plaque pathology [46]. Following the present demonstration that WNF is able to increase sAPPα secretion and to reduce Aβ production in vitro, evaluating whether individuals who regularly consume WNF are less prone to developing AD will be of particular interest.
Conclusion
This study clearly establishes WNF as a potent in vitro activator of the non-amyloidogenic processing of βAPP. Because this is accompanied by a reduction of amyloid peptide production, it is our assumption that WNF may be used as a mild natural anti-AD preventive treatment. The identification of the active molecule(s) contained in WNF, as well as the demonstration that WNF can slow down or reverse the pathology in transgenic mouse models of AD, deserve particular attention in the near future. However, it is important to underline that, regardless of these considerations, the use of this polyherbal formula is already possible in further clinical studies in humans, since it is currently prescribed to patients for the treatment of AD-unrelated symptoms and does not display any side effects.
"Biology"
] |
Measurement of relaxation times in extensional flow of weakly viscoelastic polymer solutions
The characterization of the extensional rheology of polymeric solutions is important in several applications and industrial processes. Filament stretching and capillary breakup rheometers have been developed to characterize the extensional properties of polymeric solutions, mostly for high-viscosity fluids. For low-concentration polymer solutions, however, the measurements are difficult with available devices, in terms of the minimum viscosity and relaxation times that can be measured accurately. In addition, when the slow retraction method is used, solvent evaporation can affect the measurements for volatile solvents. In this work, a new setup was tested for filament breakup experiments using the slow retraction method, high-speed imaging techniques, and an immiscible oil bath to reduce solvent evaporation and facilitate particle tracking in the thinning filament. Extensional relaxation times down to approximately 100 μs were measured with the device for dilute and semi-dilute polymer solutions. Particle tracking velocimetry was also used to measure the velocity in the filament and the corresponding elongation rate, and to compare them with the values obtained from the measured exponential decay of the filament diameter.
Introduction
The capillary thinning and breakup of Newtonian and viscoelastic liquid filaments are considerably different (Anna and McKinley 2001; Oliveira and McKinley 2005). The filament thinning is triggered by a capillary instability (Anna and McKinley 2001), and the subsequent evolution of the filament thread for Newtonian fluids is the result of the competition between the driving surface tension, viscosity, and inertial effects, with the filament diameter decreasing linearly with time in the last stage of the breakup process (Entov and Hinch 1997; Vega et al. 2014).
Adding even a small amount of a high molecular weight polymer to the fluid has a significant effect on the filament thinning and breakup (Goldin et al. 1969). Although the process is also initiated by a capillary instability, the presence of the polymer macromolecules generates a quasi-cylindrical filament, which takes more time to thin and pinch off (Middleman 1965; Goldin et al. 1969). During the thinning of viscoelastic liquid filaments, elastic and capillary forces balance each other, while inertial, viscous, and gravitational effects are often negligible. In this elasto-capillary regime, the filament diameter decreases exponentially with time (Entov and Hinch 1997), and tensile stresses grow exponentially because the polymeric chains are elongated at a constant extensional rate, ε̇ = 2/(3λ), where λ is the liquid extensional relaxation time (Bazilevsky et al. 1990). The formation of structures can occur during the thinning of the viscoelastic filament, such as the beads-on-a-string (BOAS) phenomenon (Bhat et al. 2010).
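For reference, the constant rate quoted here follows directly from the exponential decay of the filament diameter; a short textbook-style derivation (d₁ denotes a constant prefactor, an assumption on the notation rather than a quotation from the cited works) is:

```latex
% Elasto-capillary thinning law and the resulting constant extension rate
% of a thinning cylindrical thread (volume conservation gives the factor 2):
d(t) = d_1\, e^{-t/(3\lambda)}, \qquad
\dot{\varepsilon} = -\frac{2}{d(t)}\,\frac{\mathrm{d}d(t)}{\mathrm{d}t} = \frac{2}{3\lambda}
```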
The measurement of the extensional properties of complex fluids is of great relevance for industrial processes such as inkjet printing, fiber spinning, spraying, and atomization. Several studies have used filament and capillary breakup rheometers (Bazilevsky et al. 1981;Bazilevsky et al. 1990; Matta and Tytus 1990;Stelter et al. 2000;Nelson et al. 2011;Dinic et al. 2015) to investigate different aspects of the problem, such as the velocity profile during the thinning of viscoelastic filaments (Gier and Wagner 2012); the effect of the molecular weight and concentration of the polymer on the filament thinning and breakup (Clasen et al. 2006;Tirtaatmadja et al. 2006;Arnolds et al. 2010); the BOAS instability (Oliveira and McKinley 2005;Bhat et al. 2010); the effect of the mass transfer resulting from water absorption in hygroscopic fluids leading to a change of the extensional viscosity (McKinley and Tripathi 2000); the measurement of both the relaxation time and extensional viscosity of viscoelastic fluids (McKinley and Tripathi 2000;Stelter et al. 2000;Anna and McKinley 2001;Nelson et al. 2011;Arnolds et al. 2010;Campo-Deaño and Clasen 2010;Vadillo et al. 2012;Keshavarz et al. 2015;Dinic et al. 2015).
Since the early 1990s, several devices have been developed to generate a uniaxial elongational deformation in viscoelastic fluids, and to quantify their extensional rheology (Galindo-Rosales et al. 2013). Considerable attention has been devoted to the filament stretching extensional rheometer (FiSER) (Anna and McKinley 2001; McKinley et al. 2001), and the capillary breakup extensional rheometer (CaBER™) (McKinley and Tripathi 2000; Anna and McKinley 2001; Rodd et al. 2005). The FiSER device, developed after the seminal work of Matta and Tytus (1990), imposes an exponentially increasing separation of the end-plates, generating an extensional flow of constant deformation rate. A Versatile Accurate Deformation Extensional Rheometer (VADER-1000) was developed by Huang et al. (2016) for highly viscous samples (η₀ ≳ 10³ Pa s) and is available as a new commercial extensional rheometer.
In the CaBER™ apparatus, based on the pioneering work of Bazilevsky et al. (1981, 1990), the fluid is introduced between two cylindrical plates (typically between 4 and 6 mm in diameter) separated by a small distance, usually smaller than the plate diameter, to form a liquid bridge. Then, a rapid displacement of the upper plate is imposed, of the order of the plate diameter, which makes the liquid bridge unstable. The liquid bridge thins under the balance of capillary, viscous, elastic, inertial and gravitational forces. The velocity field far away from the end-plates (rods) is essentially one-dimensional and purely extensional (Schultz and Davis 1982).
As discussed by Galindo-Rosales et al. (2013), most extensional rheometers are suitable for operating only with high-viscosity fluids. Three notable exceptions developed recently are the Rayleigh-Ohnesorge Jet Elongational Rheometer (ROJER) (Keshavarz et al. 2015), the optically-detected elastocapillary self-thinning dripping-onto-substrate (ODES-DOS) extensional rheometer developed by Dinic et al. (2015), and the capillary thinning device developed by Vadillo et al. (2012), which was used to measure relaxation times as low as 80 μs for polystyrene in diethyl phthalate solutions. The ROJER device was used to investigate both the liquid extensional response and the effects of viscoelasticity on the atomization of dilute polyethylene oxide (PEO) solutions. In this extensional rheometer, the viscoelastic liquid jet is perturbed by a piezo-actuator at a prescribed frequency, while a stroboscopic imaging setup captures the motion of the liquid in slow motion, which allows a detailed analysis of the time evolution of the jet diameter during the breakup process. The ODES-DOS device was used to measure the response of aqueous dilute PEO solutions with small relaxation times (λ ≲ 1 ms) and low viscosities (η₀ ≲ 20 mPa s).
It is also important to highlight the works of Walker (2001a, b, 2006), which used a method based on the work of Schümmer and Tebel (1982), to measure elongational properties of low viscosity fluids. The fluids were sprayed using an air atomizer, and the corresponding drop size distributions measured using a diffraction-based size analyzer. The results showed that viscoelasticity increases the mean drop diameter and a correlation between the measured relaxation times and the average droplet diameters was found.
Despite the variety of extensional rheometers developed so far, the HAAKE™ CaBER™ 1 (Thermo Scientific) is the leading commercially available apparatus for measuring the relaxation time and the extensional viscosity of dilute polymer solutions. According to Rodd et al. (2005), the minimum relaxation time measurable with this device is of the order of 1 ms. However, relaxation times of such a small magnitude are very difficult to measure, especially for low-viscosity liquids, due to inertial effects and the short time for the filament thinning and breakup. Campo-Deaño and Clasen (2010) developed the so-called slow retraction method (SRM) by combining the CaBER™ 1 apparatus with high-speed imaging. In this technique, the filament thinning is promoted by a slow extension of the liquid bridge, contrary to the fast step strain of the conventional technique. Using the slow retraction method, inertial effects were minimized, and relaxation times as low as 240 μs were measured for aqueous solutions of PEO (Campo-Deaño and Clasen 2010). Solvent evaporation in volatile fluids or water absorption in hygroscopic liquids may play a significant role in the measurements using this method, and these effects can change the polymer concentration during the quasi-static liquid bridge stretching, depending on the solution volatility or hygroscopicity. In this work, we describe a miniaturized filament breakup device combining the slow retraction method and high-speed imaging techniques. A slow, linearly increasing separation of the end-plates is used to trigger the elongational flow. In order to reduce solvent loss by evaporation or water absorption in hygroscopic solutions, the filament thinning and breakup take place in an immiscible oil, which can also be useful for particle tracking to measure the velocity field in the thinning filament. We measure the extensional relaxation time of aqueous polyacrylamide (PAA) solutions over a wide range of concentrations, including ultradilute polymer solutions. Additionally, two PEO solutions matching those used recently by Keshavarz et al. (2015) in the ROJER device were used in the present investigation in order to validate the experimental technique and show its applicability for measuring relaxation times below 1 ms.
Experimental method
Experimental setup
Digital images of the liquid bridge with a resolution of 1280 × 1000 pixels were acquired with a high-speed CMOS camera (FASTCAM MINI UX100) (G), operated typically at 5000 frames per second (fps), using an exposure time of 50 μs. To measure relaxation times below 1 ms, the camera frame rate was increased up to 40,000 fps, while the spatial resolution and the exposure time were decreased down to 1280 × 120 pixels and 5 μs, respectively. The camera was connected to a set of optical lenses (H) (OPTEM Zoom 70 XL) with variable magnification from 1× to 5.5×. The resulting image scale varied between 3.44 and 0.624 μm/pixel. The camera could be displaced both horizontally and vertically using a triaxial translation stage (I) to focus the liquid bridge in the field of view. The fluid was illuminated from the back side with white light provided by an optical fiber (J) connected to a metal halide light source (LEICA EL6000). The optical fiber was connected to a set of focusing lenses, providing a focused light beam approximately 25 mm in diameter. A frosted diffuser (K) was placed between the optical fiber and the cell to provide uniform illumination. All these elements were mounted on top of an optical table (L) to reduce vibrations. A microthermocouple (100 μm in diameter) was used to measure the temperature of the PEO and PAA liquids, which was T = 20 ± 1 °C and 25 ± 1 °C, respectively.
Experimental protocol
The extensional relaxation times of the polymeric solutions were measured using the following procedure. A liquid droplet was gently placed between the two rods located in the empty tank, creating a liquid bridge about 500 ± 50 μm in length with the triple contact lines anchored to the edges of the supporting rods. In the experiments with an outer liquid bath, the tank was subsequently filled with an immiscible oil until the liquid bridge was completely submerged in the bath. To induce the filament thinning, we used the slow retraction method. For this purpose, the mobile rod was displaced at a constant velocity of 5 μm/s while the other rod remained fixed. This speed was low enough for the liquid bridge to undergo a sequence of equilibrium states, until the instability occurs spontaneously above a critical distance between the rods (Slobozhanin and Perales 1993; Campo-Deaño and Clasen 2010). The instant t = 0, illustrated in Fig. 2, corresponds approximately to the onset of the instability. The subsequent filament thinning and breakup was recorded using the high-speed camera, as illustrated in Fig. 2 for some representative times.
The images acquired in the course of the experiments were processed to detect the filament interface with a subpixel resolution technique (Ferrera et al. 2006; Vega et al. 2009). The diameter of the filament at its mid-plane between the upper and lower filament ends, d_mid, and the minimum diameter along the filament axis, d_min, were determined for each image. Representing the diameter data as a function of time in a semi-log plot, it is possible to identify the time interval within which the diameter decays exponentially for viscoelastic fluids. This interval corresponds to the elasto-capillary regime, where the balance between surface tension and tensile stresses produces the homogeneous stretching of a quasi-cylindrical thread. The time evolution of the filament diameter in this regime was fitted by the exponential function d_min(t) = d_1 exp[-t/(3λ)] (1) (Bazilevsky et al. 1990), which allows the calculation of the extensional relaxation time, λ. Each experiment was performed five times to assess the reproducibility and to estimate the standard deviation of the measured relaxation times. The pathlines and velocity of some tracer particles suspended in the liquid filament were measured using particle tracking velocimetry (PTV). For this purpose, polystyrene tracer particles with density ρ = 1055 kg/m³ and 2 μm average diameter were introduced in the liquid before the experiment. The presence of an outer bath with a refractive index close to that of the working fluid minimizes refraction of light in the cylindrical fluid filament, allowing the position of the tracer particles in the acquired images to be detected precisely. The particles in the fluid filament essentially moved in the axial direction, as expected for a purely extensional flow (Gier and Wagner 2012). The optical distortion produced by the quasi-cylindrical interface in the axial direction is negligible. The tracking of the position of the particles over time was performed with the open-source 3D computer graphics software BLENDER (version 2.73a), using its feature tracking capabilities and a Python script to determine the coordinates of the particles over time.
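As an illustration of this fitting step, here is a minimal sketch (our own, not the authors' code) of extracting λ from a measured diameter-time series under the elasto-capillary law of Eq. (1); the array names and the fitting window are placeholder assumptions.

```python
# Minimal sketch: extract the extensional relaxation time (lambda) from the
# exponential thinning d_min(t) = d1 * exp(-t / (3 * lambda)), Eq. (1).
# t_start/t_end delimit the elasto-capillary window identified on the
# semi-log plot; both are illustrative, not values from the paper.
import numpy as np

def relaxation_time(t, d_min, t_start, t_end):
    m = (t >= t_start) & (t <= t_end)
    # Linear fit of ln(d_min) vs t: slope = -1 / (3 * lambda)
    slope, _ = np.polyfit(t[m], np.log(d_min[m]), 1)
    return -1.0 / (3.0 * slope)

# Synthetic check with lambda = 2 ms
t = np.linspace(0.0, 0.02, 200)            # s
d = 1e-4 * np.exp(-t / (3.0 * 2e-3))       # m
print(relaxation_time(t, d, 0.0, 0.02))    # ~0.002 s
```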
Fluids
The test fluids used in the experiments were polymeric solutions of PEO (Sigma-Aldrich, Mw = 10⁶ g/mol) in a mixture of glycerol/water (40/60 wt.%) and of polyacrylamide (Polysciences, Mw = 18 × 10⁶ g/mol) in water. Stock solutions with concentration c were prepared by dissolving the polymers in the solvent by agitation with a magnetic stirrer at low angular speeds, to minimize mechanical degradation of the long polymer chains. The PEO concentrations of the two solutions used were 100 ppm (PEO100) and 500 ppm (PEO500), which match those used by Keshavarz et al. (2015) in the ROJER device. The PAA concentrations ranged between 2 and 1000 ppm. Experiments were conducted both in air and using a liquid bath to minimize solvent evaporation. Silicone oils (SO) with kinematic viscosities ranging from 0.65 to 35 cSt were used to form the liquid bath. The variation of the shear viscosity, η, with shear rate, γ̇, was measured for all PEO and PAA polymer solutions using a shear rheometer (PHYSICA MCR301, Anton Paar) with a cone-plate geometry of 75 mm diameter and 1° angle. As shown in Fig. 3, shear-thinning in the PAA solutions becomes more pronounced when the polymer concentration increases. This effect is more noticeable for c > 50 ppm, close to the overlap concentration (Sousa et al. 2015). On the other hand, the shear viscosity of the PEO solutions is nearly independent of the shear rate due to the low concentrations used in the experiments, well below the overlap concentration, c* = 1400 ppm. The surface tension σ was measured with the TIFA method (Cabezas et al. 2005) and is shown in Table 1 for different oil/water systems. The refractive index n was measured using an Abbe refractometer (WYA-2S, OPTIC IVYMEN SYSTEM). The corresponding values for the different oils are listed in Table 1, while the values for the PAA solutions and distilled water are shown in Table 2. Figure 4 shows the time evolution of the ratio between the minimum filament diameter, d_min, and the rod diameter, d_0, for the PEO solutions surrounded by air and also in an oil bath of the least viscous silicone oil. We consider the minimum diameter of the filament instead of the filament diameter at the middle plane between the upper and lower filament ends. The elasto-capillary regime occurs over about one decade of decrease of d_min/d_0. The extensional relaxation times measured in air for the PEO100 and PEO500 fluids at T = 20 ± 1 °C were λ = 1.01 ± 0.03 ms and 3.9 ± 0.3 ms, respectively, which are in good agreement with the results obtained by Keshavarz et al. (2015) in their recent investigation using the ROJER device. The small differences between the two sets of measurements (1% and 28% for PEO100 and PEO500, respectively) are probably due to variability in the polymer batches and small temperature differences between the two works. The same measurements were repeated at T = 20 ± 1 °C using the least viscous oil as the outer fluid; the results presented in Fig. 4 show a similar behavior, and the corresponding extensional relaxation times were λ = 1.27 ± 0.04 ms and 3.89 ± 0.01 ms for the PEO100 and PEO500 fluids, respectively. The slow retraction method implemented in this work in a miniaturized device, together with the use of a low-viscosity oil bath, provides reliable measurements of the relaxation time even below 1 ms, as will be shown.
Results and discussion
Beads-on-a-string structures were formed over a substantial time interval of the final stage of the filament thinning process for the PEO solutions tested. This phenomenon is indirectly illustrated in Fig. 4, since the dimensionless diameter becomes nearly constant for t ≳ 10 ms and t ≳ 30 ms for PEO100 and PEO500, respectively. The competition between capillary, elastic, and inertial forces leads to the formation of an array of beads connected axially by thin filaments. The onset of this type of instability is commonly observed for dilute solutions of high molecular weight flexible polymers. In particular, PEO solutions, which have been extensively used in the investigation of the capillary thinning and breakup of viscoelastic samples, frequently form these structures when stretched (Oliveira and McKinley 2005; Rodd et al. 2005; Tirtaatmadja et al. 2006).
The applicability of a capillary breakup extensional rheometer to low-viscosity fluids can be improved by using the slow retraction method and smaller plate diameters, because inertial and gravitational effects are minimized in this way. However, during the slow retraction of the moving plate, the evaporation of volatile solvents or the absorption of water vapor by hygroscopic fluids may produce misleading results. To reduce such undesirable effects, the liquid bridge of the PAA polymer solutions tested was submerged in an oil bath. Figure 5 shows the temporal evolution of the filament diameter for fluids PAA100 and PAA750 submerged in 0.65 cSt, 5 cSt, and 35 cSt silicone oils. As can be observed, the elasto-capillary regime was reached for t ≳ 0.03 s and t ≳ 0.3 s for fluids PAA100 and PAA750, respectively. The extensional relaxation times determined are nearly independent of the outer bath viscosity, despite the wide range of viscosities of the oils used in the tests, covering a variation of nearly two orders of magnitude. In the experiments in air, the liquid bridge evolution was slightly faster, and the exponential thinning of the filament occurred earlier, but the slopes of the linear regions where the elasto-capillary regime was reached are comparable, and consequently the corresponding relaxation times are also similar. To allow a better comparison of the results, the curves were shifted in time to place the elasto-capillary regime in similar time ranges for each polymer concentration. To minimize water evaporation in the experiments in air, particularly during the initial slow stretch of the filament, it is recommended to add some water in advance to the bottom of the reservoir (transparent tank D in Fig. 1) to saturate the environment in the vicinity of the filament.
In order to assess the influence of the outer bath viscosity on the filament thinning of the PAA solutions, experiments for c = 100 and 750 ppm were done with five different oil baths with kinematic viscosities varying between 0.65 and 35 cSt. In all cases, the elasto-capillary regime was observed over a significant time interval, and the corresponding extensional relaxation times were determined (Fig. 6).
Fig. 6 Extensional relaxation time λ as a function of the oil bath kinematic viscosity for two PAA aqueous solutions: c = 750 ppm (solid symbols); c = 100 ppm (hollow symbols)
The outer bath viscosity did not significantly affect the extensional relaxation times measured in the range of oil viscosities tested. Nevertheless, in the following experiments using an outer oil bath, we always use the least viscous silicone oil, to minimize the shear stress at the interface. Figure 7 shows the time evolution of the filament diameter for the PAA aqueous solutions surrounded by the 0.65 cSt silicone oil at T = 25 °C. The concentration of PAA, and consequently the relaxation time, increases in the direction of the arrow. In all cases (except for the solvent), an elasto-capillary regime was identified (indicated in the figure by the red lines), and the relaxation time was determined by fitting (1) to the variation of the minimum filament diameter as a function of time. For the solutions with higher concentrations, the filament becomes asymmetric relative to the mid-plane between the upper and lower filament ends due to the formation of a bead near the centre of the filament, followed by the onset of multiple beads along the thread. These events occurred at the times indicated in Fig. 7c, and are illustrated in Fig. 8. At t ≈ 0.9 s, the shape of the filament changes and is no longer cylindrical [inset (iii)]. Then, a single bead appears [inset (iv)], followed by multiple small beads [inset (v)]. After the onset of these instabilities, the filament diameter is no longer uniform, and the results are not reliable for the measurement of λ. Therefore, to determine the extensional relaxation time, we restricted the analysis to the interval in which the filament presented a quasi-cylindrical shape. Figure 9 shows a comparison between the extensional relaxation times measured using four techniques: the slow retraction method implemented in our device with and without an outer immiscible liquid bath, the slow retraction method implemented in the commercial CaBER™ 1 device (with a plate separation speed of 0.18 mm/s), and the standard fast stretching procedure also implemented in the commercial CaBER™ 1 rheometer. In this last case, the rods were separated exponentially for 50 ms from 2 mm to a final distance between 5.63 and 7.42 mm, depending on the fluid tested. The rods in the CaBER™ 1 apparatus were 4 mm in diameter. Inertial effects prevented us from obtaining reliable results with the CaBER™ 1 device for relaxation times below about 10 ms, which corresponds approximately to PAA concentrations below 100 ppm. Inertial effects were also relevant when the slow retraction method was implemented in the CaBER™ 1 device, owing to the large diameter of the rods. Overall, there is good agreement between the results obtained with the CaBER™ 1 device and the measurements with the device developed in this work, particularly for the cases that use an outer liquid bath.
The results obtained with the PAA aqueous solutions exhibit a power-law dependence on the concentration, λ [ms] = 0.045 (c [ppm])^1.14, with a power-law exponent close to one, in agreement with previous works (Clasen et al. 2006; Arnolds et al. 2010).
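As a quick consistency check (our arithmetic, not a number quoted from the paper), the correlation evaluated at c = 100 ppm gives λ ≈ 0.045 × 100^1.14 ≈ 8.6 ms, in line with the elongation rate ε̇ = 2/(3λ) ≈ 80 s⁻¹ reported below for the 100 ppm solution.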
Particle tracking velocimetry (PTV) was used to track the positions of several particles as a function of time and to compute their velocities in the liquid filament during the elasto-capillary regime. Figures 10 and 11 show the axial velocity component, v_z, and the axial distance, z − z_0, from the stagnation point position, z_0, of three of the particles tracked in the course of the experiments. These particles were located next to the filament symmetry axis to minimize optical distortion, although we emphasize that the use of the oil bath minimizes light refraction, and also that the streamwise location of a particle is not affected by the radial curvature of the liquid filament. It must be noted that the stagnation point does not necessarily lie on the filament midplane (Gier and Wagner 2012). We consistently calculated its position z_0 as the location where the linear fit to the (z, v_z) data has zero velocity. Our results show that v_z is approximately given by the uniform uniaxial extensional flow, v_z = ε̇(z − z_0), where ε̇ is the constant elongation rate in the elasto-capillary regime. As expected, the particle position, |z(t) − z_0|, represented in a semi-log plot is close to a straight line, and the elongation rate can be computed from the slope of such a line. The average values obtained from Figs. 10 and 11 are ε̇ = 77 s⁻¹ and 1006 s⁻¹ for c = 100 and 10 ppm, respectively. These values are consistent with those obtained from the relaxation times measured from the filament diameter decay as a function of time: ε̇ = 2/(3λ) ≈ 80 s⁻¹ and 900 s⁻¹ for c = 100 and 10 ppm, respectively.
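A minimal sketch of the PTV reduction described above (our illustration, with placeholder data): fit v_z versus z to a straight line, read the elongation rate from the slope, and locate the stagnation point as the zero-velocity intercept.

```python
# Minimal sketch: uniform uniaxial extension v_z = eps_dot * (z - z0).
# A linear fit of v_z vs z gives eps_dot (slope) and z0 (zero-velocity point).
import numpy as np

def elongation_rate(z, v_z):
    eps_dot, intercept = np.polyfit(z, v_z, 1)
    z0 = -intercept / eps_dot
    return eps_dot, z0

# Synthetic check: eps_dot = 80 1/s, z0 = 0.3 mm (illustrative values)
z = np.linspace(0.0, 1.0e-3, 20)        # m
v = 80.0 * (z - 0.3e-3)                 # m/s
print(elongation_rate(z, v))            # ~(80.0, 3e-4)
# Cross-check with the diameter decay: eps_dot should be close to 2 / (3 * lambda).
```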
The fact that the inner and outer liquids can have similar refractive indices is a significant advantage of the use of an immiscible oil bath in our setup, making it easy to track the position of tracer particles in the fluid filament during the extensional thinning.
Fig. 11 Axial velocity component v_z (middle graph) and distance |z − z_0| from the stagnation point (right graph). The experiment was conducted with a solution of 10 ppm of PAA in water surrounded by 0.65 cSt silicone oil. The symbols in the graphs correspond to those labeling the tracked particles in the left image
That feature can also be exploited to visualize the deformation, due to extensional flow, of flexible components of the tested fluid, such as red blood cell deformation in blood or DNA stretching. In these cases, the visualization of the fluid elements is better achieved using fluorescence imaging by means of laser illumination and an appropriate barrier filter or dichroic mirror to block the laser light between the fluid filament and the high-speed camera sensor, similar to the setup used by Gier and Wagner (2012). In this way, only the fluorescence emitted by suitable fluorochromes attached to the fluid components is allowed to reach the camera sensor.
Conclusions
A setup for monitoring the filament thinning and breakup of a liquid placed between two cylindrical rods was developed, combining the slow retraction method with high-speed imaging techniques. The use of an immiscible oil bath reduces solvent loss by evaporation of volatile liquids, or water absorption in hygroscopic fluids. Gravitational and inertial effects are minimized, due to the use of small-diameter cylindrical rods and a slow retraction of the moving rod to induce the filament thinning, allowing reliable measurements of the extensional relaxation time of dilute polymer solutions down to about 100 μs. Using a low-viscosity immiscible oil bath with a refractive index similar to that of the test fluid makes it possible to use particle tracking velocimetry to measure the velocity of tracer particles and the corresponding elongation rate of the thinning filament, confirming the extensional relaxation times measured from the exponential decay of the filament diameter.
"Physics"
] |
Rotational symmetry breaking in superconducting nickelate Nd0.8Sr0.2NiO2 films
The infinite-layer nickelates, isostructural to the high-Tc cuprate superconductors, have emerged as a promising platform to host unconventional superconductivity and stimulated growing interest in the condensed matter community. Despite considerable attention, the superconducting pairing symmetry of the nickelate superconductors, the fundamental characteristic of a superconducting state, is still under debate. Moreover, the strong electronic correlation in the nickelates may give rise to a rich phase diagram, where the underlying interplay between the superconductivity and other emerging quantum states with broken symmetry is awaiting exploration. Here, we study the angular dependence of the transport properties of the infinite-layer nickelate Nd0.8Sr0.2NiO2 superconducting films with Corbino-disk configuration. The azimuthal angular dependence of the magnetoresistance (R(φ)) manifests the rotational symmetry breaking from isotropy to four-fold (C4) anisotropy with increasing magnetic field, revealing a symmetry-breaking phase transition. Approaching the low-temperature and large-magnetic-field regime, an additional two-fold (C2) symmetric component in the R(φ) curves and an anomalous upturn of the temperature-dependent critical field are observed simultaneously, suggesting the emergence of an exotic electronic phase. Our work uncovers the evolution of the quantum states with different rotational symmetries in nickelate superconductors and provides deep insight into their global phase diagram.
The conventional superconductivity with transition temperature (Tc) lower than 40 K was successfully explained by the Bardeen-Cooper-Schrieffer (BCS) theory, in which electrons with anti-parallel spins and time-reversed momenta form Cooper pairs, and the superconducting order parameter is of isotropic s-wave symmetry 1,2. However, the discovery of high-temperature superconductivity (Tc > 40 K) in cuprates is beyond the expectation of the BCS theory, and the superconducting order parameters of cuprates are believed to be of nodal d-wave symmetry 3,4. Thereafter, the mechanism of unconventional high-Tc superconductivity has become one of the most important puzzles in the physical sciences. Recently, the observation of superconductivity in infinite-layer nickelates with a maximal Tc of 15 K in Nd1-xSrxNiO2 has motivated extensive research in this emerging superconducting family 5-9. Mimicking the d9 electronic configuration and the layered structure including CuO2 planes of the cuprates, the isostructural infinite-layer nickelates are promising candidates for high-Tc unconventional superconductivity 5-9. Discerning the similarities and the differences between the nickelates and the cuprates, especially in the symmetry of the superconducting order parameters, should be of great significance for understanding the mechanism of unconventional high-Tc superconductivity.
Theoretical calculations have suggested that the nickelates are likely to give rise to a d-wave superconducting pairing, analogous to the cuprate superconductors. However, a consensus has not been reached and there are several proposals, including dominant dx2-y2-wave 10,11, multi-band d-wave 12,13, and even a transition from s-wave to (d+is)-wave and then to d-wave depending on the doping level and the electron hopping amplitude 14. Experimentally, through single-particle tunneling spectroscopy, different spectroscopic features showing s-wave, d-wave, and even a mixture of them are observed at different locations of the nickelate film surface, which complicates the determination of the pairing symmetry in the nickelates 15. The London penetration depth of the nickelate family has also been measured, and the results on La-based and Pr-based nickelate compounds support the existence of a d-wave component 16,17. However, the Nd-based nickelate, Nd0.8Sr0.2NiO2, exhibits more complex behaviors that may be captured by a predominantly isotropic nodeless pairing 16,17. The pairing symmetry of the superconducting order parameter in the nickelate superconductors, the fundamental characteristic of the superconducting state, is still an open question; thus, further explorations with diverse experimental techniques are highly desired.
In addition to the mystery of the superconducting pairing symmetry, the strong electronic correlation in the nickelates is another element that makes these systems intriguing. The strong correlation is theoretically believed to play an important role in the nickelate systems 8,9,18,19, and the strong antiferromagnetic (AFM) exchange interaction between Ni spins has been experimentally detected 20. Generally, strongly correlated electronic systems are anticipated to host a rich phase diagram and multiple competing states including superconductivity, magnetic order, charge order, pair density wave (PDW), etc. 21,22. In the nickelate thin films, the charge order, a spatially periodic modulation of the electronic structure that breaks the translational symmetry, has been experimentally observed by resonant inelastic X-ray scattering (RIXS) 23-25. However, the charge order is only observable in the lower doping regime where the nickelates are non-superconducting. The interplay between the superconductivity and the charge order, as well as other underlying symmetry-broken states, is still awaiting exploration. With these motivations, we investigate the polar (θ) and azimuthal (φ) angular dependence of the critical magnetic field and the magnetoresistance of infinite-layer Nd0.8Sr0.2NiO2 superconducting films. The perovskite precursor Nd0.8Sr0.2NiO3 thin films are first deposited on SrTiO3 (001) substrates by pulsed laser deposition (PLD). The apical oxygen is then removed by the soft-chemistry topotactic reduction method using CaH2 powder. Through this procedure, the nickelate thin films undergo a topotactic transition from the perovskite phase to the infinite-layer phase, and thus the superconducting Nd0.8Sr0.2NiO2 thin films are obtained 5. Figure 1a presents the schematic crystal structure of Nd0.8Sr0.2NiO2. In agreement with previous reports 5, the temperature dependence of the resistance R(T) exhibits metallic behavior from room temperature to low temperature, followed by a superconducting transition beginning at Tc,onset = 14.7 K (Fig. 1b). Here, Tc,onset is determined as the point where R(T) deviates from the extrapolation of the normal state resistance (RN).
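As a small illustration of the Tc,onset criterion quoted above, the sketch below (our own; the fit window and deviation threshold are assumptions, not values from the paper) extrapolates the normal-state resistance linearly and returns the highest temperature at which R(T) falls measurably below it.

```python
# Minimal sketch: Tc_onset = temperature where R(T) deviates from the
# extrapolated normal-state resistance. Window and 1% tolerance are assumed.
import numpy as np

def tc_onset(T, R, fit_lo=20.0, fit_hi=40.0, tol=0.01):
    m = (T >= fit_lo) & (T <= fit_hi)
    a, b = np.polyfit(T[m], R[m], 1)       # linear normal-state extrapolation
    below = (T < fit_lo) & (R < (1.0 - tol) * (a * T + b))
    return T[below].max() if below.any() else None
```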
Note that the R(T) curve shows a considerably broad superconducting transition with a smooth tail, which can be described by the Berezinskii-Kosterlitz-Thouless (BKT) transition in two-dimensional (2D) superconductors 26-29. As shown in the inset of Fig. 1b, the R(T) curve under 0 T can be reproduced by the BKT transition using the Halperin-Nelson equation 30, R(T) = R0 exp{-2[b(Tc' - TBKT)/(T - TBKT)]^(1/2)} (R0 and b are material-dependent parameters, and Tc' is the superconducting critical temperature), yielding a BKT transition temperature TBKT of 8.5 K. An apparent difference is also noted between the R(T) curves under in-plane and out-of-plane magnetic fields (inset of Fig. 1b), implying the anisotropy of the superconductivity.
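To make the BKT analysis concrete, here is a minimal fitting sketch (ours, not the authors' code). Since b and (Tc' − TBKT) enter the reconstructed Halperin-Nelson form only as a product, we fit the lumped parameter B = b(Tc' − TBKT); the data, window, and initial guesses are illustrative assumptions.

```python
# Minimal sketch: fit the resistive tail with the Halperin-Nelson form
# R(T) = R0 * exp(-2 * sqrt(B / (T - T_BKT))), with B = b * (Tc' - T_BKT).
import numpy as np
from scipy.optimize import curve_fit

def hn(T, R0, B, T_bkt):
    dT = np.clip(T - T_bkt, 1e-6, None)   # guard against the invalid region
    return R0 * np.exp(-2.0 * np.sqrt(B / dT))

# Synthetic check with T_BKT = 8.5 K and R0 = 98.9 Ohm (the RN quoted in the text)
T = np.linspace(9.5, 14.0, 40)
R = hn(T, 98.9, 7.0, 8.5)
popt, _ = curve_fit(hn, T, R, p0=[100.0, 5.0, 8.0])
print(popt)                               # ~[98.9, 7.0, 8.5]
```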
To obtain more insight into the anisotropic superconductivity in the Nd0.8Sr0.2NiO2 thin films, the critical magnetic field and magnetoresistance under different magnetic field orientations are measured. Here, the Corbino-disk configuration is used to eliminate the influence of the current flow in angular-dependent magnetoresistance measurements 31, which cannot be completely avoided in standard four-probe measurements 32,33. The schematic image and the optical photo of a Corbino-disk device are shown in Fig. 1c.
To start with, the temperature dependence of the critical field Bc is measured under the magnetic field applied along the c-axis (denoted as ⊥), the a/b-axis (∥, 0⁰), and the ab diagonal direction (∥, 45⁰). Here, Bc is defined as the magnetic field required to reach 50% of the normal state resistance (RN = 98.9 Ω), and the Bc(T) curves are collected in Fig. 1d. The T-linear dependence of Bc⊥(T) and the (Tc - T)^(1/2) dependence of Bc∥,0⁰(T) and Bc∥,45⁰(T) near Tc show agreement with the phenomenological 2D Ginzburg-Landau (G-L) formulas 34: Bc⊥(T) = [Φ0/(2πξG-L(0)²)](1 - T/Tc) and Bc∥(T) = [√12 Φ0/(2πξG-L(0)dsc)](1 - T/Tc)^(1/2), where Φ0 is the flux quantum, ξG-L(0) is the zero-temperature G-L coherence length, and dsc is the superconducting thickness. The consistency with the 2D G-L formula near Tc indicates the 2D nature of the superconductivity in the Nd0.8Sr0.2NiO2 thin films. To further study the dimensionality of the superconductivity, we measure the polar angular dependence of the critical magnetic field Bc(θ) for a Nd0.8Sr0.2NiO2 thin film at T = 6 K.
Here, θ represents the angle between the magnetic field and the c-axis of the Nd0.8Sr0.2NiO2. As shown in Fig. 1e, the Bc(θ) curve exhibits a prominent angular dependence on the external magnetic field, and a cusp-like peak is clearly resolved around θ = 90⁰ (B ⊥ c-axis). The peak in Bc(θ) around 90⁰ can be well reproduced by the 2D Tinkham model and cannot be captured by the 3D anisotropic mass model, which qualitatively demonstrates the behavior of 2D superconductivity 35 (inset of Fig. 1e).
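For reference, the two angular models compared in the inset of Fig. 1e are commonly written as below (θ measured from the c-axis, as in the text); these are the standard textbook forms, not equations transcribed from this paper.

```latex
% 2D Tinkham model
\left|\frac{B_c(\theta)\cos\theta}{B_{c\perp}}\right|
  + \left(\frac{B_c(\theta)\sin\theta}{B_{c\parallel}}\right)^{2} = 1
% 3D anisotropic mass (Ginzburg-Landau) model
\left(\frac{B_c(\theta)\cos\theta}{B_{c\perp}}\right)^{2}
  + \left(\frac{B_c(\theta)\sin\theta}{B_{c\parallel}}\right)^{2} = 1
```

The cusp at θ = 90⁰ produced by the Tinkham form, absent from the smooth anisotropic-mass ellipse, is what distinguishes 2D from 3D behavior.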
To obtain a more comprehensive depiction of the anisotropy, the polar angular dependence of the magnetoresistance R(θ) at various temperatures and magnetic fields was measured, and two representative R(θ) curves at 5 K under 12 T and 8 T are shown in Fig. 1f and g, respectively. The most notable features are two sharp dips at 90⁰ and 270⁰, corresponding to B ⊥ c-axis. The two sharp dips correspond to the cusp-like peak in the Bc(θ) curve, resulting from the quasi-2D anisotropy. With varying temperatures and magnetic fields, the two sharp dips are observed in all R(θ) curves measured in the superconducting region (Supplementary Fig. 2), further confirming the quasi-2D nature of the superconducting Nd0.8Sr0.2NiO2 thin films and suggesting that the layered superconducting NiO2 planes should mainly account for the superconducting properties in our transport measurements. Additionally, small humps at approximately 90⁰ ± 45⁰ and 270⁰ ± 45⁰ are observed under 8 T and 12 T (marked by black arrows in Fig. 1f and g, respectively), while four more subtle kinks at 90⁰ ± 20⁰ and 270⁰ ± 20⁰ can be seen under 12 T (marked by the black dashed line in Fig. 1g). Considering the crystal structure of the infinite-layer nickelate, the humps and kinks with relatively small variations may originate from the magnetic moment of the rare-earth Nd3+ with 4f electrons, which slightly affects the superconductivity of the adjacent NiO2 planes. We then measure the azimuthal angular dependence of the magnetoresistance R(φ) with the magnetic field rotating within the ab-plane. The R(φ) curves exhibit C4 symmetric anisotropy, with minima at 0⁰, 90⁰, 180⁰, and 270⁰ (along the a/b-axis) and maxima at 45⁰, 135⁰, 225⁰, and 315⁰ (45⁰ to the a/b-axis) from 8 K to 11 K, and the anisotropy becomes indistinguishable when the temperature is increased to 14 K (the top panel in Fig. 2c).
To confirm the correspondence between the minima of the C4 R(φ) and the a/b-axis of the Nd0.8Sr0.2NiO2 thin films, control experiments have been carefully conducted.
Specifically, the Nd0.8Sr0.2NiO2 Corbino-disk device is remounted and remeasured after rotating by a finite in-plane angle Δφ. Through the comparison between the initial results R(φ) and the remeasured results after rotation R(φ+Δφ), the minima (maxima) of the C4 symmetry remain fixed to the a/b-axis (45⁰ to the a/b-axis), verifying that the C4 symmetry of the R(φ) is an intrinsic property of the Nd0.8Sr0.2NiO2 thin films (Supplementary Fig. 3). Note that the R(φ) curve at 14 K is already larger than the normal state resistance RN of 98.9 Ω, and is almost isotropic within the experimental resolution. Thus, the observed C4 symmetry below 11 K should be related to the superconducting characteristics of the Nd0.8Sr0.2NiO2 thin films. Moreover, considering the quasi-2D nature of the superconductivity in the Nd0.8Sr0.2NiO2 and the large magnetoresistance amplitudes of the C4 anisotropy (ΔRC4/R) (approximately 10% under 8 K and 8 T in Fig. 2c; around 20% under 5.5 K and 12 T in Supplementary Fig. 6), the C4 symmetry is not likely owing to the magnetic moment of the Nd3+ between the NiO2 planes, as discussed in Supplementary Fig. 8, but should be ascribed to the superconductivity in the NiO2 planes. Previously, a C4 symmetric R(φ) has also been reported in the cuprates. However, that C4 anisotropy is observed in the normal state with a magnitude of merely 0.05% and is attributed to magnetic order 36, representing a different mechanism from our observations. The in-plane critical field of a d-wave superconductor is theoretically predicted to exhibit a C4 symmetric anisotropy owing to the d-wave pairing symmetry 37. The C4 anisotropic critical field as well as the C4 anisotropic magnetoresistance have been experimentally used to determine the d-wave superconductivity in cuprate superconductors 38 and heavy fermion superconductors 39, etc. Therefore, the C4 symmetry of our R(φ) curves is supposed to imply a C4 symmetric critical field consistent with the predominant d-wave pairing (see the discussion of Fig. 2d and e below). In the following, elaborated experiments and analyses are discussed to exclude possible extrinsic origins of the C2 feature. First, through the aforementioned remounted measurements after an in-plane rotation of Δφ, the C2 symmetry is confirmed to be invariant with respect to the sample mounting, since R(φ+Δφ) can nicely overlap with R(φ) after slight shifts (Supplementary Fig. 3). Second, the C2 features cannot be explained by a trivial misalignment of the magnetic field, because the φ angles of the C2 symmetry maxima nearly correspond to the minima of the misalignment-induced features (Supplementary Fig. 4). Third, the Corbino-disk configuration excludes the anisotropic vortex motion due to unidirectional current flow, which has been reported in previous works using standard four-probe or Hall-bar electrodes 32,33. Fourth, the C2 features superimposed on the C4 symmetric R(φ) can be consistently observed in many other samples in large magnetic fields, demonstrating the strong reproducibility of the C2 and C4 anisotropy (Supplementary Fig. 13).
To quantitatively study the evolution of the C2 and C4 anisotropy of the R(φ), the C2 components (ΔRC2) and C4 components (ΔRC4) of each R(φ) curve at different temperatures and magnetic fields are extracted through trigonometric function fitting (Supplementary Fig. 7). Among them, the fitting curve of the R(φ) under 2 K and 16 T is shown in Fig. 2d and e. Here, the ratio between the average resistance of the R(φ) and the normal state resistance (Ravg/RN) is used as an independent variable for an intuitive comparison. Figure 2f shows ΔRC4 as a function of Ravg/RN under different magnetic fields, which exhibits similar parabolic behaviors with maxima approximately around 50% of RN. Differently, ΔRC2 decreases monotonically with increasing Ravg/RN, as shown in Fig. 2g. With increasing magnetic field, ΔRC4 shows a subtle decreasing tendency, while ΔRC2 monotonically increases, exhibiting a magnetic field-mediated competition between ΔRC4 and ΔRC2. The parabolic Ravg/RN dependence of ΔRC4 can be understood as the resistance anisotropy due to the superconductivity anisotropy becoming smaller when approaching either the superconducting zero-resistance state or the normal state. However, the monotonic Ravg/RN dependence of ΔRC2 cannot be explained by such a scenario, suggesting a different origin of the C2 symmetry. Generally, the observation of spontaneous rotational symmetry breaking in R(φ) curves would indicate the existence of nematicity 31,40. However, in our measurements, the C2 component has a relatively small weight in the anisotropy of the R(φ) compared with the C4 symmetry (<12%), inconsistent with previous results of nematic superconductivity with a primary C2 feature 32. To understand the origin of the C2 anisotropy, we recall the RIXS-detected charge order in the nickelates, which is along the Ni-O bond direction and exhibits a competitive relationship with the superconductivity 23-25. Considering that our C2 feature is along the Ni-O bond direction, and the magnetic field-mediated competition between ΔRC4 and ΔRC2, our observation of the C2 anisotropy might result from the charge order in the Nd0.8Sr0.2NiO2 thin films.
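The component extraction quoted above amounts to a least-squares fit with the trigonometric model given in the caption of Fig. 2; a minimal sketch (ours, with synthetic data and no phase offsets, which the actual analysis may include) follows.

```python
# Minimal sketch: decompose R(phi) into R_avg, C4, and C2 parts with
# R = R_avg + dR_C4 * sin(4 phi) + dR_C2 * sin(2 phi).
import numpy as np
from scipy.optimize import curve_fit

def r_model(phi_deg, R_avg, dR_C4, dR_C2):
    phi = np.deg2rad(phi_deg)
    return R_avg + dR_C4 * np.sin(4.0 * phi) + dR_C2 * np.sin(2.0 * phi)

# Synthetic check: R_avg = 50 Ohm, dR_C4 = 5 Ohm, dR_C2 = 1 Ohm
rng = np.random.default_rng(0)
phi_deg = np.linspace(0.0, 360.0, 73)
R = r_model(phi_deg, 50.0, 5.0, 1.0) + rng.normal(0.0, 0.1, phi_deg.size)
popt, _ = curve_fit(r_model, phi_deg, R, p0=[40.0, 1.0, 0.5])
print(popt)                                # ~[50.0, 5.0, 1.0]
```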
The magnetic field suppresses the superconductivity and may alter the competition between the anisotropic superconductivity with C4 symmetry and the charge order fluctuations with C2 symmetry in our Nd0.8Sr0.2NiO2, leading to the monotonic magnetic field-dependent decrease of ΔRC4 and increase of ΔRC2 in our observations (Fig. 2g). As previously reported, Sr doping dramatically lowers the onset temperature of the charge order in the nickelates 23, which may explain the occurrence of the charge order in our Nd0.8Sr0.2NiO2 only at low temperatures. In addition, since the C2 feature further breaks the rotational symmetry, our observation favors a stripe-like charge order in the nickelates, which is supported by theoretical proposals 18,19, although further investigations are still needed. The phase diagram demonstrates an evolution of the superconducting states manifesting different rotational symmetries, depending on the external magnetic field.
Specifically, in the grey region labeled as ~isotropy (B < 5 T), the Bc∥,0⁰ 50%(T) and Bc∥,45⁰ 50%(T) curves overlap with each other and, consistently, the R(φ) curves are nearly isotropic within our measurement resolution, indicating isotropic superconductivity (Fig. 3e). Under magnetic fields from 5 T to 11.5 T (the blue region labeled as C4), the R(φ) curves exhibit C4 rotational symmetric anisotropy (Fig. 3d), which should be ascribed to the superconductivity of the Nd0.8Sr0.2NiO2 films since it disappears in the normal state (Fig. 2c). Simultaneously, the Bc∥,0⁰ 50%(T) and Bc∥,45⁰ 50%(T) curves split in this region, consistent with the emergence of the C4 anisotropy. When the magnetic field is increased above 11.5 T (the pink region labeled as C4+C2), an additional C2 anisotropy is observed in the R(φ) curves as a superimposed modulation on the predominant C4 anisotropy, which breaks the C4 symmetry (Fig. 3b and c). At the same time, the Bc∥,0⁰ 50%(T) curve shows an anomalous upturn at low temperatures above 11.5 T, deviating from the saturating Bc expected at low temperatures for a conventional superconducting state. The simultaneous occurrence of the rotational symmetry breaking in the R(φ) curves and the enhanced superconducting critical field behavior strongly indicates the emergence of an exotic state. The superconducting phase diagram may reveal two phase transitions characterized by spontaneous rotational symmetry breakings. The first transition occurs at approximately 5 T, indicated by the change from isotropic superconductivity to C4 anisotropy. Considering that the R(φ) curves show the symmetry of the in-plane critical field and could reflect the superconducting pairing 37, the first rotational symmetry breaking may suggest a transition from s-wave to d-wave superconductivity. This transition is reminiscent of the theoretical phase diagram hosting s-, d-, and (d+is)-wave superconductivity for nickelates with varying parameters 14. Experimentally, an s- and d-wave mixture has been reported by a previous STM study 15. The second transition, with the rotational symmetry turning from C4 to C4+C2, takes place around 11.5 T. Also, the second transition is accompanied by an anomalous upturn of the in-plane critical field, unveiling the emergence of a previously unexplored electronic state. As discussed above, we speculatively ascribe the second transition to the emergence of the charge order in the Nd0.8Sr0.2NiO2 films. It is normally believed that long-range charge order disfavors superconductivity 18,22. In our Nd0.8Sr0.2NiO2 films, with increasing magnetic field, the superconductivity gets suppressed and the stripe charge order fluctuation with short-range correlation emerges, accounting for the relatively small C2 symmetric anisotropy in the R(φ) curves. The coexistence of ordered phases with different broken symmetries is relatively rare, and the intertwined orders could give rise to unexpected quantum phenomena and a more complex phase diagram. In our Nd0.8Sr0.2NiO2 thin films, the short-range stripe order coupled to the superconducting condensate may induce a secondary PDW state, in which the superconducting order parameter is oscillatory in space 22,41,42. Through pairing in the presence of the periodic potential of the charge order, the Cooper pairs gain finite center-of-mass momenta 42, which is also a signature of the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) phase that features an upturn of the critical magnetic field at low temperatures 43,44, resembling our observations. These phases were not expected previously in the Nd0.8Sr0.2NiO2 systems. Our findings suggest that the nickelates would be a potential option to explore these exotic states.
Our experimental observations have important implications for current theoretical debates. The isotropic superconductivity requires a primary pairing mechanism that is not expected in optimally hole-doped superconducting cuprates. The successive phase transitions reveal a subtle balance between several competing interactions that are unique to the infinite-layer nickelates and also cannot be explained as in the cuprates. In the Mott-Kondo scenario, the phase transition may be attributed to the competition between the Kondo coupling and the AFM spin superexchange coupling 14. The Kondo coupling can produce local spin fluctuations that support isotropic s-wave pairing 45, while the AFM coupling favors d-wave pairing. In Nd1-xSrxNiO2, the superconductivity emerges around the border of the Kondo regime 46. Therefore, applying a magnetic field may suppress the local spin fluctuations, tilt the balance towards the AFM correlation, and thus induce a secondary d-wave component, which explains the emergent C4 symmetry by either d- or (d+is)-wave pairing 14. The charge order also competes with the AFM correlation 19 as well as the superconductivity. Previous experiments have shown that the charge order phase boundary may even penetrate into the superconducting dome.
Further increasing the magnetic field may reduce both the superconductivity and the AFM correlation, thus promoting the charge order and leading to the observed C2 symmetry. The interplay of the Kondo effect, the AFM correlation, the superconductivity, and the charge order provides a potential playground for novel correlated phenomena, which is well beyond a simple scenario. Any candidate theory should conform to all of these experimental observations.
We further confirm that the C2 anisotropic feature is not due to a trivial misalignment between the sample plane and the magnetic field. In the large magnetic field regime, the polar angular dependent anisotropy is weak, because the anisotropy ratio of the in-plane critical field to the out-of-plane critical field (Γ = Bc∥/Bc⊥) becomes smaller with increasing field and gradually approaches 1 (see Fig. 1d in the main text). Thus, the influence of a misaligned magnetic field is negligible in the large magnetic field regime. The C4 symmetry with a smaller magnitude observed in the normal state of the Nd0.8Sr0.2NiO2 can be ascribed to the magnetic moment of the rare-earth Nd3+. Magnetic order-induced C4 symmetric R(φ) curves have also been reported in the cuprates previously 36, observed in the normal state with a magnitude of 0.05%, consistent with our Nd3+-related C4 anisotropy in the normal state. However, such a phenomenon cannot explain the C4 anisotropy measured in the superconducting transition region, which shows different orientations (C4 exhibits minima at 0⁰, 90⁰, 180⁰ and 270⁰ in the superconducting region, instead of 45⁰, 135⁰, 225⁰ and 315⁰ in the normal state) and a much stronger magnitude (two orders of magnitude larger).
Fig. 1 | Structure and the quasi-two-dimensional superconductivity in Nd0.8Sr0.2NiO2. a, Crystal structure of the infinite-layer nickelate Nd0.8Sr0.2NiO2. b, Temperature dependence of the resistance R(T) at zero magnetic field from 2 K to 300 K. The inset shows the R(T) curves below 20 K at 0 T (black circles), B∥ = 16 T (red circles), and B⊥ = 16 T (purple circles). Here, B∥ is applied along the a/b-axis and B⊥ along the c-axis. The blue solid line represents the BKT transition fitting using the Halperin-Nelson equation. c, Schematic image and optical photo (inset) of the Corbino-disk configuration for polar (θ) angular dependent magnetoresistance R(θ) measurements on the Nd0.8Sr0.2NiO2 thin film. Here, θ represents the angle between the magnetic field and the c-axis of the Nd0.8Sr0.2NiO2. d, Temperature dependence of the critical magnetic field Bc(T) for the magnetic fields along the c-axis (denoted as ⊥), the a/b-axis (∥, 0⁰), and the ab diagonal direction (∥, 45⁰). Here, Bc is defined as the magnetic field required to reach 50% of the normal state resistance.
Fig. 2 | Azimuthal (φ) angular dependence of the magnetoresistance in Nd0.8Sr0.2NiO2. a, Schematic of the Corbino-disk device for azimuthal (φ) angular dependent magnetoresistance measurements. Here, φ represents the angle between the magnetic field and the a/b-axis of the Nd0.8Sr0.2NiO2. b, c, Azimuthal angle dependence of the magnetoresistance R(φ) at different temperatures under B = 8 T in the polar plot (b) and rectangular plot (c). d, e, Azimuthal angle dependence of the magnetoresistance R(φ) at T = 2 K under B = 16 T in the polar plot (d) and rectangular plot (e). Here, the logarithmic scale is used on the resistance axis to specifically demonstrate the C2 symmetric feature. The blue solid lines are fits with the trigonometric function R = Ravg + ΔRC4×sin(4φ) + ΔRC2×sin(2φ), where Ravg is the averaged magnetoresistance and ΔRC4 and ΔRC2 are the C4 and C2 components, respectively. The light blue area in (d) is a guide to the eye, representing the C2 anisotropy. f, g, Four-fold components ΔRC4 (f) and two-fold components ΔRC2 (g) versus the ratio between the averaged magnetoresistance and the normal state resistance (Ravg/RN) under different magnetic fields. Here, the values of the C2 and C4 components are extracted by the trigonometric function fitting.
The deduced d-wave pairing in the Nd0.8Sr0.2NiO2 thin films cannot be definitively determined by our transport measurements alone and requires further experimental investigation (e.g., phase-sensitive measurements). Remarkably, with further increasing magnetic field, additional two-fold (C2) symmetric signals are observed as small modulations superimposed on the primary C4 symmetry in the R(φ) curves. The representative R(φ) curve under 2 K and 16 T is shown in the polar and rectangular plots in Fig. 2d and e, respectively. In addition to the predominant C4 symmetric R(φ), an additional C2 signal can be clearly discerned from R(φ = 0⁰ and 180⁰) being smaller than R(φ = 90⁰ and 270⁰), indicating rotational symmetry breaking between the a-axis and the b-axis. The elaborated experiments and analyses discussed above exclude possible extrinsic origins of this C2 feature.
Fig. 3 | B versus T phase diagram for the Nd0.8Sr0.2NiO2. a, B versus T phase diagram for in-plane magnetic fields. The white region above Bc∥,0⁰ onset(T) (open circles) represents the normal state, and the dark blue region below Bc∥,0⁰ 1%(T) (open triangles) denotes the zero-resistance state (defined by R < 1% of RN). Between Bc∥,0⁰ onset(T) and Bc∥,0⁰ 1%(T) is the superconducting transition region, which is separated into ~isotropy, C4, and C4+C2 regions from the small magnetic field to the large magnetic field regime.
Fig. S1. R(T) and R(H) curves for the determination of Bc. R(T) under different magnetic fields applied along the a/b-axis (a), the ab diagonal direction (b), and the c-axis (c). d, R(B) at different θ angles at 6 K. The Bc∥,0⁰(T), Bc∥,45⁰(T), Bc⊥(T) and
Fig. S2. R(θ) curves showing the quasi-2D anisotropy. Polar angular dependence of the magnetoresistance R(θ) at different temperatures under 2 T (a and b), 8 T (c and d), 12 T (e and f), and 16 T (g and h). Here, θ represents the angle between the magnetic field and the c-axis of the Nd0.8Sr0.2NiO2. The left panels show the rectangular plots and the right panels show the corresponding polar plots.
Fig. S3. Remounted measurements after an in-plane rotation Δφ. a, Schematic of the sample remounting after an in-plane rotation. b, c, Comparison between the initial results R(φ) and the remeasured results R(φ+Δφ1) after an in-plane rotation Δφ1 ~ 13⁰. The R(φ) and R(φ+Δφ1) curves under 16 T at 2 K (b) and 3 K (c) show the C4+C2 anisotropy. The R(φ+Δφ1) curves are shifted by 13⁰ and 1.6 Ω for comparison, and nicely overlap with the initial R(φ) curves, confirming that the C4+C2 anisotropy is intrinsic.
Fig. S4. Exclusion of the possibility that the C2 features are due to misalignment. a, R(φ) under 16 T at 2 K, and the fit with the trigonometric function R = Ravg + ΔRC4×sin(4φ) + ΔRC2×sin(2φ), where Ravg is the averaged magnetoresistance and ΔRC4 and ΔRC2 are the C4 and C2 components, respectively (red solid line). b and e, R(φ) under 8 T at 2 K showing the misalignment-induced two-sharp-dips feature, and
Fig. S6. R(φ) curves at different temperatures and magnetic fields. R(φ) curves in the polar plots at different temperatures and magnetic fields from 1 T to 16 T (a to p), showing the evolution of the rotational symmetry from isotropic to C4 symmetric and then to C4+C2 anisotropic. The temperatures and magnetic fields are labeled in the corresponding plots. The blue areas are guides to the eye, representing the C2 anisotropy.
Fig. S7. Fits of the C4+C2 symmetric R(φ) curves at different temperatures and magnetic fields. R(φ) curves and the corresponding trigonometric function fits at different temperatures and magnetic fields from 12 T to 16 T (a to e). Each R(φ) curve is fitted by the trigonometric function R = Ravg + ΔRC4×sin(4φ) + ΔRC2×sin(2φ) to extract the C2 component (ΔRC2) and the C4 component (ΔRC4). Here, the logarithmic scale is used on the resistance axis in (c to e) to specifically demonstrate the C2 symmetric feature.
Fig. S8. C4 anisotropy in the R(φ) curves of the superconducting state and the normal state. a, Rectangular plot of the R(φ) curve measured at 2 K and 16 T, where the Nd0.8Sr0.2NiO2 is in the superconducting transition, showing C4+C2 anisotropy. b and e, Rectangular plot (b) and polar plot (e) of the R(φ) curves measured at 11 K and 16 T, where the Nd0.8Sr0.2NiO2 is in the normal state, showing C4 and C2 anisotropy with different orientations and a magnitude two orders of magnitude smaller compared with (a). c and f, Rectangular plot (c) and polar plot (f) of the R(φ) curves measured at 20 K and 16 T, where the Nd0.8Sr0.2NiO2 is in the normal state, showing C4 anisotropy. d, R(T) curves under 0 T and 16 T. Three arrows indicate the corresponding temperatures where (a), (b), and (c) are measured. The R(φ) curves shown here are measured with a current of 5 μA.
Fig. S10. Reproducible critical field behaviors in sample S2. Temperature dependence of the critical magnetic field measured along the a/b-axis (a), the ab diagonal direction (b), and the c-axis (c). d, Comparison between Bc∥,0⁰ 50%(T) and Bc∥,45⁰ 50%(T). e, Comparison between Bc∥,0⁰ 50%(T) and Bc⊥ 50%(T). The blue and orange solid lines are the 2D G-L fittings of the Bc(T) data near Tc. f, Bc at different θ angles at 6 K. The red solid line and blue solid line represent the theoretical fitting curves obtained by the 2D Tinkham model and the 3D anisotropic mass model, respectively.
Fig. S11. Reproducible quasi-2D anisotropy in sample S2. Polar angular dependence of the magnetoresistance R(θ) for sample S2 at different temperatures under 2 T (a and b), 8 T (c and d), 12 T (e and f), and 16 T (g and h). The left panels show the rectangular plots and the right panels show the corresponding polar plots.
Fig. S12. Reproducible R(φ) behaviors in sample S2. Azimuthal angular dependence of the magnetoresistance R(φ) for sample S2 at different temperatures under 4 T (a), 12 T (b), and 16 T (c) in the polar plots, showing near isotropy, C4 anisotropy, and C4+C2 anisotropy, respectively. The R(φ) curve at 2 K and 16 T is further plotted in (d), where the C2 anisotropy can be better resolved. The blue area is a guide to the eye, representing the C2 anisotropy.
Fig. S13. Reproducible C2 anisotropy superimposed on the C4 symmetry in more samples. R(φ) at different temperatures under 16 T from samples S2 (a), S3 (b), and S4 (c), showing the reproducible C4 and C2 symmetric features. The C4 anisotropy is manifested as four minima at 0⁰, 90⁰, 180⁰, and 270⁰ (maxima at 45⁰, 135⁰, 225⁰, and 315⁰) (a, b, and c). The C2 anisotropy is manifested as R(0⁰) being larger than R(90⁰) (a) or R(0⁰) being smaller than R(90⁰) (b and c). The observations are consistent with the results of S1 shown in the main text.
Fas Resistance of Leukemic Eosinophils Is Due to Activation of NF-κB by Fas Ligation
TNF family receptors can lead to the activation of NF-κB and this can be a prosurvival signal in some cells. Although activation of NF-κB by ligation of Fas (CD95/Apo-1), a member of the TNFR family, has been observed in a few studies, Fas-mediated NF-κB activation has not previously been shown to protect cells from apoptosis. We examined the Fas-induced NF-κB activation and its antiapoptotic effects in a leukemic eosinophil cell line, AML14.3D10, an AML14 subline resistant to Fas-mediated apoptosis. EMSA and supershift assays showed that agonist anti-Fas (CH11) induced nuclear translocation of NF-κB heterodimer p65(RelA)/p50 in these cells in both a time- and dose-dependent fashion. The influence of NF-κB on the induction of apoptosis was studied using pharmacological proteasome inhibitors and an inhibitor of IκBα phosphorylation to block IκBα dissociation and degradation. These inhibitors at least partially inhibited NF-κB activation and augmented CH11-induced cell death. Stable transfection and overexpression of IκBα in 3D10 cells inhibited CH11-induced NF-κB activation and completely abrogated Fas resistance. Increases in caspase-8 and caspase-3 cleavage induced by CH11 and in consequent apoptotic killing were observed in these cells. Furthermore, while Fas-stimulation of resistant control 3D10 cells led to increases in the antiapoptotic proteins cellular inhibitor of apoptosis protein-1 and X-linked inhibitor of apoptosis protein, Fas-induced apoptosis in IκBα-overexpressing cells led to the down-modulation of both of these proteins, as well as that of the Bcl-2 family protein, Bcl-xL. These data suggest that the resistance of these leukemic eosinophils to Fas-mediated killing is due to induced NF-κB activation.
Ligation of death receptors of the TNFR family can initiate signaling pathways leading to cell death or cell survival. Although TNF itself was named for its ability to induce cell death, it has been known for several years that TNF-α stimulation also can induce activation of the transcription factor NF-κB (reviewed in Ref. 1). Although activation of NF-κB has been thought to be part of the apoptotic induction, recent evidence suggests that in most circumstances, NF-κB activation is a prosurvival response. Many normal cells are not killed by TNF and this may be related to NF-κB activation; blockade of NF-κB sensitizes cells to TNF and augments induced apoptotic cell death (2-4). Fas (CD95/Apo-1) is a member of the TNFR family of proteins, but the ability of Fas to activate NF-κB has been variably reported (5, 6) and dissociated from any protective effect (6-10). Indeed, death occurs in these cell types despite the activation of NF-κB (9, 10), and inhibition of NF-κB has little effect on apoptosis (7). In leukemic or neoplastic cells, resistance to Fas has been attributed to deficiencies in constituents of the Fas pathway, including decreased surface expression of Fas (11), or to the presence of increased levels of antiapoptotic proteins such as Fas-associated phosphatase-1 (FAP-1) (12) or certain members of the Bcl-2 family (13). However, to our knowledge Fas resistance in such cells has not been directly linked to activation of NF-κB induced by Fas itself.
The prototypical NF-κB is composed of a p65 (RelA) and a p50 subunit and is sequestered in the cytoplasm as an inactive form bound to IκB inhibitory proteins, particularly IκBα (1). Upon stimulation by a variety of extracellular agents, IκBα is phosphorylated at serines 32 and 36, leading to its polyubiquitination; this in turn leads to recognition and degradation of IκB by the 26S proteasome (14-19). The dissociation of IκB exposes the nuclear localization sequence of NF-κB, and it is transported into the nucleus where it can activate expression of a wide variety of genes (15-19). Notably, two protein families contain NF-κB-inducible, antiapoptotic family members: the inhibitor of apoptosis proteins (IAPs) and the Bcl-2 family (20-23). NF-κB-mediated regulation of the prosurvival Bcl-2 protein Bcl-xL, for example, has been shown to be important in survival signaling in both B (21) and T lymphocytes (23). Recent studies have shown that NF-κB target gene products of the IAP family can inhibit the proteolytic activities of caspases (reviewed in Ref. 10) and can prevent apoptosis induced by Fas ligation (24). Recently, overexpression of IκBα in endothelial cells suppressed expression of iap genes and sensitized these cells to TNF-α-induced apoptosis (25). In this study, we show that ligation of the Fas receptors on the eosinophilic cell line AML14.3D10 (hereafter referred to as "3D10" cells) induced a distinctive pattern of activation of NF-κB, and that these cells were resistant to Fas-mediated killing. Pharmacologic blockade of NF-κB activation or overexpression of the physiologic NF-κB inhibitor protein IκBα abrogated the Fas resistance of the 3D10 cells. In IκBα-overexpressing cells, both caspase-8 and caspase-3 were activated following anti-Fas treatment and nearly all the cells were killed, while in vector controls there was little or no caspase activation and the cells remained resistant to Fas ligation. This suggests that NF-κB activation is critical for protection of the 3D10 eosinophils from Fas-mediated apoptosis and that in these cells Fas itself induces NF-κB activation.
Cell culture
Eosinophilic AML14 cell lines, the parental AML14 line and the AML14.3D10 subline, were generated and kindly provided by Drs. C. Paul and M. Baumann (Wright State University, Dayton, OH) (26, 27). The parental AML14 cell line (pAML) was established from a patient with FAB M2 acute myeloid leukemia (26). The 3D10 subline was isolated originally as a line with an advanced eosinophil phenotype and a doubling time of 48 h without cytokine supplementation (27). Cells were maintained in RPMI 1640 medium supplemented with 8% FBS, 2 mM L-glutamine, 1% (w/v) gentamicin, 10 mM sodium pyruvate, 1 mM HEPES, and 5 × 10⁻⁵ M 2-ME (Sigma-Aldrich, St. Louis, MO). Cells were grown to a maximum density of 1 × 10⁶ cells/ml at 37°C, 5% CO₂ and were passaged twice a week. After ~40 passages, fresh cultures were started from frozen stocks to minimize genetic drift and phenotypic changes.
Induction of apoptosis
Fas-induced cell death was determined by both trypan blue exclusion assay and flow cytometric analyses. Briefly, AML14.3D10 cells were seeded at a density of 3 × 10⁵ cells/ml/well in 48-well cell culture plates. The mouse IgM monoclonal anti-Fas Ab, CH11, was used primarily at a range of 100 ng/ml to 1.0 μg/ml. TRAIL was added to the cells at final concentrations of 50-100 ng/ml. Each cell sample was divided for trypan blue exclusion assays and for standard propidium iodide (PI) DNA analyses after 24, 48, and 72 h. Total cell death was determined by trypan blue (0.2%) exclusion using a conventional light microscope. The remaining cells were centrifuged at 200 × g for 10 min and resuspended in hypotonic PI solution (50 μg/ml PI in 0.1% Na citrate, 0.1% Triton X-100). To ensure cell lysis, cells were stored overnight in the dark at 4°C before flow cytometric analysis. At least 5000 nuclei were examined for each sample to determine the percentage of sub-G1 DNA content. In preliminary experiments, hypotonic PI analyses of cell samples closely correlated with other DNA fragmentation and morphologic criteria of apoptosis. Percentages (±SE) of cell death or survival (as percentage of viability) reported in the results are derived from the flow cytometric analyses.
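The flow cytometric readout described above reduces to simple arithmetic on event counts. A minimal sketch, assuming hypothetical triplicate counts (the replicate numbers and values below are illustrative, not from the study):

```python
import statistics

def subg1_percent(subg1_events, total_nuclei):
    """Percentage of nuclei with sub-G1 (fragmented) DNA content."""
    return 100.0 * subg1_events / total_nuclei

# Hypothetical triplicates; >=5000 nuclei per sample, as in the protocol.
replicates = [(612, 5103), (587, 5240), (655, 5011)]
percents = [subg1_percent(s, t) for s, t in replicates]

mean = statistics.mean(percents)
sem = statistics.stdev(percents) / len(percents) ** 0.5  # standard error
print(f"apoptosis: {mean:.1f} +/- {sem:.1f}% (mean +/- SE, n={len(percents)})")
```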
Western blot analysis
Equal amounts of protein were separated by SDS-PAGE mini-gel electrophoresis and transferred onto nitrocellulose membrane (0.2-μm pore size; Sigma-Aldrich) using a semidry electrophoretic transfer system (Bio-Rad, Hercules, CA). Blots were stained with Ponceau S to check the quality of the protein and the transfer efficiency. Immunoblotting was performed according to the ECL Western blotting protocol (Amersham Pharmacia Biotech, Arlington Heights, IL). Briefly, blots were blocked in 5% nonfat dry milk in 1× TBS-Tween solution for 1 h, followed by a 1-h incubation with the appropriate primary Ab. Blots were then washed for 30 min with four changes of 1× TBS-Tween solution, followed by a 1-h incubation with the appropriate HRP-conjugated secondary Ab. Blots were washed again and incubated for 1 min with ECL detection reagents. The results were visualized by exposing blots to autoradiographic film (Kodak, Rochester, NY).
Pharmacologic inhibition of NF-κB activation
AML14.3D10 cells were cultured at a density of 1 × 10⁶/ml and were preincubated for 1 h with the IκBα phosphorylation inhibitor BAY 11-7085 (29), or the proteasome inhibitors LC (30), MG132 (31), or PSI (32), at a range of concentrations (0.1-20 μM) before addition of TNF family ligands or Abs. Optimal doses, at which augmentation of Fas-mediated killing was greatest with the least background toxicity of the inhibitors alone, were determined and used in certain experiments as described.
Extraction of nuclear protein
Cells were passaged and grown overnight at ~7 × 10⁵ cells/ml in cell culture flasks. After the treatments, nuclear extracts were prepared according to a published method (33) with some modifications. Unless indicated otherwise, all procedures were performed at 4°C. Briefly, 10 × 10⁶ cells were harvested by centrifugation and washed twice with ice-cold Dulbecco's PBS buffer. The pellet was resuspended in 4× packed cell volume of buffer A (10 mM HEPES (pH 7.9), 1.5 mM MgCl₂, 10 mM KCl, 0.5 mM PMSF, 0.5 mM DTT) and incubated on ice for 10 min. The supernatant was discarded after centrifugation at 1300 rpm for 7 min, and 1× original packed cell volume of buffer A was added. The cell suspension was transferred to a 50-ml Oak Ridge centrifuge tube and centrifuged at 8500 rpm for 20 min in a Beckman JS 13.1 rotor (Beckman Instruments, Palo Alto, CA). The supernatant was removed and set aside as the cytoplasmic extract. The pellet was gently washed with buffer A an additional time and resuspended in 1× original packed cell volume of buffer C (20 mM HEPES (pH 7.9), 25% glycerol, 1.5 mM MgCl₂, 420 mM KCl, 0.2 mM EDTA, 0.5 mM PMSF, 0.5 mM DTT). The suspension was stirred on a rocking platform for 30 min and then centrifuged in a Beckman JA-17 rotor (Beckman Instruments) at 12,500 rpm for 30 min. The supernatant was collected without disturbing the pellet and placed in dialysis tubing (Life Technologies, Grand Island, NY). Dialysis was performed for 1 h against three changes of 200 ml of buffer D (20 mM HEPES (pH 7.9), 20% glycerol, 100 mM KCl, 0.2 mM EDTA, 0.5 mM PMSF, 0.5 mM DTT). Following dialysis, the nuclear extract was clarified by centrifugation at 14,000 rpm for 20 min in an Eppendorf microcentrifuge tube (Brinkmann Instruments, Westbury, NY). Protease inhibitors, including leupeptin, antipain, chymostatin, and pepstatin A (Sigma-Aldrich), were added immediately (5 μg/ml each) to extracts before storing them at −80°C.
EMSA
The details of the EMSA have been described elsewhere (34). The procedure was performed with some modifications. Double-stranded NF-κB synthetic oligonucleotides, 5′-AGT TGA GGG GAC TTT CCC AGG C-3′, were purchased from Promega (Madison, WI) and end-labeled with [γ-³²P]ATP (Amersham Pharmacia Biotech) and T4 polynucleotide kinase (NEB, Beverly, MA). A 200-fold excess of unlabeled NF-κB probe and unrelated oligonucleotide probes for CArG was used to assess the specificity of the DNA-binding reaction. Binding reactions were performed on ice in a total volume of 15 μl. DNA probe (2000 cpm, 1-5 fmol) was preincubated for 15 min with 1.5 μl binding buffer (50 mM Tris-HCl (pH 7.5), 20% Ficoll, 375 mM KCl, 5 mM EDTA, 5 mM DTT) and 1 μg poly(dI-dC) (Promega). DNA-protein binding was initiated by adding 4 μg of nuclear extract. A 200-fold excess of "cold" (unlabeled) NF-κB probe was used as a specific competitor. Electrophoresis was performed for 3 h at 100 V in 0.5× Tris-borate-EDTA running buffer in a 4°C cold room. The dried gel was visualized via exposure to high performance autoradiography film. The supershift analyses were performed by incubating the DNA-binding reactions with optimal concentrations (determined previously) of Abs to p65, p50, RelB, and c-Rel for an additional 20 min on ice before electrophoresis.
Luciferase reporter assay
Cells were transfected with the NF-κB transcription reporter plasmid DNA pNF-κB-Luc (Clontech Laboratories) using the GenePORTER transfection kit (Gene Therapy Systems, San Diego, CA), essentially according to the method described in detail below. Transfection efficiency was assessed by cotransfection with pEGFP-C1. At 24 h after transfection, cells were serum starved for 8 h and treated with either TNF-α (5 ng/ml) or anti-Fas Ab CH11 (1 μg/ml). Cells were harvested 16 h after treatment and analyzed for luciferase activity using a luminometer and a luciferase reporter assay kit obtained from BD Biosciences, as previously described (35).
Stable transfection of pCMV-IκBα
Wild-type human IκBα cDNA was cloned into a mammalian expression vector, pCMV, and was used to transfect AML14.3D10 cells using the GenePORTER kit. Briefly, 1 × 10⁶ cells/well were plated on a 12-well tissue culture plate. The pEGFP-C1 plasmid DNA was used as the reporter gene and cotransfected at a ratio of 3:1. After 6 wk of selection for neomycin resistance in medium containing 1200 μg/ml G418 (Sigma-Aldrich), the positively transfected 3D10 cells were examined by FACS. Dead cells were removed by Percoll gradient centrifugation. The positive cells were subsequently maintained in medium supplemented with 400 μg/ml G418. Cell viability and proliferation were carefully monitored for at least 2 mo before the first experiments and during the experimental period. Over this period, the stable transfectants exhibited almost identical viability and proliferative capacity compared with untransfected 3D10 cells.
Results
The TNF superfamily includes TNF-α, Fas ligand, TRAIL, and IL-1 (36), and interactions between TNF family ligands and their receptors are major regulatory mechanisms for hemopoietic cells (37). Specifically, TNF-α, Fas ligand, and TRAIL interactions with their receptors are considered to be death signals to many cells. In this study, we have examined the signaling effects of the agonist anti-Fas Ab CH11, as well as TRAIL, on 3D10 eosinophils. We analyzed caspase-8 and caspase-3 cleavage, cell death or survival, IκBα degradation, and NF-κB nuclear translocation following stimulation of these receptors under control conditions or in the presence of various inhibitors of NF-κB activation. We also determined the effects of Fas stimulation on the expression of the antiapoptotic NF-κB target proteins Bcl-xL, c-FLIP, c-IAP1, c-IAP2, and XIAP. Finally, we examined these events in cells overexpressing the physiologic inhibitor of NF-κB, IκBα.
Caspase-8 and caspase-3 activation by CH11 is suppressed in 3D10 eosinophils
Caspase activation has been described as a critical event in apoptotic cell death. TNF superfamily-induced apoptosis leads to the proteolytic activation of both upstream caspases (e.g., caspase-8) and downstream caspases (e.g., caspase-3) (38). Western blot assays were performed to examine caspase-8 and caspase-3 activation/cleavage induced by CH11 or TRAIL in 3D10 cells. Cells were treated with either 1 μg/ml CH11 or 100 ng/ml TRAIL for 1-24 h. No substantial CH11-induced procaspase-8 or procaspase-3 degradation was observed during this period (Fig. 2A) or during periods extending to 72 h (data not shown). Because these cells are sensitive to TRAIL, and TRAIL ligation has been shown to activate caspases-8 and -3 in sensitive cells (39, 40), we used TRAIL-induced apoptosis of 3D10 cells as a "positive control" to examine cleavage of these caspases. Cleavage of procaspase-8 (both isoforms; Ref. 28) was detected as early as 1 h, and cleavage of procaspase-3 was detectable by 4-6 h after treatment with TRAIL (Fig. 2B). Anti-actin was used to monitor equivalent protein loading in gels as shown (Fig. 2).
NF-κB activation is induced by Fas ligation in 3D10 eosinophils
NF-κB can be an important factor in the suppression of apoptosis in several cell types (2-4, 41-44). We examined NF-κB activation induced by anti-Fas, as well as by TNF-α and TRAIL, in the 3D10 eosinophils used in the apoptosis assays described above. EMSA analyses (Fig. 3) showed that NF-κB is activated in 3D10 cells after a 1-h treatment with 1 μg/ml of CH11, but not after treatment with 100 ng/ml of TRAIL (even after 2 h; data not shown); no translocation of NF-κB is observed in the control IgM-treated group (Fig. 3). TNF-α-induced nuclear translocation of NF-κB in these cells occurred as early as 5 min and reached a peak around 20-30 min (data not shown). Treatment of Fas-susceptible parental AML14 cells did not induce NF-κB translocation above baseline (untreated) levels (data not shown). Luciferase reporter assays, performed as outlined above, routinely demonstrated 3-fold or greater augmentation of NF-κB activity in Fas- (or TNF-α-) stimulated 3D10 cells compared with unstimulated cells or cells treated with IgM control Ab only.
Identification of CH11-induced NF-κB subunits in 3D10 eosinophils
Five members of the NF-κB/Rel family of proteins have been found expressed in mammalian cells. These NF-κB/Rel subunits are p65/RelA, c-Rel, RelB, p105/NF-κB1 (which can be processed to p50), and p100/NF-κB2 (which can be processed to p52; reviewed in Ref. 1). These subunits usually exist as protein dimers, such as the heterodimer p65/p50 or the homodimer p50/p50. We examined the NF-κB subunits in 3D10 cells in supershift assays using Abs specific for p65, p50, RelB, and c-Rel. Nuclear extract was prepared from 3D10 cells treated with 1 μg/ml monoclonal anti-Fas CH11 for 1 h; this treatment induces optimal activation of NF-κB, as described above. These experiments revealed that the p65/p50 heterodimer is the activated form of NF-κB induced by CH11 in 3D10 eosinophils (Fig. 4). Faint bands in some lanes could be shifted by anti-p50 Ab only and could represent endogenous p50/p50 homodimers (Fig. 4, open arrow). However, these bands were inconsistent in their presence and responses in these experiments. The mouse IgM protein (1 μg/ml) again showed no induction of NF-κB activation (Fig. 4).
Activation of NF-κB in 3D10 cells stimulated by CH11 is time- and dose-dependent
To determine details of the regulation of anti-Fas-induced NF-κB activation, we examined the CH11 dose response and the time course of NF-κB nuclear translocation. EMSAs were performed using nuclear extracts prepared from 3D10 cells treated either for 1 h with different doses of CH11 ranging from 0.01-2.0 μg/ml, or with 1 μg/ml CH11 for periods from 10 min to 3 h. The results show that the minimum dose of CH11 required for NF-κB activation at 1 h is 0.1 μg/ml, and no significant increase occurs after treatment with >1 μg/ml (Fig. 5A). The peak of NF-κB translocation was seen at 60 min after treatment with 1 μg/ml CH11 and decreased greatly after 3 h (Fig. 5B).
Blockade of NF-κB activation leads to an increase in CH11-induced apoptosis in 3D10 eosinophils
Comparison of the apoptotic effects of CH11 and TRAIL on 3D10 cells suggests that there is a correlation between resistance to apoptosis and NF-κB activation. Treatment of 3D10 eosinophils with TRAIL leads to apoptosis but not activation of NF-κB (see Figs. 1 and 3). In contrast, 3D10 cells have shown strong resistance to both TNF-α and CH11, both of which induce activation of NF-κB. To further define the role of NF-κB activation in 3D10 cell apoptosis, NF-κB activation was initially inhibited with four pharmacologic inhibitors: BAY 11-7085, LC, MG132, and PSI. A range of inhibitor doses (20, 10, 2.5, 1, or 0.1 μM) was used to pretreat cells for 1 h before adding 1 μg/ml CH11. BAY 11-7085 inhibits IκBα phosphorylation, preventing the degradation of IκBα and the release of the activated form of NF-κB. LC, MG132, and PSI are proteasome inhibitors specific for the 20S and/or 26S proteasome complex that inhibit NF-κB activation by blocking IκB degradation. The dosages of inhibitors were determined by toxicity assays, and inhibitor-alone controls were also performed. Fig. 6A shows the results of analyses of apoptosis of 3D10 cells pretreated with BAY 11-7085 (2.5 μM) before treatment with 1 μg/ml CH11 for 24, 48, or 72 h. Dramatically increased induction of apoptosis was observed at all time points. Similar results were seen for LC, MG132, and PSI, although background killing of cells (with inhibitor only) was somewhat higher with MG132 and PSI (data not shown). EMSAs were performed following treatment with the inhibitors that exhibited the least background toxicity, namely BAY 11-7085 and LC. These results demonstrated that inhibition of NF-κB activation occurs when cells are treated with optimal doses of either 2.5 μM BAY 11-7085 (Fig. 6B) or 1 μM LC (data not shown). Both BAY 11-7085 and LC blocked NF-κB translocation in a dose-dependent manner; the most effective blockade of NF-κB translocation induced by CH11 was observed with either 10-20 μM BAY 11-7085 or 10 μM LC. However, background killing by the inhibitors alone was greater at these latter doses. These observations correlated well with Western blot analyses of IκBα degradation (Fig. 6C), and suggested that some maintenance of IκBα expression (at 2.5 μM and above) was sufficient to inhibit NF-κB-mediated protection. Furthermore, Fas-induced NF-κB activation, as measured by luciferase reporter assay, was almost completely abrogated by BAY 11-7085 treatment (data not shown).
Overexpression of IκBα blocks CH11-induced NF-κB activation and inhibits 3D10 cell resistance to CH11
To more rigorously test the role of NF-κB activation in 3D10 eosinophils, we created the stably transfected cell line AML14.3D10-IκBα by cotransfecting 3D10 cells with pCMV-IκBα and pEGFP-C1. After transfection, cells were selected with G418 as described above, and green fluorescent protein (GFP) expression in the cells was analyzed via flow cytometry. These data showed that ~86% of the cells were positive for GFP as compared with the control cells (Fig. 7). The selected cells were maintained in medium with 400 μg/ml of G418 as described above. The stably transfected pCMV vector control cell line was generated using the same method. Transfection of cells had no significant effect on surface Fas expression (Fig. 8D); IκBα-transfected cells expressed almost identical amounts of Fas as control cells, and vector-only cells remained Fas-resistant. IκBα is the physiological inhibitor of NF-κB, and only upon the phosphorylation and degradation of IκBα can NF-κB p65/p50 translocation occur. Overexpression of IκBα in 3D10 eosinophils inhibited depletion of cytosolic IκBα, as compared with the control group, which showed significant reduction of IκBα after treatment with 1 μg/ml of CH11 for 30 min (Fig. 8A). CH11-induced NF-κB activation was inhibited in IκBα-transfected 3D10 cells (Fig. 8B), while transfection with vector alone had no effect on NF-κB activation by CH11 treatment. Many recent studies have shown that activation of NF-κB can lead to the expression of antiapoptotic proteins such as XIAP, c-IAP1, c-IAP2, survivin, and others (20). These antiapoptotic proteins can inhibit the proteolytic activity of caspases, blocking cascade activation (cleavage) and suppressing apoptosis (20). To examine the apoptotic effect of NF-κB inhibition in IκBα-overexpressing cells, we compared CH11-induced caspase activation and cell death in 3D10 cells transfected with the pCMV control vector with those in pCMV-IκBα-transfected cells after treatment with 1 μg/ml CH11. Western blot analyses revealed that while neither caspase-8 nor caspase-3 activation was observed in the vector control group (Fig. 9A), inhibition of NF-κB activation via IκBα overexpression led to a dramatic increase in cleavage of the proforms of both upstream caspase-8 and downstream caspase-3 (Fig. 9B). Enumeration of viable cells after treatment of these two groups with CH11 (1 μg/ml) for 24, 48, and 72 h correlated closely with caspase activation: <10% of IκBα-overexpressing cells were viable, while >80% of control (vector-only) eosinophils survived CH11 treatment (Fig. 9, C and D).
Fas-induced NF-κB activation leads to selective increases in antiapoptotic proteins
To specifically identify potential targets of NF-κB-mediated protection in Fas-stimulated 3D10 cells, we examined expression of several proteins for 12 h following CH11 treatment of cells stably transfected with the pCMV control vector or with pCMV-IκBα. The results of these analyses are shown in Fig. 10. Western blot analyses consistently showed increases in both c-IAP1 and XIAP, but not in the other potential NF-κB targets, Bcl-xL, c-FLIP, or c-IAP2. Both c-IAP1 and XIAP showed increased levels of expression by 2 or 3 h, and levels above baseline were maintained for most or all of the 12-h period. Interestingly, both of these proteins, along with Bcl-xL, appeared down-regulated or degraded following anti-Fas treatment of IκBα-overexpressing cells (Fig. 10). Both Bcl-xL and XIAP were rapidly down-regulated, with decreased levels evident by 2-3 h, while c-IAP1 decreases appeared somewhat later, at ~6-12 h (Fig. 10).
Discussion
Activation of NF-κB is now an accepted mechanism of protection from apoptosis for some cell types. Inhibition of NF-κB in such cells may lead to increased cell death through a variety of mechanisms. In this study, we present novel data that directly attribute the Fas resistance of 3D10 eosinophils to NF-κB activation resulting from Fas ligation itself, and show that inhibition of the nuclear translocation of p65/p50 negates the Fas resistance of these cells.
Previous studies have demonstrated Fas-mediated NF-κB activation, but have dissociated one from the other. Using SV80 fibroblasts transfected with the cDNA encoding human Fas, Rensing-Ehl et al. (6) first demonstrated that anti-Fas induced NF-κB translocation to the nucleus. However, these cells were Fas-sensitive, and inhibition of NF-κB had no effect on Fas-mediated cell death. Although NF-κB was activated by Fas ligation in resistant human bladder carcinoma T24 cells (7), again contrary to our observations, inhibition of NF-κB did not alter cell resistance or sensitivity. Furthermore, using sensitive Jurkat cells transfected with a CD40-Fas fusion protein (CD40 extracellular domain and Fas intracellular and transmembrane domains), Ponton et al. (7) showed that stimulation of NF-κB binding activity by extracellular Fas ligation was unrelated to Fas sensitivity. Nevertheless, in agreement with our results, both of these studies (6, 7), as well as a recent report of CD40-induced NF-κB regulation of Bcl-2 family proteins (21), implicate the heterodimer p65/p50 as a prominent NF-κB complex in these interactions.
In another recent study, Fas ligation of dissociated cortical neuroblasts was accompanied by nuclear translocation of the RelA/p65 subunit of NF-κB, as detected by immunofluorescence (10). Nevertheless, ligation of Fas killed these cells, and condensed and fragmented apoptotic nuclei also were immunoreactive for p65, directly dissociating Fas-mediated NF-κB activation from protection. Similarly, stimulation of TNFR or Fas on the surface of CEM-C7 T cells led not only to the activation of NF-κB, but to apoptotic death of the cells (9). Most recently, EMSA, as well as microarray analyses of the transcriptional effects of anti-Fas (and TNF-α) induction of HT29 colon carcinoma cells, confirmed activation of NF-κB (p65/p50) by Fas ligation (45). However, again NF-κB induction failed to protect these cells, and both TNF-α and anti-Fas induced cell death. In further contrast to our findings, the latter authors did not observe IκB degradation (for up to 4 h) after stimulation and suggested that anti-Fas treatment led to NF-κB activation through a different mechanism (45). In 3D10 eosinophils stimulated with anti-Fas, IκBα degradation progressed through the 2-h time point (Fig. 9). Thus, it is possible that IκB degradation in response to Fas ligation, and perhaps the protective capacity of Fas-mediated NF-κB activation, varies according to cell type.
Upstream mechanisms of Fas-induced activation of NF-κB are unknown, but receptor-associated proteins generally thought to be involved in NF-κB activation by TNFR family proteins include receptor-interacting proteins and TNFR-associating proteins (46). NF-κB-inducing kinase may link death receptor signaling to the IκB kinases (47). As we have shown in this study, the kinetics of Fas-induced NF-κB activation (Fig. 5) differ substantially from those of TNF-α, where maximum nuclear translocation could be seen at 20-30 min or earlier (data not shown). Our data regarding the kinetics of Fas- and TNF-induced NF-κB activation confirm, in part, those of another direct comparison of Fas and TNF (9), and suggest that pathway intermediates in Fas-induced NF-κB activation may differ from those of TNF or other death receptors. Another molecule that associates with the cytoplasmic region of Fas is FAP-1. Indeed, FAP-1 is the only known molecule that associates with the negative regulatory domain of Fas (48, 49), and it is strongly expressed by 3D10 eosinophils (data not shown). Although the mechanism(s) by which FAP-1 inhibits apoptosis are still unclear, FAP-1 can interact with IκBα and enhance NF-κB activation (50). Current evidence suggests that Tyr-42 phosphorylation of IκBα protects against its inducible degradation (51, 52). Nakai et al. (53) showed that FAP-1 enhanced NF-κB activation via the common neurotrophin receptor in transfected 293T cells. Furthermore, they hypothesized that dephosphorylation of Tyr-42 of IκBα by FAP-1 leads to an increase in the "receptivity" of IκBα for serine phosphorylation and subsequent NF-κB activation (53). We are currently investigating the potential role of FAP-1 in Fas-induced NF-κB activation in our eosinophil systems.
Finally, we are also continuing investigations of the transcriptional targets of Fas-induced NF-κB. Transcriptional profiling recently has suggested that among such genes, at least two (apoptosis inhibitor 2 (c-IAP2) and the cytoprotective manganese superoxide dismutase) are up-regulated by both TNF-α and anti-Fas signaling (44). The former belongs to the IAP family, which has been implicated in suppression of apoptosis induced by a variety of signals (20). These proteins can directly inhibit caspases in vitro, but their in vivo roles are largely undefined. In a study of TNF-induced apoptosis of transfected HT1080 fibrosarcoma cells, Wang et al. (54) found that activation of NF-κB blocked the activation of caspase-8. Furthermore, they demonstrated that c-IAP1 (and c-IAP2) may play roles in blocking the cleavage and activation of both caspase-8 and caspase-3. Both c-IAP1 and c-IAP2 have been shown to bind directly to caspase-3 and -7 and to inhibit their proteolytic activation in a cell-free system (55). Recently, the susceptibility of human enterocytes to Fas-induced apoptosis was attributed to c-IAP1 and -2, and blockade of their synthesis with cycloheximide augmented Fas-mediated killing (56). XIAP also has been shown to be up-regulated by TNFR stimulation and to directly inhibit caspase-3 (and -7) in some cells (57). Our results support these observations, but the in vivo specificity of interactions (i.e., which IAP inhibits which caspase) within this group of caspases and inhibitors is still unclear.
Other antiapoptotic proteins, which may be transcriptional targets of NF-κB but are of controversial relevance in Fas-mediated cell death (58-61), include members of the growing mitochondria-associated Bcl-2 family. NF-κB can directly regulate the expression of prosurvival members such as Bcl-xL, which are required for rescuing certain cell types from apoptosis (21-23). Although Fas-mediated apoptosis of some cells can bypass significant mitochondrial involvement and, thus, the antiapoptotic effects of Bcl-xL, in other cells Bcl-xL can contribute to inactivation of caspase-8 at the mitochondrial surface (62) or inhibit Fas-mediated apoptosis by preventing mitochondrial release of the IAP inhibitor Smac/DIABLO (63). Although we have not observed consistent Fas-mediated up-regulation of Bcl-xL in wild-type 3D10 cells, a clear pattern of degradation or down-regulation was observed in the IκBα-transfected cells, and this could contribute to the augmented death of these cells, as has been previously suggested (64). Finally, it is possible that the combined effect of several NF-κB-regulated proteins may be required for rescue from Fas-mediated apoptosis.
Whether blockade of Fas-induced apoptosis occurs in 3D10 eosinophils through antiapoptotic effects of Bcl-2 proteins, via caspase inhibition, or by some other mechanism(s), it is clear that in these cells NF-κB activation is critical to cell survival following Fas ligation. This may have important implications for therapeutic approaches targeting the apoptotic machinery in both inflammatory diseases and hematological malignancies. Furthermore, these data suggest that the AML14.3D10 cell line may provide a useful model for studying antiapoptotic pathways involving NF-κB activation via TNF family receptor ligation.
"Medicine",
"Biology"
] |
TESTING THE REPEATABILITY OF RESULTS USING THE GNSS-RTK MEASUREMENT METHOD
Stefan Miljković 1 Jelena Gučević 2 Siniša Delčev 3 Vukan Ogrizović 4 Miroslav Kuburić 5 UDK: 528,06 DOI: 10.14415/konferencijaGFS2019.101 Abstract: The paper presents the concept of the AGROS network of permanent stations and describes the procedure for implementing the service for real-time kinematic positioning (AGROS-RTK). Experimental research was conducted with the aim of testing the reliability and accuracy of the AGROS network of permanent stations in the RTK mode. Testing was carried out on a polygon of stable geodetic points. Observations were performed once a year for a period of three years. The results analysis shows that there are differences among the epochs and that the possible causes need to be further examined.
INTRODUCTION
The network of permanent stations (CORS, Continuously Operating Reference Stations) is a set of properly distributed GNSS receivers, operating within a single system, continuously for 24 hours a day. The main task of the system is to enable precise positioning using one GNSS receiver. Reference stations are linked to the control centre that controls their work and distributes the necessary data. Networks are most often formed at the national or regional levels and represent the reference framework of the system in the area [6]. The network of permanent GNSS stations officially in use on the territory of the Republic of Serbia is called the Active Geodetic Reference Network of Serbia (AGROS) and is owned by the Republic Geodetic Authority. It comprises 30 operating GNSS receivers properly distributed over the territory of the country, with an average spacing of about 70 km [4]. The AGROS User Centre offers several services to its users, the most commonly used being corrections for real-time kinematic positioning (AGROS-RTK). Within the AGROS-RTK service, it is possible to carry out all geodetic measurements for the purpose of geodetic surveying, as well as for various engineering projects. The prescribed accuracy of the AGROS system can be achieved only if the prescribed procedure is followed during measurement, and it is therefore of utmost importance to comply with all the measurement instructions. Regarding monitoring of the accuracy and reliability of the results obtained using the AGROS-RTK service, there are no established parameters that can efficiently verify the declared accuracy at any time and place.
RESEARCH OBJECTIVE AND METHODS
The aim of the research is to perform quality control of GNSS measurement results within AGROS-RTK, using the methodology for setting up the geodetic basis. One way to control quality is to repeat the measurements on a stable and reliable polygon in several time epochs. The measurements for setting up the geodetic basis are carried out according to the Professional Guidelines of the Republic Geodetic Authority [5]. All the calculations and tests for quality control purposes should be performed on the WGS84 ellipsoid, in the reference framework of the AGROS network (ETRF2000), in Cartesian orthogonal coordinates. The emphasis is placed on the control of the achieved results of the direct measurements; accordingly, it is not necessary to transform into the plane of the state projection. Quality control of the obtained results is done by comparing several measurement epochs. The measurement plan envisages that, over a duration of 30 seconds with a 1-second observation interval, the final coordinates for each point are obtained from each epoch. The estimation of coordinates for each measurement epoch is made according to the formulas [1]:

X̄ = (1/n) Σ Xᵢ (i = 1, ..., n), σ_X̄ = σ/√n,

where n = 30 is the number of one-second fixes. The significance of the coordinate deviations can be determined by a statistical test of the equality of two values with known standards. The test is conducted with a probability of 95%, assuming that the measurements have a normal distribution [1]. The decision is made based on the hypothesis H₀: X̄_g = X̄_r, which is accepted if

|X̄_g − X̄_r| ≤ q · √(σ_g² + σ_r²),

where: q - quantile of the normal distribution for the probability of 95%; g, r - ordinal numbers of the measurement epochs that are being tested.
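A minimal sketch of the epoch estimate and the equality test in Python, with illustrative coordinates and a priori standards (the function names, sample values, and the 2 cm standard are ours, not from the paper):

```python
import math
from statistics import mean

Q95 = 1.960  # two-sided 95% quantile of the standard normal distribution

def epoch_estimate(fixes):
    """Definitive coordinate of one epoch: mean of the 1-s RTK fixes."""
    return mean(fixes)

def coords_equal(x_g, x_r, sigma_g, sigma_r, q=Q95):
    """Equality test of two values with known standards (95% probability):
    H0: X_g = X_r is accepted when |X_g - X_r| <= q * sqrt(s_g^2 + s_r^2)."""
    return abs(x_g - x_r) <= q * math.hypot(sigma_g, sigma_r)

# Hypothetical X coordinates [m] of one point in two epochs (30 fixes each
# in the real plan; shortened here), with an assumed 2 cm epoch standard.
x1 = epoch_estimate([4176342.112, 4176342.118, 4176342.115])
x2 = epoch_estimate([4176342.181, 4176342.178, 4176342.186])
print(coords_equal(x1, x2, 0.02, 0.02))  # False -> difference is significant
```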
Description of the experiment and the measurements
The polygon where the experiment was carried out is located on the territory of the cadastral municipality of Čajetina (CM Čajetina), on the Zlatibor mountain.
For the purpose of the experiment, all the necessary data were available. Satellite images showing the positions of the points and the research area were downloaded from public Internet portals [7] (Figure 1).
The measurements were conducted using the GNSS (GPS) technology. The real-time kinematic positioning method (RTK) was used. The active geodetic reference network of the Republic of Serbia (AGROS network) was used as the basis.
Results
The estimation of definite coordinates was made for each measurement epoch separately.
The calculations were made in the Cartesian orthogonal coordinates on the WGS84 ellipsoid and in the reference framework of the AGROS network (ETRF2000).
The results of the definite coordinates are shown in Table 1. Differences of the definite coordinates among the epochs are presented in the graph in Figure 2.
Testing the point coordinates' matching
Statistical testing of the definite coordinates' matching among the measurement epochs was performed. The statistical test on the equality of two values with known standards was applied to the results at each measuring point, along all three coordinates. The test was carried out with a probability of 95%, assuming that the measurements had a normal distribution.
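A TRUE/FALSE table like Table 2 can be reproduced by running the same test over every pair of epochs and every coordinate axis; the coordinates and standards below are hypothetical placeholders:

```python
from itertools import combinations

Q95 = 1.960
SIGMA = {"X": 0.02, "Y": 0.02, "Z": 0.03}  # assumed a priori standards [m]

# Hypothetical definitive coordinates [m] of one point in three epochs.
epochs = {
    1: {"X": 4176342.115, "Y": 1437581.442, "Z": 4461327.903},
    2: {"X": 4176342.182, "Y": 1437581.421, "Z": 4461327.951},
    3: {"X": 4176342.098, "Y": 1437581.459, "Z": 4461327.887},
}

for (g, cg), (r, cr) in combinations(epochs.items(), 2):
    for ax in ("X", "Y", "Z"):
        d = abs(cg[ax] - cr[ax])
        equal = d <= Q95 * (2 ** 0.5) * SIGMA[ax]  # equal standards per epoch
        print(f"epochs {g}-{r}, {ax}: |d| = {d * 100:5.1f} cm -> {equal}")
```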
The results of testing the coordinates' differences are shown in Table 2. The entries in Table 2 have the following meaning: 'TRUE' - statistically, in the two tested epochs, the coordinates can be considered equal; 'FALSE' - statistically, in the two tested epochs, the coordinates CANNOT be considered equal.
DISCUSSION AND CONCLUSION
Measurements for this research were carried out in three epochs at time intervals of one year. In each epoch, permanently stabilised points were observed, which remained stable and free from external influences during the realisation of the experiment. The measurements were conducted according to the Professional Guidelines issued by the Republic Geodetic Authority, which stipulate the proper use of the active geodetic basis of the Republic of Serbia. In all the measurement epochs, the same GNSS receiver was used, with a valid calibration certificate. After processing the measurement results and calculating definite coordinates for each measurement epoch, the coordinate differences among the epochs were calculated in all combinations. The differences are shown in Figure 3. Table 3 shows the range of the values of the definite coordinates of the points across all the epochs.
By analysing the results from Table 3 and taking into account the accuracy of the measurements, it was established that the coordinates differ significantly among the epochs. Statistical testing of the equality of coordinates with known standards was carried out for all combinations of epochs. The testing results are presented in Table 2. Based on the obtained results, it can be concluded that none of the geodetic points' coordinates match across all the epochs of observation. The reasons for the significant differences in the coordinates cannot be accurately determined on the basis of such an experiment. All the effects that come from users and equipment were reduced to a minimum by using adequate and calibrated equipment, as well as by strict adherence to the Professional Guidelines for using the AGROS network. Further analysis of the causes could be directed to the quality of the corrections distributed by the AGROS network or the quality of the data obtained from the satellites. Certainly, this could be the subject of further research.
"Materials Science"
] |
Dispersed organic matter characteristics as an indicator of rock alteration degree of the Safyanovskoe copper-sulphide deposit (Middle Urals)
Annotation. Relevance of the study is due to the importance of the composition and maturity of dispersed organic matter (DOM) as indicators of rock-forming conditions, which may contribute to the paleoecological reconstruction of sedimentation conditions for rocks of the ore-bearing stratum of the Safyanovskoe deposit. Objective: to analyze the composition and characteristics of DOM in carbon-siliceous rocks of the ore-bearing stratum of the Safyanovskoe copper-pyrite deposit. The obtained characteristics of the DOM make it possible to reveal the source of its original accumulation and the level of its transformation under the influence of various geological processes. Methods. A detailed study of DOM was carried out by the EPR method (electron paramagnetic resonance) on powder preparations. EPR spectra were recorded at room temperature on X-band spectrometers DX-70, ESR 70-03 DX/2, and SE/X-2547 RadioPAN. The analytical procedure for studying organic matter (OM) included: determination of the insoluble residue and Corg content in the rock, extraction of the chloroform bitumoids (CB), the alcohol-benzene bitumoids (ABB) and humic acids (HA), determination of the group composition of CB and hydrocarbons (HC), chromatographic fractionation with determination of the sum of methane-naphthenic and aromatic fractions of hydrocarbons, and GC-MS analysis of n-alkanes, cyclic and polyaromatic hydrocarbons (PAHs). GC-MS analysis was performed on a Hewlett Packard 6850/5973 complex with a quadrupole mass detector and analytical information processing software. Results. Analysis of the EPR spectra of carbon-siliceous rocks of the Safyanovskoe deposit showed the presence of two types of carbon signal (Corg), characteristic of plant and animal residues. It was also found that the DOM has never been exposed to temperatures greater than 300 °C. Geochemical analysis of the OM indicates that the DOM is at a high maturation stage (residual organic matter (ROM) > 99%), but the type of distribution of polyaromatic hydrocarbons (PAHs) suggests that the primary OM was specifically altered at elevated temperatures. Conclusions. The DOM of the carbon-siliceous rocks of the Safyanovskoe deposit is genetically of the same type, mainly sapropelic, and its accumulation is associated with marine conditions. A characteristic feature is, on the one hand, its high polymerization, which is typical of DOM at the stage of late mesocatagenesis, and, on the other hand, a molecular composition that does not allow us to speak of maturation of the DOM in the course of natural regional metamorphism.
Introduction
The Safyanovskoe deposit is located within the East Urals rise, in the southern part of the Rezhevskaya structural-formational zone (Fig. 1). It is localized in altered volcanic and volcano-sedimentary rocks of acid-to-intermediate composition, which are exposed in the quarry of the Main ore body.
The main ore body of massive pyrite ores has a length of 400 m and a width of up to 140 m. The southern flank of the ore body rapidly tapers out, and the northern flank passes into a series of smaller bodies (the southwestern flank of the Safyanovskoe ore field). Here they are in contact with Devonian amphipora bituminous limestone and serpentinite of the Rezhevsky massif.
The carbon-bearing-siliceous rocks are represented by psephite-psammitic tuffites, crystal- and lithocrystalloclastic, and pelitolitic varieties. They consist of fragments of quartz, plagioclase crystals, angular-rounded fragments of rhyodacites, shells of foraminifera, radiolarians, and accumulations of carbon-bearing organic matter (COM) and carbonates, as well as chlorite, mica, and kaolinite [3]. Sandstones are found in the ore-hosting series and have a virtually identical composition, graded bedding and, sometimes, wavy contacts with the siliceous-carbonaceous rocks (Fig. 2, a).
The composition and characteristics of dispersed organic matter (DOM) are important indicators of rock facies and of the genesis of ore formation. With that in mind, we studied the carbon-bearing-siliceous rocks of the ore-hosting strata of the Safyanovskoe copper-sulfide deposit (Middle Urals).
Materials and methods
Carbon-bearing siliceous rocks of the ore-hosting strata are exposed by mine workings at the contact of limestone and serpentinite (Fig. 2, b). It was determined that in the south-east direction they taper out and grade into carbon-bearing-siliceous rocks (Fig. 2, c). Outside the fault zone, the limestones are microgranular, argillaceous-detrital, and broken by cracks filled with calcite and carbon-bearing-siliceous material. Within the limestone, amphipora interlayers 0.1-0.2 m thick are observed at intervals of 0.5-1.0 m. In order to determine the genesis of the DOM, we studied limestone exposed by the drift and carbon-bearing-siliceous rocks exposed by both the drift and the quarry.
The carbon-siliceous formations exposed by mining, with carbon-bearing veinlets, are black and consist of quartz, chlorite, mica and plagioclase. There are admixtures of calcite, magnesite, siderite, pyrite and COM in the rock matrix. They are identical in mineral composition to the rocks of the overlying formation exposed by the quarry. An earlier thermal analysis of the silica-carbonaceous rock samples, breccias and circum-ore metasomatites showed, in the studied samples, slightly metamorphosed organic matter (OM) of plant origin in amounts up to 6%, which burns at 200-330 °C [6].
The paramagnetic properties of rocks of the Safyanovskoe deposit containing DOM were studied by EPR (electron paramagnetic resonance). Spectra of paramagnetic carbon radicals are relatively simple and have been described for many solid and liquid natural OM varieties, as well as for products of their thermochemical transformations. (Note. The mineral composition of the rocks was determined by X-ray phase analysis on an XRD-7000 diffractometer (Shimadzu) (operator O. L. Galakhova); the organic content was determined by the thermal method (analyst V. G. Petrishcheva). The analyses were carried out in the physical and chemical test laboratory of the Institute of Geology and Geochemistry of the Ural Branch of the Russian Academy of Sciences, Ekaterinburg.)
Samples of carbon-bearing-siliceous rocks from the quarry were studied in the Physics of Minerals laboratory of KFU by A. A. Galeev and in the physical and chemical test laboratory of the Institute of Geology and Geochemistry of the Ural Branch of the Russian Academy of Sciences (IGG UB RAS) by Yu. V. Shchapova. In the Physics of Minerals laboratory of KFU, EPR spectra were recorded from powder aliquots of 20-40 mg of the initial samples and of samples heated at 350 and 600 °C for 30 minutes. The recording was made at room temperature in automatic mode on a DX-70 portable spectrometer with an operating frequency of 9.272 GHz. In the IGG UB RAS laboratory, ESR spectra were recorded on an ESR 70-03 DX/2 spectrometer at room temperature (initial test) and after heating the samples to 300 and 600 °C for 30 min. Studies of limestone and carbon-bearing-siliceous rocks exposed by mine workings were carried out in the laboratory of mineralogy of the Institute of Geology of the Komi Science Centre UB RAS by V. P. Lyutoev on an X-band SE/X-2547 RadioPAN spectrometer. The recording mode was identical. The results are shown in Table 2.
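For orientation, the g-factors quoted below follow from the standard EPR resonance condition g = hν/(μB·B). A minimal sketch, using the 9.272 GHz operating frequency quoted above (the code itself is ours, for illustration only):

```python
H = 6.62607015e-34       # Planck constant [J s]
MU_B = 9.2740100783e-24  # Bohr magneton [J/T]

def g_factor(freq_hz, b_res_t):
    """EPR g-factor from microwave frequency and resonance field."""
    return H * freq_hz / (MU_B * b_res_t)

def resonance_field(freq_hz, g):
    """Resonance field [T] expected for a given g-factor."""
    return H * freq_hz / (MU_B * g)

nu = 9.272e9  # DX-70 operating frequency [Hz]
for g in (2.0031, 2.0027, 2.0026):  # Corg values reported below
    print(f"g = {g}: B_res = {resonance_field(nu, g) * 1e3:.2f} mT")
# g = 2.0031 gives ~330.7 mT, i.e. the usual X-band free-radical region.
```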
Modern geochemical studies aimed at reconstructing facies-genetic conditions commonly use biomarker analysis. The amount of hydrocarbons (HC) in the composition of the DOM and their structural features serve to characterize the genetic type of OM and its degree of catagenetic transformation. (Note: I.R. (ir) - insoluble residue; Achl - chloroform bitumoid; Aalb - alcohol-benzene bitumoid; HA - humic acids; β - bituminosity coefficient; HC - hydrocarbons; Me-Nf - methane-naphthenic fractions of hydrocarbons; Ar - aromatic hydrocarbon fraction; ROM (residual organic matter) - degree of polymerization; CPI - oddity coefficient. Sample weights: D16/12 - 104.5 g; D14/12 - 100 g. The data for sample L.K. IV of the Safyanovsky quarry are given according to [2].)
Earlier geochemical OM analysis of carbon-bearing-siliceous rocks of the Safyanovskoe deposit (height 157 m; quarry depth 50 m) showed that it belongs to the sapropelic type [2]. We have recently carried out a geochemical OM study of rocks exposed by mine workings at a depth of 285 m. The OM component composition is shown in Table 3.
The analytical procedure used in the OM study included determination of the rock insoluble residue and Corg content, extraction of the chloroform (Achl) and alcohol-benzene (Aalb) bitumoids and humic acids (HA), determination of the group composition of the bitumoids and hydrocarbons, chromatographic fractionation, and GC-MS analysis.
Results and discussion
Fig. 3 shows typical ESR spectra of carbon-bearing-siliceous rocks from the Safyanovskoe deposit quarry. Analysis of the spectra showed the presence of two types of Corg signal, specific to plant and animal remains [7]. Both in the initial samples and after heating at 300 °C, the spectra reveal a signal in the Corg region with a broad line (Fig. 4). The signal characteristics after annealing at 300 °C (g ≈ 2.0031 ± 0.0001 and line width ΔB ≈ 0.5-0.7 mT) indicate the presence of OM plant residues metamorphosed under relatively low-temperature conditions (not exceeding 300 °C) [8]. After heating to 600 °C, a Corg signal with g ≈ 2.0027 ± 0.0001 appeared as a narrow line (ΔB = 0.15-0.2 mT). Signals with these parameters are peculiar to OM remains of the protein series [7]. Perhaps this OM formed as a result of the activity of decay microorganisms at the stage of sedimentation and early diagenesis.
Analysis of the EPR spectra of the DOM in rocks exposed by mining revealed its identity to the quarry samples. It was metamorphosed at temperatures not higher than 300 °C (Table 2). The limestone sample SH10/12 revealed both types of Corg signal, which may indicate similar depositional conditions for the carbon-bearing-siliceous formations and the limestone. Judging by the intensity of the spectra, in sample 1346 from the quarry [6] and in sample SH16/12 of carbon-bearing-siliceous rock (Table 2), the concentration of Corg paramagnetic centers is 1.3 × 10¹⁸ spin/g, and in the limestone sample SH11/12 it is 2.6 × 10¹⁸ spin/g (Table 2), which is typical of some types of coal [8]. Scanning electron microscope images of carbonaceous streaks in the siliceous-carbonaceous rock, obtained using a REMMA-202M microscope (operator E. V. Nuzhdin), show a structurally uniform black surface with conchoidal fracture, which may be characteristic of vitrinite (Fig. 4). It can be assumed that the carbonaceous material was at a low (brown-coal) stage of metamorphism and is now represented by fusenized (inertinite) and vitrenized (gelified) plant residues, as well as soluble compounds that were originally part of the lipoid microcomponents of resins.
In addition, in the spectra of the carbon-bearing-siliceous rocks (SH16/12 and 1346), the E′ center signal of quartz is observed (Fig. 3). Preservation of this center throughout the history of the rocks also indicates low-temperature transformation of the rock. Additional centers were identified in the EPR spectra of the limestone (sample SH10/12): an SO₂⁻ center (g = 2.005) and an axial CO₂⁻ center (g = 1.999; 2.003), which indicates low-temperature effects on the limestone not exceeding 250 °C.
The EPR spectrum of sample SH15/12 (g = 2.0026, Table 2) is typical of OM of the bitumen series; its g-factor value, line width, and the presence of a weak signal after heating at 600 °C allow it to be attributed to oxykerite. In the value of the g-factor, the oxykerite is similar to some shungites of Karelia (g = 2.0024) [9], which are at the regressive stage of metamorphism.
Chemical analysis of the OM of the carbon-bearing-siliceous rocks of the Safyanovskoe deposit showed that, despite a fairly high content of Corg (4.19%) and chloroform bitumoid (Achl = 0.011%), the DOM is at an advanced stage of transformation (ROM > 99%) [10]. This is consistent with the absence of humic acids [12].
Polyaromatic hydrocarbons. The distribution of PAHs is shown in Fig. 6. A feature of the studied sample is its high content of PAHs: within the aromatic hydrocarbon fraction they represent approximately 65% (MPI = 0.40, the transformation-level index of unsubstituted phenanthrene and its methyl homologues; Fl/202 = 0.25, the share of fluoranthene in the total content of the molecular group 202). The ratio of the bitumoids (Achl/Aalb = 2.9) and the value of β (0.25), much lower than 1 (Table 3), indicate the syngenetic nature of the OM. It is known that β is the main indicator for identifying the genetic type of a bitumoid, i.e., its syngeneity or epigeneity with respect to the host rocks. In the case of a syngenetic bitumoid, its value does not exceed 5-10 [11].
The hydrocarbon content of the COM and OM in the rock is relatively low, but aliphatic compounds dominate their group composition.
Normal alkanes and isoprenoids. Paraffin hydrocarbons are genetic markers that allow the origin of the initial organic material to be determined. The parameters characterizing the distribution of alkane-series hydrocarbons (C17-C19 and C27-C31) make it possible to estimate the fractions of aquatic and terrigenous biota in the formation of the OM of the bottom sediment [12]. The molecular structure of the n-alkanes and isoprenoids indicates a mostly hydrobiont genesis of the original OM, as shown by the ratio of the markers of sapropelic and humic compounds (C17/C29 = 10.88) and by the monomodal n-alkane distribution with an overwhelming dominance of low-molecular-weight compounds (C15-19) (Fig. 5). The oddity coefficient (CPI), close to 1, indicates a high degree of conversion of the n-alkanes and of the OM as a whole. The isoprenoid (pristane and phytane) ratios (Pr/Ph < 2 and Pr/C17 = 0.45) are typical of marine sediment deposition [12, 13]. Terpanes and steranes. The distribution of cyclane hydrocarbons (terpanes, hopanes, cyclanes) in the studied sample also indicates a mainly sapropelic genesis of the OM. This is evidenced by the values of the tricyclane coefficients (C19-20Tric/C23-26Tric; C23-26Tric/C28-31) and by the ratio of norhopane, hopane and C31-homohopane, which can serve as indicators of the origin of the initial organic material [14]. The data presented in Table 4 clearly demonstrate the significant role of "marine" hydrobiont OM in the original organic material.
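The alkane and isoprenoid indices quoted above are simple ratios of GC-MS peak areas. A minimal sketch with hypothetical areas, using one common form of the oddity coefficient (CPI definitions and carbon-number windows vary between authors, so the C16-C24 window here is an assumption):

```python
def cpi(areas, lo=16, hi=24):
    """Oddity coefficient (CPI) of n-alkanes, one common averaged form:
    CPI = 0.5 * (sum(odd)/sum(even, shifted down) + sum(odd)/sum(even, up)).
    `areas` maps carbon number -> peak area; the lo..hi window is assumed."""
    odds = [i for i in range(lo + 1, hi, 2)]  # odd carbon numbers in window
    s_odd = sum(areas.get(i, 0.0) for i in odds)
    s_ev_lo = sum(areas.get(i - 1, 0.0) for i in odds)
    s_ev_hi = sum(areas.get(i + 1, 0.0) for i in odds)
    return 0.5 * (s_odd / s_ev_lo + s_odd / s_ev_hi)

# Hypothetical GC-MS peak areas (carbon number -> area), not measured data.
areas = {15: 9.0, 16: 8.4, 17: 10.9, 18: 9.8, 19: 8.1,
         20: 6.9, 21: 5.2, 22: 4.8, 23: 3.1, 24: 2.7, 29: 1.0}
pristane, phytane = 4.4, 2.6

print(f"CPI     = {cpi(areas):.2f}")             # ~1 -> mature OM
print(f"Pr/Ph   = {pristane / phytane:.2f}")     # <2 -> marine setting
print(f"C17/C29 = {areas[17] / areas[29]:.1f}")  # aquatic vs terrigenous
```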
In geological objects, hopanes are present as a homologous series of C27-C35 compounds. At the stage of sedimentogenesis and early diagenesis, biogenic hopanes (ββ-hopanes) are formed; at the postdiagenetic stage, structural transformation of the hopanes takes place, with formation of αβ- and βα-hopanes. For hopanes of C31-C35 composition, epimers of the R configuration isomerize to S. The absence of biohopanes and moretanes (ββ- and βα-isomers) among the terpanes indicates the postdiagenetic transformation of the DOM. At the same time, the hopane maturity indices of the DOM (Ts/(Ts + Tm), 22S/(22S + 22R) C31) do not allow the maturity of the DOM to be characterized as having reached the thermodynamic limit. This is also supported by the distribution of steranes (Table 4), whose maturity indices are far from the thermodynamic limit and do not exceed values characteristic of the early stage of catagenesis (MK₁).
In the homologous series of steranes (C27-C30), cholestanes (C27) dominate, reflecting the contribution of aquatic organisms to the composition of the initial OM. The ratios of regular and rearranged isomers in the homologous series of steranes also serve to determine the genetic type of the initial DOM. These data agree well with the above-mentioned values for the alkanes and terpanes.
Unlike in conventional sedimentary rocks [12], the PAHs are dominated by unsubstituted condensed polyaromatic compounds with four to seven rings (MM 178, 202, 228, 252, 276, 278). Pyrogenic compounds, whose formation is associated with high-temperature impact on the DOM, constitute 94% of the total PAHs (Fig. 6). Naphthidogenic PAHs are typical of sedimentary rocks, being formed by the transformation of DOM during lithogenesis; however, in this case they are a minor component. This type of PAH distribution suggests that the initial OM underwent specific transformation at elevated temperatures.
The geochemical studies lead to the conclusion that the OM of the carbon-bearing-siliceous rocks of the Safyanovskoe deposit is genetically of the same type, mainly sapropelic, and that its accumulation is related to marine conditions. A characteristic feature is, on the one hand, its polymerization, which is typical of DOM at the stage of late mesocatagenesis. On the other hand, its molecular composition does not indicate maturation of the DOM in the course of natural regional metamorphism. The specific geochemical parameters of the DOM may be due, for example, to the effect of high temperatures associated with intrusive processes (contact metamorphism) or with post-volcanic hydrothermal activity.
As noted above, in some physical and chemical parameters the studied OM is similar to shungite (sample D15/12 [15]). The temperature of metamorphism of the Shungovskoe deposit is estimated at 300-330 °C [16]. However, unlike shungites, the HC of the Safyanovskoe massive sulfide deposit contain aromatic compounds with a high content of PAHs, which implies a lower degree of metamorphic transformation. A geochemical study of the OM of modern sulfide hydrothermal sediments of the Mid-Atlantic Ridge (i.e., in the Ashadze, Lost City, Rainbow and Broken Spur fields) showed its mixed genesis, the specificity of which is due to accelerated OM maturation under extreme environmental conditions [17, 18]. Pyrogenic compounds in the PAH composition were identified in the OM of bottom sediments within the Ashadze-1 hydrothermal field [17]. This pyrolytic component comprises a narrow range of compounds (Fl/202 = 0.7). Given the presence of cyclanes, cheilanthanes, diasteranes and geohopanes, of pyrogenic aromatic hydrocarbons, and of indicators of the maturity of the n-alkanes (CPI ≈ 1), the authors [17] concluded that the accelerated thermocatalytic maturation of OM in the sediments of the Ashadze-1 field under hydrothermal conditions was "expressed in the compaction and polycondensation of geochemically less stable structures." The hydrocarbon distribution in the bottom sediments of the Lost City and Rainbow hydrothermal fields (MAR) is characterized by the presence of even-numbered homologues in the low- and high-molecular-weight portions of the range, as reflected in the CPI values, which average 0.86 and 0.89, respectively [19]. This is a result of reduction processes that occurred during the hydrothermal transformation of the OM. Moreover, the sulfide sediments of the Rainbow and Broken Spur fields contain n-alkanes of abiogenic thermal origin, and sediments of active smokers are characterized by high levels of long-chain n-alkanes (ΣC23-C35 up to 85.7%) [18].
According to the data from our research, the OM of the Safyanovskoe deposit was exposed to low temperatures, not higher than 300 °C; according to the EPR data, it preserved paramagnetic markers of plant and animal origin (sample SH10/12, Table 2), as well as radiation centers in minerals, and for some samples, in particular the limestone, the temperature limit was not higher than 250 °C. Given the data [20] on the forming conditions of barite veins during the formation of the ore facies of the Safyanovskoe deposit, as well as the formation temperature of chlorite in the wall rocks [21], we can assume that wall-rock alteration in the presence of OM-bearing rocks took place in the temperature range from 130 to 260 °C.
"Geology"
] |
Electromagnetically induced transparency with Rydberg atoms across the Breit-Rabi regime
We present experimental results on the influence of magnetic fields and laser polarization on electromagnetically induced transparency (EIT) using Rydberg levels of 87Rb atoms. The measurements are performed in a room temperature vapor cell with two counter-propagating laser beams at 480 nm and 780 nm in a ladder-type energy level scheme. We measure the EIT spectrum of a range of ns1/2 Rydberg states for n = 19-27, where the hyperfine structure can still be resolved. Our measurements span the range of magnetic fields from the low-field linear Zeeman regime to the high-field Paschen-Back regime. The observed spectra are very sensitive to small changes in magnetic fields and the polarization of the laser beams. We model our observations using optical Bloch equations that take into account the full multi-level structure of the atomic states involved and the decoupling of the electronic J and nuclear I angular momenta in the Breit-Rabi regime. The numerical model yields excellent agreement with the observations. In addition to EIT-related experiments, our results are relevant for experiments involving coherent excitation to Rydberg levels in the presence of magnetic fields.
Here we describe EIT experiments in a room temperature vapor cell for the 87Rb ns-states with principal quantum number n = 19-27. We drive the transition from the 5s ground state level to Rydberg levels using a two-photon transition via the intermediate 5p level. The upper 5p-ns transition serves as the coupling transition, and we measure the effect on a weak, resonant probe laser tuned to the 5s-5p transition. Despite the fact that our measurements are performed in a Doppler-broadened room-temperature vapor cell, we retrieve spectrally narrow EIT signals with a resolved Rydberg hyperfine splitting. Remarkably, the spectra change significantly already upon magnetic field variations of ∼0.1 G.

It is known that the polarization of the light influences the spectrum [25,26] through optical pumping effects [27]. A full description must consider the multi-level structure of the atom [28], typically the hyperfine and Zeeman substructure [28-30]. In order to explain our observations, we calculate the full density matrix for all 18 involved Zeeman levels by solving the optical Bloch equations (OBE). Fitting the solutions to our data involves averaging over the thermal velocity distribution, which is efficiently done on a supercomputer. We observe a strong influence on the spectra even when applying small magnetic fields (∼0.1 G), which we relate to the decoupling of the electronic J and nuclear I angular momenta. This finding is somewhat counter-intuitive, as one would expect that effect to be of major impact only at higher magnetic fields (Breit-Rabi regime). These results are important for all future applications using Rydberg excitation in the presence of magnetic fields. As an example, the so-called "magic field" of 3.23 G [31] is right in the Breit-Rabi regime for low-lying Rydberg states. At this field value the differential linear Zeeman shift between the ground state magnetic hyperfine sublevels |F, mF⟩ = |1, −1⟩ and |2, 1⟩ vanishes. This makes this pair of levels a good candidate qubit with suppressed sensitivity to magnetic field noise. Hence, the findings in this paper are important in the context of magnetically trapped qubits.
Experimental setup
The heart of the experimental setup [see Fig. 1(a)] consists of two laser beams at 480 nm (coupling beam) and 780 nm (probe beam), counter-propagating in a room temperature Rb vapor cell. The laser light is provided by two commercial diode lasers (TA-SHG Pro and DL pro, Toptica). Our experimental setup is similar to the one described in Ref. [32], with the addition that we use a sideband-locking scheme to stabilize the lasers to a high-finesse Fabry-Pérot cavity. This procedure yields laser linewidths of less than 10 kHz and precise control over the absolute laser frequency [33]. Scanning of the laser frequencies is done by varying the corresponding sideband locking frequencies. The laser beams are spatially overlapped in the vapor cell, with 1/e² beam radii of 0.9 mm and 0.5 mm for the 480 nm and 780 nm light, respectively. This configuration ensures that the probe light experiences a mostly uniform intensity distribution of the coupling light, and at the same time minimizes the effect of transit time broadening. Transit time broadening, due to the finite interaction time of Rb atoms at room temperature with the laser light, is estimated to be 400 kHz for the chosen probe beam radius. Typical laser powers are 10 µW for the probe and 150 mW for the coupling laser.
The vapor cell is 12 cm in length and is placed inside an 11 cm long coil consisting of 80 windings, introducing a near-homogeneous longitudinal magnetic field B along most of the vapor cell. Both the vapor cell and the coil are surrounded by a cylinder of mu-metal with a length of 175 mm and a diameter of 100 mm. We measure with a fluxgate magnetometer that the mu-metal reduces the parallel ambient magnetic field from 550 mG to 40 mG in the center, and 54 mG at the entrance plane of the cylinder. The magnetic field in the radial direction almost completely vanishes in the center.
Before taking EIT spectra, we fix the frequency of the probe laser at the 5s1/2, F = 2 → 5p3/2, F = 2 transition of 87Rb by adjusting the sideband frequency of the lock. This frequency is referenced to Doppler-free absorption spectroscopy in an additional Rb vapor cell. We then scan the frequency of the coupling laser across the Rydberg states ns1/2, F = 1, 2 for n = 19-27, where we can still distinguish the individual hyperfine levels [see Fig. 1(b)]. The frequency is scanned by stepping the locking sideband frequency, typically in equal steps of a few tens of kHz. After each step, we measure the transmission of the probe laser with a photodiode. An optical chopper in the coupling laser beam is used in combination with lock-in detection of the probe transmission to enhance the signal-to-noise ratio. We take one spectrum for each chosen magnetic field value inside the vapor cell.
Theoretical model
We investigate EIT in a configuration of four independent hyperfine levels as depicted in Fig. 1, consisting of the ground state 5s1/2, F = 2, the intermediate state 5p3/2, F = 2 and the Rydberg levels ns1/2, F = 1, 2 for n = 19-27. As expected from earlier findings [25,26], we observe that the EIT spectrum changes with different polarizations of the probe and coupling laser. Therefore, we incorporate the substructure of magnetic Zeeman levels for all the involved hyperfine states. Additionally, we measure a strong influence on the spectrum when applying a longitudinal magnetic field to the vapor cell. The changes are already noticeable for small magnetic fields of around 100 mG, and depend on the direction of the applied field. We therefore take into account the couplings and level shifts of the magnetic sublevels, leading to the Breit-Rabi diagram for the Rydberg manifold.
In Ref. [32] the spectrum of the two Rydberg hyperfine levels ns1/2, F = 1, 2 for n = 20-25 is fitted by the sum of two individual solutions to the analytical model of a three-level ladder system. In other references, including [27], the Zeeman substructure is accounted for by a sum over the involved levels for a given light polarization, weighted by the corresponding Clebsch-Gordan coefficients. Neither approach can explain the influence of the magnetic field that we see in our experiment. Therefore, we consider the full dynamics of the density matrix of all 18 Zeeman levels of the four hyperfine states depicted in Fig. 1(c). The atomic levels 5p3/2, F = 0, 1, 3 are only included indirectly as a decay channel for the atomic population in the Rydberg state, subsequently decaying to either 5s1/2, F = 1 or 5s1/2, F = 2. Atomic population decaying to 5s1/2, F = 1 is treated as loss, as these atoms no longer participate in the excitation dynamics. Due to the geometry of our experiment (the laser beams propagate parallel to the magnetic field B), we can only achieve either σ+ or σ− polarization in the quantization axis set by B. Hence, we limit our analysis to a combination of (σ+, σ+) or (σ+, σ−) polarization for probe and coupling laser [note: the cases (σ−, σ−) and (σ−, σ+) correspond to an inversion of the magnetic field].

[Figure 1 caption fragment: For the Rydberg level, the F, mF states are only good quantum numbers in the limit of low magnetic fields. Shown are the excitation paths for the combination of (σ+, σ+) polarization for the probe and coupling light, respectively. The gray (light) lines show all considered decay paths for atomic populations in the excited states. The atomic levels 5s1/2, F = 1 and 5p3/2, F = 0, 1, 3 do not participate directly in the EIT ladder scheme, but they are populated by decay of atomic population.]
We describe the dynamics of the system, including the atom-light interaction, spontaneous decay and other decoherence effects, by the master equation

ρ̇ = −i[H, ρ] + L_decay(ρ) + L_deph(ρ),   (1)

yielding a set of linear differential equations (optical Bloch equations). Here, the Hamiltonian H describes the coherent part of the dynamics, whereas the Lindblad superoperators L_decay(ρ) and L_deph(ρ) describe effects causing decoherence.
The Hamiltonian
We decompose the Hamiltonian as H = H_A + H_M + H_AL, where the individual terms describe the field-free atomic energies, the magnetic energy and the atom-light interaction for all involved levels. As a basis set we choose the magnetic sublevels |F, mF⟩, expressed in terms of the total angular momentum F and the magnetic quantum number mF. While F, mF are not good quantum numbers for the Rydberg levels, we find this basis nevertheless convenient. The (magnetic-field free) atomic Hamiltonian is written using the dressed basis states and the rotating-wave approximation (RWA). It has a simple diagonal form (setting ħ = 1),

H_A = −Δp P_{5s,F=2} − (Δp + Δc) P_{ns,F=1} − (Δp + Δc − A_ns) P_{ns,F=2}.   (2)

Here we defined the following symbols: Δp (Δc) is the detuning of the probe (coupling) laser, the latter defined relative to the F = 1, mF = 0 Rydberg state; P_{5s,F=2} is a projection operator onto the 5s1/2, F = 2 subspace, and similarly for the P_{ns,F} projection operators. The 5p3/2, F = 2 intermediate level has been arbitrarily chosen as the zero of energy. Finally, A_ns is the hyperfine splitting of the Rydberg level.
For the 5s and 5p subspaces the magnetic Hamiltonian H_M is written as H_M = gF µB Fz B, with Fz the z component of the total angular momentum operator F, and choosing the magnetic field as B = Bẑ. In our basis set, this results in H_M |F, mF⟩ = gF µB mF B |F, mF⟩. An important aspect for the Rydberg states is that the atomic energies experience a transition from a linear energy dependence on mF at small magnetic fields to a decoupling of mF into the magnetic quantum numbers mI and mJ at high magnetic fields, called the Paschen-Back regime. The transition between these regimes, the Breit-Rabi regime, is shown for the example of 23s1/2 in Fig. 2(a). For the Rydberg levels, we write

H_M = gS µB Sz B + gI µB Iz B   (3)

(as Jz = Sz for the ns1/2 states).
Here Sz and Iz are the z components of the electron spin S and nuclear spin I. In the following we neglect the second, nuclear spin term. The first, electronic spin term has diagonal as well as off-diagonal matrix elements in the chosen |F, mF⟩ basis. The off-diagonal elements couple states of equal mF but unequal F,

⟨F = 2, mF| H_M |F = 1, mF⟩ = −(gS µB B / 4) √(4 − mF²).   (4)

The diagonalization of H_A + H_M in the Rydberg ns subspace yields the Breit-Rabi diagram shown in Fig. 2(a). We find that these off-diagonal elements are crucial to accurately describe the measured EIT spectra. If we tentatively express the Rydberg Zeeman energy linearly in mF, we cannot reproduce our experimental observations. Remarkably, the off-diagonal elements contribute significantly already at small magnetic fields around 100 mG, which is much less than the hyperfine field (ħA20s/µB ≈ 5 G) and therefore far from the Paschen-Back regime.
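The diagonalization of H_A + H_M described here is straightforward to reproduce. The following is a minimal numerical sketch (our illustration, not the authors' code): it builds the hyperfine and Zeeman terms for an ns1/2 state of 87Rb (J = 1/2, I = 3/2) in the |mJ, mI⟩ product basis, where both Zeeman operators are diagonal, and diagonalizes to obtain the Breit-Rabi energies of Fig. 2(a). The hyperfine constant is an illustrative value for 20s, and the nuclear Zeeman term is neglected as in the text.

```python
import numpy as np

muB = 1.399624604   # Bohr magneton in MHz/G
gS  = 2.0023        # electron g-factor; the nuclear Zeeman term is neglected

def ops(j):
    """Jz, J+, J- for angular momentum j, in the basis m = j, j-1, ..., -j."""
    m = np.arange(j, -j - 1, -1)
    Jz = np.diag(m)
    Jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)
    return Jz, Jp, Jp.T

Sz, Sp, Sm = ops(0.5)    # J = S = 1/2 for ns_1/2 Rydberg states
Iz, Ip, Im = ops(1.5)    # nuclear spin I = 3/2 for 87Rb
id4 = np.eye(4)

# Hyperfine interaction A_hf * S.I; the F=2/F=1 splitting is 2*A_hf, so for
# A_ns ~ 7.7 MHz (20s, illustrative value) we take A_hf = A_ns / 2.
A_hf = 7.7 / 2
H_hf = A_hf * (np.kron(Sz, Iz) + 0.5 * (np.kron(Sp, Im) + np.kron(Sm, Ip)))

for B in (0.1, 1.0, 5.0, 15.0):                    # magnetic field in gauss
    H = H_hf + gS * muB * B * np.kron(Sz, id4)     # H_hf + H_M in |mJ, mI> basis
    E = np.sort(np.linalg.eigvalsh(H))             # Breit-Rabi energies in MHz
    print(f"B = {B:5.1f} G:", np.round(E, 2))
```

At B = 0 the eigenvalues fall into the two hyperfine manifolds separated by A_ns; at 15 G they group according to mJ, reproducing the Paschen-Back limit of Fig. 2(a).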
The matrix elements of the atom-laser interaction Hamiltonian H_AL are given by the usual products of a reduced dipole matrix element and a Clebsch-Gordan coefficient. For the 5s-5p transition, we can write the matrix elements of H_AL as

⟨5p, F = 2, mF + q| H_AL |5s, F = 2, mF⟩ = (Ωp/2) c_ij,   (5)

with Ωp the probe Rabi frequency, proportional to εq, the component of the laser amplitude with polarization q = ±1, and c_ij the corresponding Clebsch-Gordan coefficient. Similar expressions apply for the 5p-ns transitions. In this case we write Ωc for the Rabi frequency.
Dissipative terms
The second term in the sum of Eq. (1) accounts for the spontaneous decay and optical pumping.

It can be written by means of the Lindblad superoperator

L_decay(ρ) = Σ_{i,f} Γ_fi [ C_fi ρ C†_fi − (1/2)(C†_fi C_fi ρ + ρ C†_fi C_fi) ],   (6)

where C_fi = |f⟩⟨i|. The summation is performed over all allowed pairs {i, f} of F, mF sublevels. The decay rate Γ_fi is expressed as the product of the decay rate of the involved hyperfine level (Γ5p, Γns,F=1 and Γns,F=2) and the square of the corresponding Clebsch-Gordan coefficient.

For final states f outside the considered subspace of 18 levels we omit the term C_fi ρ C†_fi, which thus leads to loss of total atom population. For example, atomic population in the intermediate state 5p3/2, F = 2 can decay to either the 5s1/2, F = 1 or F = 2 ground state, where the former is treated as loss of atoms. As we treat atomic population decaying to 5s1/2, F = 1 as a loss mechanism, we omit the term C_fi ρ C†_fi in Eq. (6) for this level. For simplicity, we assume that atomic population in the Rydberg states predominantly decays to the 5p3/2 level. We further simplify the problem by assuming that the atomic population decaying to 5p, F = 2 undergoes an immediate subsequent decay to either the 5s1/2, F = 1 or F = 2 ground state. This is justified by the fact that the 5p, F = 2 levels are far off-resonant with respect to the probe laser, and that Γ5p ≫ Γns,F=1, Γns,F=2. The third term L_deph(ρ) in Eq. (1) describes all dephasing effects, including the influence of the finite laser linewidths of the probe (γp) and coupling (γc) laser. For simplicity, we include additional broadening effects such as transit time broadening and collision-induced broadening in γc. In this case, we express L_deph as

L_deph(ρ) = (γp/2)(Cp ρ Cp − ρ) + (γc/2)(Cc ρ Cc − ρ),   (7)

where Cp = −P5s + P5p + Pns and Cc = P5s + P5p − Pns are expressed in terms of the projection operators as defined earlier.
Steady state solution and susceptibility
If we solve for the steady state (ρ̇ = 0) of, for example, the system depicted in Fig. 1(c), we obtain the obvious result that the atomic population resides in the dark states 5s1/2, F = 2, mF = 2 and 5s1/2, F = 1. Hence, this simple steady-state solution cannot explain our experimental data. In order to find an adequate description of the excitation dynamics, we follow two approaches: (1) starting from an equal distribution among the ground state Zeeman levels, we calculate the time-dependent solution of the OBE, and evaluate it at the average time that an atom resides in the probe beam (τ ≈ 3 µs at room temperature), or (2) we assume constant fluxes of atoms leaving and entering the probe beam, the latter refilling the atomic population in the magnetic ground states. The flux into or out of the beam can in principle be estimated as Φ = (1/4) n v̄ A, with n the atom density, v̄ = √(8kB T/πm) the average thermal velocity and A = πDL the surface area of a beam of diameter D in a cell of length L. For the simulation we are merely interested in setting Φ ≠ 0, to ensure that the steady state is not a dark state. The precise value of Φ is then an overall multiplier to the amplitude of all simulated signals.
Thus we describe the departure and arrival of atoms by adding ∂ρ/∂t = (P_{5s,F=2} − ρ)/τ to the optical Bloch equations.
For the latter approach we obtain a steady-state solution with atomic population also residing in non-dark states. Fig. 2(b) shows simulated spectra obtained for both approaches. It should be noted that we subtract a background spectrum (with Δc far off-resonant) from the time-dependent solution. We can conclude that both approaches yield similar results. As the second approach is closer to the experimental reality, we proceed with it for the rest of this work.
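To make this construction concrete, here is a minimal sketch of the second approach, reduced to a bare three-level ladder rather than the full 18-level system (our own simplification, with illustrative parameter values, not the authors' code): the master equation (1) is vectorized, the refilling term (P_{5s,F=2} − ρ)/τ is added, and ρ̇ = 0 is solved as a linear system.

```python
import numpy as np

# Minimal 3-level ladder stand-in for the 18-level system: |0>=5s, |1>=5p, |2>=ns.
# All rates in units of 2*pi*MHz; values are illustrative, not the paper's.
Wp, Wc = 0.5, 4.0        # probe and coupling Rabi frequencies
Gamma  = 6.0             # 5p -> 5s spontaneous decay rate
gp, gc = 0.1, 0.3        # probe / coupling dephasing (laser linewidths etc.)
tau    = 3.0             # transit time; leaving atoms are replaced in the ground state

I3 = np.eye(3)
def lhs(A): return np.kron(A, I3)      # vec(A @ rho), row-major vectorization
def rhs(B): return np.kron(I3, B.T)    # vec(rho @ B)

def liouvillian(dp, dc):
    H = np.array([[0.0,   Wp/2, 0.0],
                  [Wp/2, -dp,   Wc/2],
                  [0.0,   Wc/2, -(dp + dc)]], dtype=complex)
    L = -1j * (lhs(H) - rhs(H))
    C = np.zeros((3, 3), dtype=complex); C[0, 1] = np.sqrt(Gamma)  # jump |1> -> |0>
    CdC = C.conj().T @ C
    L += np.kron(C, C.conj()) - 0.5 * (lhs(CdC) + rhs(CdC))        # cf. Eq. (6)
    for g, D in ((gp, np.diag([-1., 1., 1.])), (gc, np.diag([1., 1., -1.]))):
        L += (g / 2) * (np.kron(D, D) - np.eye(9))                 # cf. Eq. (7)
    return L - np.eye(9) / tau                                     # atoms leaving the beam

rho0 = np.diag([1.0, 0.0, 0.0]).astype(complex)   # refilling state, cf. P_{5s,F=2}
for dc in (-1.0, 0.0, 1.0):
    rho = np.linalg.solve(liouvillian(0.0, dc), -rho0.reshape(-1) / tau).reshape(3, 3)
    print(f"dc = {dc:+.1f}: Im(rho_01) = {rho[0, 1].imag:+.5f}")   # ~ probe absorption
```

The refilling term makes the linear system nonsingular, so the steady state follows from a single solve instead of a time integration.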
The probe absorption is proportional to the imaginary part of the susceptibility χ. We relate the susceptibility χ of the probe transition to the density matrix ρ [1]. For the probe transition with polarization q = ±1, we look at the elements ρij corresponding to a transition from a ground state |gi⟩ = |F = 2, mF⟩ to an intermediate state |ej⟩ = |F = 2, mF + q⟩, with Clebsch-Gordan coefficient cij. We approximate the probe absorption as

Im χ ∝ Σ_ij cij ∫ dv N(v) Im ρij(v).   (8)

Here N(v) is a one-dimensional Maxwell-Boltzmann velocity distribution for the atoms in the vapor cell at room temperature. The elements ρij become velocity dependent through the Doppler shifts of the detunings, Δp → Δp − kp v and Δc → Δc + kc v for the counter-propagating beams. We numerically evaluate the integral in Eq. (8) for a sufficiently large vmax, effectively averaging our expression over the velocity distribution of the atoms.
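The velocity average of Eq. (8) can then be sketched as follows, reusing liouvillian, rho0 and tau from the previous sketch; the counter-propagating Doppler shifts enter with opposite signs, and all numerical values are again illustrative.

```python
import numpy as np

kB, mRb = 1.380649e-23, 1.44316e-25      # Boltzmann constant (J/K), 87Rb mass (kg)
T = 295.0                                 # room temperature in K
sv = np.sqrt(kB * T / mRb)                # 1D rms thermal velocity, ~168 m/s
kp, kc = 1.0 / 780e-9, 1.0 / 480e-9       # probe / coupling k/2pi in 1/m

def doppler_averaged_signal(dp, dc, vmax=None, nv=201):
    """Velocity average of Im(rho_01), the numerical analogue of Eq. (8)."""
    vmax = vmax or 4 * sv
    v = np.linspace(-vmax, vmax, nv)
    N = np.exp(-v**2 / (2 * sv**2)) / (np.sqrt(2 * np.pi) * sv)  # 1D Maxwell-Boltzmann
    sig = np.empty(nv)
    for a, vi in enumerate(v):
        # counter-propagating beams: opposite-sign Doppler shifts, converted to MHz
        dpv = dp - kp * vi * 1e-6
        dcv = dc + kc * vi * 1e-6
        rho = np.linalg.solve(liouvillian(dpv, dcv),
                              -rho0.reshape(-1) / tau).reshape(3, 3)
        sig[a] = rho[0, 1].imag
    return np.trapz(sig * N, v)

print(doppler_averaged_signal(0.0, 0.0))
```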
Computational methods
We implement numerical solvers for both the time-dependent and the steady-state model using Fortran modules to solve the master equation [Eq. (1)], employing routines from the odepack library to solve the resulting system of complex differential equations. These Fortran modules are combined with a Python wrapper for the velocity-class integration of Eq. (8) as well as for loading experimental data, fitting the model to experimental traces and storing the results. For a given experimental trace, the measured data are sampled for a fixed range of coupling frequencies, using spline interpolation if necessary, to gain control over the sampling density for numerical performance. We then call out to the Fortran solver to obtain solutions to the model on an appropriate grid of probe and coupling frequencies. These are integrated in Python over a range of velocity classes, taking the appropriate Doppler shifts into account. Finally, the result is compared to the experimental trace. Fitting is performed using the lmfit routines in Python.
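The structure of such a fitting layer can be sketched with lmfit as below. This is an assumption about the overall workflow rather than the published code: the OBE solver is replaced by a toy two-Lorentzian model (simulate_spectrum, our placeholder) so that the example runs standalone, and the point being illustrated is the joint fit of many traces with one shared parameter set.

```python
import numpy as np
from lmfit import Parameters, minimize

def simulate_spectrum(f, B, amp, A_hf, width, slope):
    """Toy stand-in for the Doppler-averaged OBE solution: two Lorentzians
    separated by the hyperfine splitting A_hf, shifting linearly with B."""
    out = np.zeros_like(f)
    for sign in (+1.0, -1.0):
        center = sign * A_hf / 2 + slope * B
        out = out + amp / (1 + ((f - center) / width) ** 2)
    return out

def residuals(params, freqs, traces, fields):
    """Concatenate residuals of all traces so one parameter set fits them jointly."""
    p = params.valuesdict()
    return np.concatenate([
        simulate_spectrum(freqs + p["f_offset"], B, p["amp"], p["A_hf"],
                          p["width"], p["slope"]) - tr
        for B, tr in zip(fields, traces)])

params = Parameters()
for name, val in [("amp", 0.8), ("A_hf", 8.5), ("width", 1.2),
                  ("slope", 0.4), ("f_offset", 0.1)]:
    params.add(name, value=val)

# Synthetic "measured" traces at three magnetic fields, with noise:
freqs = np.linspace(-10, 10, 201)
fields = [-0.4, 0.0, 0.4]
rng = np.random.default_rng(0)
traces = [simulate_spectrum(freqs, B, 1.0, 7.7, 1.0, 0.5)
          + 0.02 * rng.standard_normal(freqs.size) for B in fields]

result = minimize(residuals, params, args=(freqs, traces, fields))
print(result.params["A_hf"].value)   # recovered splitting, close to 7.7
```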
In fitting the experimental data we initially determine a magnetic field calibration based on the data for 20s presented below in Fig. 3. This field calibration is used for all subsequent fits presented here. In fitting the data for a given principal quantum number n and polarization, we always fit all traces (measured at different applied magnetic fields) with the same set of parameters and the field calibration obtained in the fit for 20s. We generally fit a linear combination of both the (σ+, σ+) and the (σ−, σ+) cases to account for imperfect polarization. The free parameters varied in the steady-state fits are the Rabi frequencies of the red and blue transitions (Ωp and Ωc, respectively, for both polarizations), the effective linewidths γp and γc of these transitions, the hyperfine splitting of the state, the refilling rate for the ground state, as well as a global amplitude of the signal and an absolute frequency offset. This number of fitting parameters may seem rather large; however, one set of parameters describes up to 41 individual traces (in Fig. 3). Furthermore, not all parameters are equally significant. Of primary interest are the hyperfine splittings, for which we find A_ns × n*³/2π = 36.3(4) GHz [with n* = (n − δ) the effective principal quantum number]. The fitted Rabi frequencies (given in the caption of Fig. 3) are consistent with the estimated intensities of the laser beams. The fitted effective linewidths γc, γp were in the few-100-kHz range, which is plausible and difficult to check independently. The refilling rate and the global amplitude were essentially interchangeable.
The system of complex differential equations is large due to the 18 involved Zeeman levels. Calling the odepack library during the fitting procedure is therefore computationally intensive. When performing the velocity class integration necessary to obtain a single data point, we need to solve the system of equations for each velocity class separately. This further increases the computational complexity. In order to obtain results on acceptable timescales, we use the supercomputing capabilities of the Lisa Compute Cluster (as part of the SURFsara Research Capacity Computing Services). The fitting routine for a given Rydberg state and a given polarization is allocated to one node of the Lisa Cluster, which consists of 16 independent cores. Running the program for about 5 days on one node gives a sufficient number of iterations to obtain acceptable fitting results. By employing different nodes for different states at the same time, we can evaluate the data in parallel.
Experimental results
Our measurements are based on acquiring the EIT signal at a specific detuning Δc of the coupling laser whilst keeping the probe laser at a constant frequency. Scanning the detuning Δc as described in Sec. 2 at a specific applied magnetic field value, we acquire a magnetic-field-dependent EIT spectrum. In order to verify that electric stray fields do not cause the observed changes to the spectrum, we temporarily introduced a vapor cell with electric field plates inside (not shown in Fig. 1) to measure the influence of electric fields. These plates allow for applying a near-homogeneous electric field (compare [32]) inside the cell. For small applied electric fields (a few V cm⁻¹) we do not observe a change in the spectral features besides an overall frequency shift due to the electric Stark effect. We aim to investigate the Breit-Rabi transition of the Rydberg states' magnetic sublevels, from a linear behavior in mF at low magnetic fields to a decoupling of mF into its components mI and mJ at higher magnetic fields (the Paschen-Back regime). We probe this transition for the 20s1/2 Rydberg level by applying a range of magnetic fields from −15 G to 15 G and measuring EIT spectra. These spectra constitute the density plots shown in Fig. 3, which are based on measurements for either (σ+, σ−) or (σ+, σ+) probe and coupling laser polarization. The choice of either σ+ or σ− polarized light leads to the simplest description of the system's dynamics, as the laser light polarization cannot have a component in the magnetic field direction (see Sec. 2). We verify for selected EIT spectra that the spectrum for the (σ−, σ+)/(σ−, σ−) configuration closely resembles the one at (σ+, σ−)/(σ+, σ+) after inverting the magnetic field. Hence, the resulting magnetic field dependence can be obtained by simply mirroring the data in Fig. 3 about the frequency axis. Furthermore, by creating an equal superposition of σ+ and σ− polarization for both lasers, we obtain a spectrum which resembles a mixture of both data sets shown. Independent of these findings, we allow for a small admixture of the opposite polarization in the fitting procedure (see Sec. 3.4). This accounts for the fact that we always have imperfect polarizations in the actual experimental apparatus. For example, the change in polarization introduced by the waveplates in the optical setup before the vapor cell [compare Fig. 1(a)] is wavelength dependent (e.g. when changing between different n). Also, the glass cell itself might introduce further modifications of the laser polarization which are difficult to predict.
Both data sets show a multitude of different lines, originating from the two hyperfine levels F = 1 and F = 2 of the Rydberg state, which are resolved at magnetic fields close to 0 G. In order to gain a qualitative understanding of the data, one can identify that two photons with (σ+, σ−) and (σ+, σ+) polarization lead to a change of ΔmF = 0 and ΔmF = 2, respectively. Thus, in the case of (σ+, σ−) we expect the transition frequencies to stay roughly constant with increasing magnetic field, whereas for (σ+, σ+) the transition frequencies are expected to increase with the applied magnetic field. Indeed, this expected behavior is visible in the most pronounced lines in each plot. At higher magnetic fields (> 5 G) the frequencies of the observed experimental lines shift linearly with the applied magnetic field. This can be well understood in terms of the linear energy shift of the ground state mF levels, and the linear shift of the Rydberg state mJ levels in the Paschen-Back regime [see Fig. 2(a)]. Hence, the transition frequency between these levels is also linear in the applied magnetic field. The multitude of different magnetic sublevels involved [compare Fig. 1(c)] leads to a range of transition frequencies with different magnetic field dependences. This is reflected by the difference in slope of the experimental lines. It should be noted that the measured spectra are not a trivial reproduction of the simple Breit-Rabi diagram, as they also contain the magnetic field substructure of the ground and intermediate levels.
Besides the qualitative description, we also provide a theoretical account based on solving Eq. (1) for the system under investigation and using the fitting routine described in Sec. 3.4. We show the theoretical result for both combinations of laser polarization in Fig. 3. Comparing the theoretical predictions with the actual data, we find very good agreement over the full range of applied magnetic fields. All major experimental lines are reproduced, as are their relative strengths and magnetic field dependence. Our model describes the non-linear behavior in the Breit-Rabi regime at magnetic fields between 1 and 5 G equally well as the near-linear behavior for magnetic fields in the Paschen-Back regime. Overall, the good agreement between measurement and simulation verifies the theoretical assumptions made in Sec. 3.
In order to examine the Breit-Rabi regime of the Rydberg magnetic sublevels in more detail, we investigate the response of the EIT spectrum to small changes in the applied magnetic field. To this end, we acquire EIT spectra at nine equidistant magnetic field values in the range from −0.8 to 0.8 G. We present these spectra for the 19s1/2, the 21s1/2 and the 23s1/2 Rydberg levels and different combinations of probe and coupling laser polarization in Fig. 4. The F = 1 and F = 2 hyperfine levels are visible as two distinct peaks, separated by the hyperfine splitting of the respective Rydberg state. The acquired spectrum for the 19s1/2 Rydberg state shows only a weak influence of the applied magnetic fields. The influence is much more pronounced for the 21s1/2 and 23s1/2 Rydberg levels. In the latter case we observe an inversion of the relative peak height when changing the magnetic field polarity from negative to positive values.
Furthermore, we present the simulated EIT signal for the respective Rydberg states. Again, the simulation is based on fitting the result of Eq. (1) to the data set under investigation (see Sec. 3.4). As for the measurements in Fig. 3, the theoretical prediction closely reproduces the main features of the measured spectra, such as the relative peak heights and the magnetic field dependence. The inversion of the relative peak height for the 23s1/2 state also appears in the simulated spectra. Given the excellent agreement with the simulation, this behavior can be well understood from the presence of the off-diagonal terms given by Eq. (4) in the magnetic Hamiltonian H_M. These terms result from the decoupling of the J and I quantum numbers of the Rydberg states in the Breit-Rabi regime, and introduce an effective mixing of the F states. This effect increases with decreasing hyperfine splitting, which explains the differences between the spectra of the 19s1/2 and 23s1/2 states. Hence, we can indirectly observe the Breit-Rabi transition in our spectrum, even at small magnetic field values.
Discussion
Looking at the spectra in Fig. 3, it is obvious that the transition from low to high magnetic fields is not a simple reproduction of the Breit-Rabi diagram of the Rydberg levels as shown in Fig. 2(a). The reason is that the spectrum is also influenced by the level shifts of the Zeeman substructure of the ground and intermediate states, optical pumping effects and the residual Doppler broadening. However, the spectrum clearly reproduces the selection rules introduced by the laser light polarization, and shows that the high-field behavior is a linear function of the applied magnetic field. This is a direct result of the Paschen-Back regime for the Rydberg levels (linear in mJ) and the linear energy shift of the ground state levels (in mF). A remarkable observation is that magnetic fields small compared to the hyperfine field A_ns/µB strongly influence the spectra. This influence increases with decreasing hyperfine splitting A_ns of the Rydberg levels, as can be seen by comparing the n = 19 and n = 23 Rydberg levels in Fig. 4. For n = 23 (and also for n = 24-27) the change in magnetic field (−0.8 to 0.8 G) leads to a complete inversion of the relative height of the peaks attributed to F = 1 and F = 2. As discussed earlier, we can attribute this to the influence of the off-diagonal elements in Eq. (4), which are a direct consequence of the decoupling of the total angular momentum F into the components J and I in the Breit-Rabi regime. We verified this by calculating the corresponding spectra based on a model where the Rydberg states shift linearly in energy with mF. The result did not reproduce the observed change in peak height, but solely predicted a frequency shift of the total spectrum. This shift is observed for the spectrum at n = 19, where the hyperfine splitting is relatively large (A19s/2π ≈ 9 MHz) so that the influence of the off-diagonal elements is less pronounced.
Despite the spectrum's complexity [compared to Fig. 2(a)], it is nevertheless possible to understand our results quantitatively. While we cannot simply read off the hyperfine splitting A_ns of the Rydberg levels at B = 0, we use A_ns as a fitting parameter for the complete data set at a given n in our fitting routine. For the rescaled hyperfine splittings we find A_ns × n*³/2π = 36.3(4) GHz, similar to [32], but with slightly less scatter. In [32] EIT signals were fitted by the sum of two individual solutions to the analytical model of a three-level ladder system. The resulting (scaled) hyperfine splittings varied by about 3 percent.
Our measurements also show that precise values for the Rydberg hyperfine splittings can be obtained in room-temperature vapor cells. There are several options to further improve our measurements in future experiments. The magnetic shielding can be improved by embedding the vapor cell in a longer and narrower mu-metal cylinder. Better magnetic field control is possible using a longer solenoid producing more homogeneous magnetic fields. A reduction of the number of fitting parameters appears feasible, as we found that the overall amplitude and refilling rate are interchangeable, and the red laser linewidth could essentially be fixed. The use of wider laser beams would reduce the influence of transit time broadening. Better control of the laser light polarization is also possible, for example using in-situ measurement with a polarimeter.
Conclusion
Our measurements show that the EIT spectrum for the ns1/2 Rydberg states with n = 19-27 is strongly influenced by the presence of small magnetic fields (< 1 G) (see Fig. 4). Furthermore, the polarization of the involved laser beams strongly changes the measured spectrum (see Fig. 3). We investigate the EIT spectrum of the 20s1/2 Rydberg state for a wide range of magnetic field values (Fig. 3), showing a transition from two resolvable hyperfine levels to a multitude of lines with a linear frequency scaling. The experimental observations are well reproduced by the theoretical approach provided in Sec. 3. Our theoretical model accounts for the multi-level structure of the 5s1/2, F = 2 ground state, the 5p3/2, F = 2 intermediate state and the two Rydberg states ns1/2, F = 1, 2. An essential part of the modeling is also the averaging over the thermal velocity distribution in the vapor cell. A crucial aspect for the Rydberg states is the decoupling of the F angular momentum into its components I and J in the Breit-Rabi regime. From the measurements in Fig. 3 we can retrieve the Rydberg states' behavior, both at small magnetic fields and in the Paschen-Back regime, where the magnetic sublevels group according to their mJ quantum number (see Fig. 2). The behavior of the magnetic sublevels in the Breit-Rabi regime also accounts for the strong changes observed in the spectrum for magnetic fields below 1 G presented in Fig. 4. While we cannot resolve individual magnetic sublevels in the measurements at low magnetic fields, we can still clearly identify their influence on the spectrum, based on the excellent agreement with our theoretical model. This sensitivity to weak magnetic fields makes a detailed understanding important in a variety of applications of EIT in thermal vapors. Examples of such applications include photon storage and retrieval, nonlinear optics, the generation and manipulation of single photons, quantum information science, Rydberg polaritons, etc. [3-24]
Figure 2: (a) Calculated atomic energies of the magnetic sublevels of 23s1/2, F = 1 and F = 2 in the Breit-Rabi regime. For small magnetic fields (≲ 1 G) the atomic levels of the hyperfine states F = 1, 2 are labeled by the magnetic quantum numbers mF and shift linearly with B. For higher magnetic fields (> 3 G) the nuclear spin I and the electron angular momentum J decouple, and the magnetic levels group according to mJ. (b) Simulated EIT spectra comparing the time-dependent and the steady-state solutions of the optical Bloch equations.
Figure 3: Measured (LIA: lock-in amplifier signal) and simulated EIT spectra for the 20s1/2 Rydberg level for both the (σ+, σ−) and (σ+, σ+) combinations of probe and coupling laser polarization. The density plots of the data shown consist of 31 and 41 individual EIT spectra, respectively, with a frequency resolution of 80 kHz in Δc. As we have no absolute reference for the 0 MHz mark in the experiment, it is chosen midway between the two two-photon resonances at zero field. Each spectrum is taken at a different magnetic field value ranging from −15 to 15 G. For magnetic fields between −5 and 5 G the detuning was scanned over a smaller range, −20 to 20 MHz, because outside this range no spectroscopic features could be observed. The simulated data are based on the fitted theory parameters evaluated at the same magnetic fields and frequencies as the data. The fitted hyperfine splittings are 7.70 and 7.71 MHz for (σ+, σ−) and (σ+, σ+), respectively. The fitted Rabi frequencies (Ωp/2π, Ωc/2π) are (14.7, 5.1) MHz and (6.8, 5.1) MHz, respectively.
Figure 4: Measured (LIA: lock-in amplifier signal) and simulated EIT spectra for the 19s1/2 (σ+, σ+), 21s1/2 (σ+, σ+) and 23s1/2 (σ+, σ−) Rydberg levels, i.e., for different combinations of probe and coupling laser polarization. The measured spectra are taken at nine equidistant magnetic field values in the range from −0.8 to 0.8 G. The data of one Rydberg state are fitted with a single set of parameters, resulting in the theoretical spectra shown beneath the respective Rydberg state. The fitted hyperfine splittings are 9.13, 6.34, and 4.68 MHz (left to right). The fitted Rabi frequencies (Ωp/2π, Ωc/2π) are (10.7, 3.8) MHz, (11.0, 5.5) MHz, and (7.3, 4.0) MHz. Note: the measured signal of the 21s1/2 state is truncated above 10 V by the data acquisition system. The sharp peaks in the 23s1/2 signals are spurious, due to electronic noise.
"Physics"
] |
Exploiting Waste towards More Sustainable Flame-Retardant Solutions for Polymers: A Review
The development of sustainable flame retardants is gaining momentum due to their enhanced safety attributes and environmental compatibility. One effective strategy is to use waste materials as a primary source of chemical components, which can help mitigate the environmental issues associated with traditional flame retardants. This paper reviews recent research on waste-derived flame retardants, categorizing them by waste type: industrial, food, and plant waste. It highlights recent advancements in this area, with emphasis on their impact on the thermal stability, flame retardancy, smoke suppression, and mechanical properties of polymeric materials. The study also summarizes the functionalization methodologies used and the key factors involved in modifying polymer systems. Finally, the major challenges and prospects for the future are identified.
Introduction
Polymeric materials are essential in daily life, industrial facilities, and medical applications [1]. However, their flammability poses significant risks to human life and property [2]. As a result, public awareness of the flame-retardant properties of polymers has increased [3-5]. The growing awareness of environmental issues, the energy crisis, and advancements in science and technology have led to new requirements for flame-retardant polymeric materials [6,7]. Sustainable flame retardants, obtained from renewable sources and manufactured using environmentally friendly chemical processes, have gained scientific attention [8,9]. These flame retardants have negligible adverse effects on human well-being and the ecosystem, offering a potential solution for enhancing fire safety while maintaining sustainability.
One potential source of flame retardants is waste material. Numerous waste materials contain inherent natural chemicals that exhibit flame-retardant properties. By harnessing these materials, it becomes possible to mitigate landfill accumulation while simultaneously establishing a more sustainable reserve of flame retardants. Waste materials that have been studied for their potential use as flame retardants include industrial, food, and plant waste. Scientists have found that these materials can be effective at slowing down the spread of fire when added to other materials. Researchers have explored the use of agricultural waste materials like rice husk, wheat straw, and maize stalks for their flame-retardant properties. In a typical case, Wang et al. [10] found that adding corn stalk biochar (CSB) to high-density polyethylene (HDPE) could enhance its flame retardancy, while the limiting oxygen index (LOI) remained at 25.5%. The presence of CSB at a 60.0% concentration significantly decreased the peak heat release rate (pHRR) and total heat release (THR) of the HDPE composites, with a 46.1% decrease in pHRR and a 44.6% reduction in THR compared to pure HDPE.
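Figures of merit such as pHRR and THR come from cone-calorimeter heat release rate (HRR) curves; for readers unfamiliar with them, here is a minimal sketch of how these quantities and the quoted percentage reductions are computed (with made-up curves, not data from [10]):

```python
import numpy as np

def phrr_thr(t, hrr):
    """Peak heat release rate (kW/m^2) and total heat release (MJ/m^2)
    from a cone-calorimeter HRR curve sampled at times t (s)."""
    return hrr.max(), np.trapz(hrr, t) / 1000.0  # kW*s/m^2 = kJ/m^2 -> MJ/m^2

t = np.linspace(0, 600, 601)                     # time in seconds
hrr_neat = 400 * np.exp(-((t - 150) / 60) ** 2)  # illustrative neat-polymer curve
hrr_comp = 180 * np.exp(-((t - 200) / 90) ** 2)  # illustrative composite curve

(p0, th0), (p1, th1) = phrr_thr(t, hrr_neat), phrr_thr(t, hrr_comp)
print(f"pHRR reduction: {100 * (p0 - p1) / p0:.1f}%")
print(f"THR  reduction: {100 * (th0 - th1) / th0:.1f}%")
```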
Classification and Application of Waste-Based Flame Retardant
Waste-based flame-retardant additives are sustainable alternatives to traditional flame-retardant additives. They are derived from waste products and can be categorized based on their source or chemical composition. Examples include fly ash, steel slag, eggshell, bagasse, oyster shell powder, waste cooking oil, fish scales, fish deoxyribonucleic acid, coffee grounds, rice husk, cellulose nanofibers, lignin, and so on. These materials originate from industry, food, or plants.
Industrial Wastes
As civilization advances, industrial waste, including fly ash and steel slag, poses a threat to the environment and human health. However, these wastes have potential for recycling due to their diverse flame-retardant components, making them suitable for manufacturing flame-retardant polymers. Some examples follow.
Fly Ash (FA)
The expansion of the power sector has led to a significant increase in FA emissions, causing significant contamination of land, water, and air [27]. FA is a byproduct of thermal power plants and is primarily composed of silicon dioxide (SiO2), aluminum oxide (Al2O3), and iron oxide (Fe2O3). The principal uses of FA include building materials, construction, roads, fill material, and agricultural applications [28,29]. Its unique chemistry and structure strongly influence its flame retardancy. Researchers are interested in the appropriate treatment and exploitation of FA, which is present in various composites (Table 1). Note: Loading ratio represents the flame retardant/fly ash ratio.
PU. The addition of FA to polyurethane (PU) was used as a synergistic agent to achieve a specific flame-retardant performance. Usta's [30] research on rigid polyurethane foams (RPUFs) with FA and an intumescent flame retardant (IFR) showed that the total heat release (THR) of RPUF/FA was 19% lower than that of RPUF. The incorporation of IFR and FA/IFR into RPUF reduced the THR by 26% and 33%, respectively. This indicated that IFR had a good synergistic effect with FA. Furthermore, Zhou et al.'s [31] study on thermoplastic polyurethane (TPU) showed that substituting IFRs with FA could improve the flame-retardant effect. As shown in Figure 2a,b, the peak heat release rate (pHRR) of TPU/25 wt% IFR composites was reduced by 77.4% compared to pure TPU, and the total smoke production (TSP) was reduced by 15.7%. Moreover, the pHRR and TSP of TPU/20 wt% IFR/5 wt% FA were reduced by 91.1% and 56.7%, respectively, compared to pure TPU. Thus, a modest quantity of FA paired with IFRs can improve the fire safety of TPU materials. EP. The mixing of FA with epoxy resin (EP) was used as a synergistic agent to obtain a specific flame-retardant effect. Zanoletti et al. [32] found that FA, stabilized through a simple stabilization process, was a promising alternative to traditional flame retardants. Furthermore, the inclusion of FA influenced the electrical attributes of the composites. Nguyen [33] examined the additive effects of nanoclay and FA on mechanical properties, flame retardancy, and electrical characteristics. The results showed that the nanocomposites had a tensile strength of 64.1 MPa, a flexural strength of 89.3 MPa, a compressive strength of 215.2 MPa, and an Izod impact strength of 14.5 kJ/m², with an LOI of 26.8% for the fire-retardant material at a combined ratio of 40.0% FA and 3.0% nanoclay. The inclusion of nanoclay produced a winding (tortuous) conductive channel that limits charge transport, thereby affecting the electrical properties of the composites.
Polyolefin. Song et al. [34] used FA as a synergist to improve flame resistance in intumescent flame-retardant PP containing hydroxymethylated lignin. As shown in Figure 3, the addition of 0.5% FA to the polypropylene (PP) composite (0.5FA/IFR/PP) significantly improved flame retardancy, increased the LOI from 28% to 33%, and passed the V-0 rating in the UL-94 test. FA also reduced the pHRR of pure PP by 58% and achieved a higher char residue. Others. FA might be present in a diverse range of polymers beyond those previously indicated. (1) Polystyrene (PS). The electrospinning procedure resulted in continuous PS fibers with excellent FA particle dispersion, as shown in Figure 4. Park et al. [35] studied the flame-retardant properties of these composite nanofibers; the addition of FA particles increased the LOI value of the polystyrene membrane. Furthermore, the FA-PS composite membrane shrank and self-extinguished after being removed from the fire source.
(2) Ethylene vinyl acetate (EVA). FA might be utilized as a starting material for the synthesis of flame retardants. Li et al. [36] synthesized a smoke-suppressing and flame-retardant layered double hydroxide (LDH) containing ternary Mg-Al-Fe, which was investigated in EVA composites. The results showed that most samples achieved a V-0 rating in the UL-94 test and an LOI of up to 28.5%. The EVA sample had the lowest pHRR and THR.
Steel Slag
Steel slag, a common solid waste from industry, is a result of China's rapid steel industry growth, leading to annual emissions exceeding 100 million tons [37]. Technical constraints hinder its comprehensive utilization, causing it to pollute the environment and encroach on land. Steel slag is primarily composed of metal oxides like silicon oxide (SiO2), calcium oxide (CaO), aluminum oxide (Al2O3), and manganese dioxide (MnO2) [38], which can limit smoke, harmful chemicals, and heat release [39-41].
Steel slag was an ingredient in modified rigid PU foam (RPUF) composites (Table 2). Yang et al. [42,43] investigated the utilization of steel slag to modify RPUF with typical flame retardants. As shown in Figure 5a,b, the study found that adding steel slag to RPUF improved thermal stability and reduced heat release, with a 1:1 ratio resulting in lower THR and pHRR compared to the pure sample. As shown in Figure 5c, Tang et al.'s [44] research also showed that when modified steel slag and expandable graphite were mixed into RPUF, the modified steel slag enhanced the rate of expansion while lowering the coefficient of thermal conductivity. Additionally, as shown in Figure 5d,e, a 10% combination of modified steel slag and expandable graphite reduced the pHRR and THR of the RPUF composites by 55% and 47%, respectively.
Food Wastes
Food waste, including eggshell, bagasse, banana peel powder, oyster shell powder, waste cooking oil, fish scales, DNA, and coffee grounds, is increasing due to improved living standards. These wastes contain valuable ingredients like calcium carbonate (CaCO3), cellulose, hemicellulose, and lignin, which have potential for making flame-resistant polymer compounds and are being explored as potential sources.
Eggshell
Eggshell, an aviculture byproduct, has been found to have reclamation potential due to its chemical composition, low cost, light weight, and environmental advantages [45].
Scientists have applied eggshell (ES) to polymers as a synergistic filler (Table 3), with CaCO3 nanopowder being the most common nanofiller in industrial coatings [46,47]. Processed ES may be utilized to replace commercial CaCO3 without reducing coating quality [48,49]. Notably, eggshells must be carefully cleaned before being used as fillers; otherwise, untreated eggshells with various components often alter the flame retardancy of polymers. There are two approaches to including eggshells as synergistic agents: direct addition, and addition after conversion.
(1) Direct addition as biofillers

Eggshell has been employed as a synergistic agent in coatings. Yew et al. [50-52] developed an effective intumescent fire-protective coating using eggshell powder as a new biofiller. Water-repellent properties, a homogeneous foaming structure, and adhesive strength were all advantages of the coating. A sufficient quantity of nanobiofiller increased fireproofing efficiency as well as the mechanical characteristics. Other shells might be utilized in coatings in place of eggshell as synergistic agents. Wang et al. [53] studied three different shell biofillers: eggshell, conch shell, and clamshell (CMS). They were cleaned, ultrasonicated, and pulverized before being applied to intumescent fire-resistant coatings. The study found that CMS had the highest synergistic impact and decreased the pHRR and THR by 23.1% and 32.2%, respectively, as shown in Figure 6. Eggshell can also be employed as a fire retardant in polylactic acid (PLA) composites, according to a study by Urtekin et al. [54]. They observed that raising the quantity of eggshell in the composites increased their Young's modulus, thermal stability, and char residue. Furthermore, the LOI was 34.5% with 10% eggshell in the composites, and the V-0 level was reached with eggshell in the IFR system for PLA. The addition of chicken eggshell to an intumescent flame retardant (IFR) dramatically decreased heat release and smoke formation, resulting in thermally stable and intumescent char during flaming, according to the research [55]. As demonstrated in Figure 7a,b, the pHRR and THR of the EP composites dropped by 42.2% and 35.3%, respectively.
In addition, eggshell can improve the mechanical characteristics of PP composites while also functioning as a fireproofing synergist. Younis et al. [57] developed a product using recycled waste polypropylene (WPP) and waste chicken eggshell (WCES) as biofillers. Adding 10 phr of WCES to WPP/WCES composites increased their tensile and flexural strength by 15% and 8%, respectively, compared to WPP composites. Adding magnesium hydroxide (Mg(OH)2) and WCES to the composites increased their tensile and flexural strength, suggesting that WCES and Mg(OH)2 collaborate to improve the composites' mechanical properties.

(2) Adding after conversion to calcium-containing compounds
The production of hydroxyapatite (E-HAP) uses heated eggshells to produce calcium oxide (E-CaO). Jirimali et al. [58] discovered that adding E-CaO/E-HAP to linear low-density polyethylene (LLDPE) considerably enhanced the thermal resistance and flame retardancy of the composites. Furthermore, the composite containing E-HAP nanopowder also stood out for its excellent mechanical characteristics.
Oualha et al. [59,60] developed a straightforward, quick, and inexpensive technique for converting chicken eggshell waste into lamellar calcium hydroxide particles (Ceg-Ca(OH)2). As shown in Figure 8a, the method dropped the pHRR by creating a dense char. As shown in Figure 8b, the addition of zinc borate reduced the pHRR even further while enhancing the quality of the char. They also created biomaterial calcium hydroxide nanoparticles (Ceg-Ca(OH)2) from collected eggshell, and magnesium hydroxide nanoparticles (Seaw-Mg(OH)2). As shown in Figure 8c, partial replacement of 40 wt% Seaw-Mg(OH)2 nanoparticles with Ceg-Ca(OH)2 resulted in significant fireproofing action and an 85.9% drop in the pHRR of the ethylene-vinyl acetate copolymer (EVA) composite. In brief, eggshell is high in calcium carbonate and, owing to its inherent qualities, can function as a synergist. However, more research is required to maximize its utilization in various products as well as to understand its long-term performance. Note: Loading ratio represents the flame retardant/eggshell ratio.
Bagasse
Bagasse, which is composed of lignin, hemicellulose, and cellulose, has the potential to enhance mechanical and physical characteristics, inhibit combustion, and minimize the usage of synthetic flame retardants [61]. Its academic value lies in its use in coatings and in flame-retardancy enhancement of composites, particularly EP [62-64] (Table 4). Note: Loading ratio represents the flame retardant/bagasse filler ratio.
EP. The addition of bagasse to EP was used as a synergistic agent to achieve a specific fire-resistance effect. Shen et al. [65,66] studied the use of agricultural waste bagasse as a synergistic agent to enhance the flame retardancy of EP. They combined bagasse@epoxy of triglycidyl isocyanurate (TGIC)@DOPO with EP to create an interpenetrating network (IPN) composite, which was found to be highly flame-retardant. In addition, Chen et al. [67] employed layer-by-layer (LbL) assembly to construct an ecologically friendly fire-resistant EP, demonstrating that the integration of 6BL@BF dropped the pHRR and THR by 64.6% and 13.2%, respectively, when compared to unprocessed bagasse (Figure 9). Moreover, applying chitosan/APP to the surface of the bagasse improved the connection between 6BL@BF and the matrix, which markedly increased the flexural and tensile strength. Coatings. Bagasse, a waste material, can be used in coating systems due to its high flame retardancy. Research by Zhan et al. [68] developed a waterborne intumescent fire-retardant coating using waste bagasse as a filler. As shown in Figure 10, the coating decreased its backside temperature from 397 °C to 223 °C and formed a deep char layer with 35.6% carbon content, making it more resistant to oxidation. Furthermore, the coating containing 2% bagasse fared remarkably well in both water-resistance and mechanical characteristics testing.
Banana Peel Powder (BPP)
Powdered banana peel is an agricultural byproduct made from discarded banana peels, which generate 30 million tons of waste annually [69]. There is a plentiful supply of raw material, and its use can lessen the environmental problems brought on by improper waste disposal [70]. BPP's main components are cellulose, hemicellulose, and lignin, all with numerous hydroxyl groups [71]. Its high carbon content makes it suitable for char-forming and flame-retardant additives in PLA and textiles [72] (Table 5). Note: Loading ratio represents the flame retardant/BPP ratio.
PLA. BPP can be used as a filler in flame-retardant PLA composites, according to a study by Kong et al. [73]. The composites were created using 5 wt% microencapsulated ammonium polyphosphate (MCAPP) and 15 wt% BPP. As shown in Figure 11a,b, the composites demonstrated better thermal resistance, self-extinguishing, and anti-drip properties, and a 10.5% reduction in pHRR. The composites were also conducive to the production of high-quality char in the solid phase and acted as fire retardants in the gas phase. Furthermore, Kong et al. [74] tested a new biobased flame retardant (PA-B) created from BPP and phytic acid (PA). The LOI climbed dramatically to 37.5% when 15.0 wt% PA-B was added to the PLA matrix, achieving the V-0 level in the UL-94 test, and dripping was greatly reduced.
Textile. Basak et al. [75] studied BPP, coconut shell extract (CSE), and pomegranate rind extract (PRE) as fire-resistant additives. According to the research, increasing the extract content enhanced the LOI of the treated textile. The burning speed of the PRE-treated textile was 18.29 mm/min, which was much lower than that of the CSE- and BPP-treated textiles. Furthermore, all treated textiles had an attractive natural color, and there was no detrimental influence on the tensile strength.
Oyster Shell Powder (OSP)
Oyster shell, a food waste consisting of 96% calcium carbonate, acts as a fire-retardant additive [76]. When decomposed, it produces CaO and CO2, which can extinguish fires by blocking oxygen access. Oyster shell powder is popular in composites, particularly when used in conjunction with flame retardants to enhance their flame retardancy.
The mixing of OSP with TPU was used as a synergistic agent to obtain excellent flame-retardant performance. Chen et al. [56,77,78] studied the synthesis of composites made from OSP and traditional flame retardants like ammonium polyphosphate (APP) and isopropyl titanate. The results showed that OSP and flame retardants effectively reduced smoke and heat release in TPU. A thick carbon layer emerged on the composite surface, preventing flame propagation and minimizing flammable gas production. As shown in Figure 12a,b and Table 6, the pHRR and THR decreased by 92.2% and 75.0%, respectively. Furthermore, OSP modification also enhanced the flame retardancy of the composites. The study also explored OS@MP, a flame retardant made from OSP and melamine polyphosphate (MP). The noncombustible gases created by OS@MP and the char developed on the composites increased the fire-resistant properties of TPU. The pHRR and THR of the samples with 10.0 wt% OS@MP were reduced by 90.4% and 48.7%, respectively. Note: Loading ratio represents the flame retardant/oyster shell powder ratio.
Waste Cooking Oil (WCO)
Waste cooking oil is an inexpensive, popular derivative of virgin oils [79], and is produced massively in China, with an average of 500 million tons produced annually [80]. Improper disposal of food waste poses a threat to the environment [81,82], prompting researchers to focus on the efficient treatment and use of cooking oil [83-85].
Recent research has explored the use of WCO as a potential raw material. Asare et al. [86] created a WCO-polyol with a suitable hydroxyl number and the ability to form RPUF. They increased flame retardancy by blending in dimethyl methyl phosphonate (DMMP) or expandable graphite (EG) at higher concentrations. The results of the study demonstrated a considerable enhancement in fire resistance, with the WCO-based RPUF igniting in 93 s and losing 46.0% of its weight, as illustrated in Figure 13a,b. As demonstrated in Figure 13c, the addition of 10.7 wt% DMMP decreased ignition time and weight loss to 8.5 s and 3%, respectively, while 16.7 wt% EG lowered ignition time and weight loss to 12.5 s and 5.0%, respectively. In summary, WCO was processed and used in combination with flame retardants to produce a flame-retardant composite. As a result, it has massive application potential.

2.2.6. Fish Scales and Fish DNA

Fish scales (FSs), which are composed of collagen and hydroxyapatite, are a biological flame retardant owing to their capacity to emit nonflammable gases when burned [87]. These properties reduce the flammability of materials, making them a potential alternative to traditional fire-retardant additives [88]. DNA, a naturally occurring and ecologically benign fire retardant, is made up of a sodium phosphate backbone, deoxyribose components, and hydrogen-bonded nucleobases. Researchers have used fish scales and DNA in EP composites to replace hazardous phosphorus- or halogen-based additives, making them a promising alternative to traditional additives [89] (Table 7). FSs can enhance the fire resistance of composites as a synergistic agent. By adding FSs to APP, Liu et al. [90] found that the LOI rose from 21.2% to 36.2% and the UL-94 rating improved from fail to V-0. This indicated that the composite was less likely to catch fire at high temperatures. Furthermore, Zabihi et al. [91,92] employed fishing-industry waste DNA to modify the structure of clay. A thicker carbon layer led to a considerable drop in THR and pHRR and a rise in tensile strength. In addition, they modified graphene nanomaterials using DNA waste from the fishing industry. They discovered that adding only 10% of the additives enhanced the LOI by 86%, 80%, and 61% in EP, PVA, and PS composites, respectively, as well as achieving a V-0 rating in the UL-94 test. This "multilayer" char residue synergistically enhanced the flame retardancy of the polymer nanocomposites.
Coffee Grounds
Coffee grounds, a biodegradable and ecologically benign substance, have been reported to be a rich source of industrially essential sugars and polyphenols [93,94]. The notion of recycling them as polymer fire-resistance fillers is likely to gain attention [95].
Chemical modification of coffee grounds can enhance the flame retardancy of composites. Vahabi et al. [96] developed highly efficient flame-retardant fillers from spent coffee grounds (SCGs) chemically modified with phosphorus (P-SCG), resulting in a 39.2% decrease in pHRR and an 11.8% decrease in THR. Coffee grounds can also improve the mechanical characteristics and fire resistance of composites. Nguyen et al. [97] studied EP composites containing SCGs and characterized their mechanical properties: the inclusion of SCGs enhanced the composites' tensile strength, flexural strength, impact strength, and compressive strength. In addition, when combined with glass fiber (GF), SCGs raised the LOI of the composite while simultaneously decreasing the burning rate in the UL-94 HB test.
Plant Waste
The amount of plant waste, such as rice husk, cellulose nanofibers, and lignin, is increasing as living standards rise. Because these wastes contain valuable flame-retardant constituents, their potential use as fire-resistant fillers in different polymers is being explored.
Rice Husk (RH)
RH is the hull that protects rice grains or seeds. It is rigid, insoluble in water, and has a high silica content. Because of these fireproof constituents, RH has significant potential in polymers and is employed in a variety of composites (Table 8).
EP. RHs with chemical modifications can enhance the flame retardancy of composites. Krishnadevi et al. [98,99] found that functionalizing RHs improved composite flame retardancy: amine-terminated cyclophosphazene- and 3-aminopropyltrimethoxysilane-functionalized rice husk ash (RHA) made EP composites more fire resistant. The combination of phosphorus and nitrogen in the phosphazene ring with the silica in RHA markedly improved the pHRR, THR, and LOI of the EP composites, which attained a V-0 rating in the UL-94 test. Unmodified RH in EP composites also provided significant flame retardancy. Kavitha et al. [100] studied the thermal stability and flame-retardant characteristics of an EP composite reinforced with RH; the composite with 11.0 wt% RH showed improved thermal stability and attained a V-0 rating in the UL-94 test. Xu et al. [101] studied the use of magnesium phytate (Mg-Phyt) as a biobased flame retardant and found that combining Mg-Phyt with RHA enhanced its flame retardancy: a silica-rich char with excellent thermal stability was produced, decreasing heat release into the EP matrix and flammable-gas emissions.
PP. Schirp et al. [102] discovered that adding RH to a PP matrix lowered the heat emission rate, resulting in a decline in the pHRR and THR of the composites. Furthermore, Almiron et al. [103] discovered that when volcanic ash and RHA were combined with PP, they boosted the fireproof capabilities of PP, resulting in a decrease in the pHRR and THR of the PP composites.
PLA. Researchers have used chemical modification techniques to study the effect of RH on the flame-retardant properties of PLA composites. Yiga et al. [104] found that modified RHs surpassed unmodified RHs in flame-resistant fiber-reinforced PLA composites. Tipachan et al. [105] established a synergy between a layered double hydroxide (PKL_DS), rice husk ash silica (SiRHA), and a blend of the two particles that significantly improved PLA's fireproof capability. PLA nanocomposites with 10 wt% PKL_DS and 5 wt% SiRHA had an LOI of 32.8% and achieved a V-0 rating in the UL-94 test with anti-dripping behavior.
EVA. Matta et al. [106] investigated three types of biochar: soft wood, oilseed rape, and RH. They mixed the biochars at concentrations of 15%, 20%, and 40% into an EVA copolymer. The results showed a decrease in pHRR and THR with increasing additive content. The pHRR and THR of EVA composites with 40% RH decreased by 70% and 21%, respectively, as shown in Figure 14a.
HDPE. Zhao et al. [107] found that adding RH to polymer composites decreased their flammability. The addition of RH delayed thermal oxidation by 40 °C and provided a flame-retardant effect. HDPE composites with 70% RH exhibited a 65.8% decrease in pHRR and a 22.7% decrease in THR, as shown in Figure 14b.
PU foam. The thermal stability, flame retardancy, and mechanical characteristics of RH-reinforced PU foams were examined by Phan et al. [108]. They observed that RHs increased flame retardancy and reduced smoke generation, resulting in a 34.1% decrease in the pHRR of the composite, as shown in Figure 14c.
Coating. Nasir et al. [109,110] studied the combustion and thermal stability of an intumescent coating system using rice husk ash (RHA), eggshell, TiO2, and Al(OH)3. They found that incorporating RHA and TiO2 into a waterborne intumescent coating improved fire resistance by reducing the HRR and the heat of combustion. Moreover, Abdullah et al. [111] found that increasing the RHA content increased porosity and surface roughness and played a crucial role in the creation of an intumescent char, as shown in Figure 15. (Note to Table 8: the loading ratio is the flame retardant/rice husk ratio.)
Cellulose Nanofibers (CNFs)
Cellulose nanofibers, a sustainable, high-volume form of cellulose with diameters ranging from 10 to 100 nm and lengths ranging from a few to tens of micrometers, are gaining interest from researchers and industry due to their abundance, sustainable nature, and excellent mechanical characteristics, and they can be used in flame-retardant composites [112-116]. After surface treatment, CNFs may be employed as flame-retardant additives in a range of composites (Table 9). (Note to Table 9: the loading ratio is the flame retardant/flame-retardant additive ratio.)
RPUF. After surface treatment, CNFs can enhance fire resistance in RPUF composites. Członka et al. [117] discovered that 2% eucalyptus fiber treated with maleic anhydride, alkali, and silane surface modifications enhanced the mechanical and thermal characteristics of RPUF, as shown in Figure 16. The silane-treated fibers improved the mechanical characteristics of the RPUF composites, and the pHRR and TSR of the composites were reduced.
PLA. Suparanon et al. [118] found that CNFs can enhance the flame retardancy of PLA composites after surface treatment. They extracted microcrystalline cellulose (MCC) from oil palm empty fruit bunches (OPEFB) and used it as a polylactide composite additive. The synergistic effect of tricresyl phosphate (TCP) and the OPEFB-derived MCC (OPMC) improved the composites' impact strength and flame retardancy: the composite with the additive had an LOI of 38.5% and obtained a V-0 rating in the UL-94 test. In addition, Feng et al. [119] studied phosphorus-nitrogen-based polymers on CNFs and developed PN-FR@CNF, a nonflammable system. When 10 wt% PN-FR@CNF was added to PLA composites, they attained a V-0 rating in the UL-94 test, their pHRR decreased, and their tensile strength improved. The study also showed that modifying CNFs can improve the mechanical characteristics of composites. Furthermore, Yin et al. [120] combined CNFs with green additives to generate APP@CNF, an environmentally friendly fire-retardant additive. The composite containing 5 wt% APP@CNF passed the V-0 rating in the UL-94 test and had an excellent LOI of 27.5%. It also reduced pHRR and THR by 13.6% and 19.3%, respectively, while increasing the impact strength from 7.63 kJ/m² to 11.8 kJ/m².
Lignin
Lignin, which is plentiful in nature and widely distributed in plant-supporting tissues such as wood and bark, has tremendous promise as an ecologically benign flame-retardant resource owing to its high carbon content and its multifunctional groups [121-123], as shown in Table 10.
EP. Ding et al. [124] found that straw lignin can be used as a partial replacement for bisphenol A in EPs, resulting in excellent thermal stability. In addition, Dai et al. [125] investigated modified-biomass lignin with high smoke-suppression capability. A Lig-F/EP composite with high phosphorus content achieved the best flame retardancy, obtaining a V-0 rating in the UL-94 test and reducing pHRR and smoke generation by 46.6% and 52.8%, respectively, as shown in Figure 17.
PLA. The flame retardancy of lignin can be enhanced by grafting modification. Yang et al. [126] produced lignin-derived multifunctional bioadditives (TP-g-lignin) by grafting a phosphorus/nitrogen-containing vinyl monomer (TP) onto lignin. The addition of 5 wt% TP-g-lignin to PLA achieved a V-0 rating in the UL-94 test. Furthermore, Liu et al. [127] investigated a lignin-derived flame retardant made by grafting polyphosphoramide onto lignin. The composite with 8 wt% of the lignin-derived additive obtained an LOI of 25.8%, a V-0 rating in the UL-94 test, and an 8.4% reduction in THR.
PP. Liu et al. [128] investigated a biobased flame retardant derived from conventional lignin grafted with P, N, and copper components for wood-plastic composites. They discovered that the functionalized lignin (F-lignin) was more efficient than unmodified lignin (O-lignin) in enhancing thermal stability and flame retardancy. F-lignin slowed combustion, decreased heat release, and lowered smoke generation. PP composites containing 5 wt% F-lignin showed a 9% reduction in pHRR and a 25% reduction in THR.
PA. In their study of the flame retardancy of polyamides (PAs) using kraft lignin and its synergistic effect with APP, Cayla et al. [129] discovered that kraft lignin slowed the thermal decomposition of the PA composites and lowered pHRR by 66.0% compared with pure PA. (Note to Table 10: the loading ratio is the flame retardant/lignin ratio.)
Other Wastes
In addition to the three types of waste mentioned above, many other wastes contain valuable flame-retardant components and could therefore be applied in the field of polymer flame retardants [130]. Wool and biochar compositions affect the flame retardancy of composites. Das et al. [131] investigated biochar and wool composites in conjunction with a halogen-free flame retardant. The results showed that the biochar-wool composites significantly lowered the pHRR, produced less smoke, and had a higher mass loss rate than pure PP; wool hybridization also improved the LOI. The pHRR and THR of the composites decreased by 73.3% and 9.0%, respectively. In addition, certain biobased wastes have shown positive outcomes in conventional intumescent flame-retardant coatings. Wang et al. [132] investigated a conch shell biofiller (CSBF), created by washing, ultrasonic treatment, and pulverization of conch shell, for use in waterborne intumescent flame-retardant coatings. The pHRR and THR decreased by 24.8% and 29.6%, respectively, compared to a reference sample; CSBF thus increased the coatings' heat stability and char-forming performance. Furthermore, the flame retardancy of composites can also be increased by adding waste foam that is intrinsically flame resistant. Wang et al. [133] investigated thermoset polymer foam waste by pulverizing melamine formaldehyde (MF) foam, which has intrinsic fire resistance, and adding it as a flame-retardant filler to PUF. They discovered that introducing MF foam powder can greatly lower the HRR and combustibility of PU foam without sacrificing mechanical qualities. Natural fibers derived from renewable resources have relatively low manufacturing costs and are completely biodegradable, providing great benefits for the final qualities of the composites [134,135]. Sanchez-Olivares et al. [136-138] researched natural keratin fibers, coconut fibers, and agave fibers as fillers in thermoplastic starch-polyester; the resulting composites showed a good flame-retardant effect. Leather is among the most ancient and widely used materials worldwide, and feathers are rich in keratin fibers. Wrześniewska-Tosik et al. [139,140] studied combinations of elastic polyurethane (EPUR) with milled chicken feathers and found that composites containing feathers can have increased flame retardancy. Additionally, Battig et al. [141] investigated the use of leather waste (LW) as a filler in flame-retardant polymer composites; EVA composites incorporating LW had 53.0% lower pHRR than pure EVA.
Conclusions and Perspective
In summary, waste-based flame retardants have seen rapid development in recent years. Owing to the flame-retardant components present in certain industrial, food, and plant wastes, these materials have been employed in the development of flame-retardant polymeric materials. The studies reviewed here suggest that wastes could be promising alternative flame-retardant materials, particularly as flame-retardant additives or synergists, with the following main advantages.
(i) Reducing waste: By using waste materials as the source of additives for flame-retardant applications, we can reduce the amount of waste that goes to landfills or incinerators. This is important because waste disposal may cause serious environmental problems.
(ii) Cost-effective: Using waste materials to make composite flame retardants is an economical and promising approach, since raw waste resources are often less costly than virgin materials. The utilization of waste materials to make sustainable flame-retardant compounds would aid waste reduction and promote a circular economy.
(iii) Sustainable: Using waste materials to make flame-retardant substances is a sustainable technique that would help lessen our dependency on nonrenewable resources.
By using the aforementioned waste materials to produce useful flame retardants, researchers limit the quantity of waste sent to landfills while simultaneously developing more sustainable and ecologically friendly products. However, research on utilizing waste materials as flame retardants is still at an early stage. There are currently few mainstream waste-based flame retardants that achieve high flame-retardant efficiency when used alone. In addition, a current limitation of such flame retardants is that the introduction of these fillers often does not bring high value-added functions to the composites, such as improved mechanical performance. To further develop novel high-performance waste-based flame retardants, we propose the following.
(1) Developing possibilities for more types of waste utilization. More waste-based flame retardants should be produced and investigated. Agricultural wastes such as corn stalks can be converted into biochar, which has high flame-retardant potential, while other natural materials, e.g., cellulose-based wastes extracted from corn cobs and wheat straw, and lignocellulosic wastes such as sawdust and wood chips, can be used to produce sustainable flame retardants.
(2) Comprehensive analysis of the performance and efficacy of waste-derived flame retardants. In order to screen or develop waste-derived flame retardants with high performance, versatility, and significant economic value, researchers need to conduct a comprehensive cost-effectiveness analysis of candidate flame retardants, including a systematic evaluation of mechanical properties, cost, flame retardancy, and other possible value-added functions. It is worth pointing out that life cycle assessment (LCA) can serve as a practical and systematic method for evaluating such flame retardants. LCA is a well-established process documented in international guidelines (ISO 14040, ISO 14044). The socioeconomic and environmental consequences of the whole value chain of any waste-based flameproof product should be examined using LCA and recognized criteria.
Overall, sustainable flame retardants will play a critical role in achieving a balance between fire safety, cost, and environmental concerns. Ongoing research and development efforts in this area will be key to finding safe, effective, and sustainable solutions for waste-derived flame-retardant additives.
Figure 13. Diagrams depicting the influence of DMMP and EG on the horizontal burning of RPUF in (a) burning time and (b) reduction in weight after the ignition source was removed. (c) Images of WCO-polyurethane foams following horizontal ignition tests with DMMP and EG. Redrawn from [86]. Copyright (2022) American Chemical Society.
Table 1. Data of the composites with fly ash as fillers.
Table 2. Data of the composites with steel slag as fillers.
Table 3. Data of the composites with eggshell as flame-retardant filler.
Table 4. Data of the composites with bagasse as fillers.
Table 5. Data of the composites with banana peel powder as fillers.
Table 6. Data of the composites with oyster shell powder as fillers.
Table 7. Data of the composites with fish scales and DNA as fillers.
Table 8. Data of the composites with rice husk as fillers.
Table 9. Data of the composites with cellulose nanofibers as fillers.
Table 10. Data of the composites with lignin as fillers. | 7,987 | 2024-05-01T00:00:00.000 | ["Materials Science", "Environmental Science"] |
Five Methods of Exoplanet Detection
The study of exoplanetary systems can help us understand the formation and evolution of the solar system itself, search for terrestrial planets in habitable zones, and look for extraterrestrial life in exoplanetary systems. Exoplanets have become an important area of astrophysics in the last two decades. This paper reviews five different methods of detecting exoplanets: direct imaging, astrometry, radial velocity, transit event observation, and microlensing. These approaches can expand the sample of known exoplanets and further our understanding of the types, formation and evolution of exoplanets.
Introduction
Whether there are other Earth-like planets with life in the universe, and whether there are exoplanets at all, has long attracted the attention of researchers. It was not until 1995 that a paper published by Mayor & Queloz in Nature changed people's picture of exoplanets: through long-term measurement of the radial velocity of the Sun-like star 51 Peg, they discovered a planet with a minimum mass of about half that of Jupiter [1]. This discovery opened the era of the study of extrasolar planets. In addition, as early as 1992, Wolszczan and Frail had detected three planetary companions orbiting the pulsar PSR B1257+12 [2]. However, due to the extremely strong magnetic field of the host star, the surrounding planets are not suitable for harboring life like the Earth. Moreover, these planets may have formed from the remains of the precursor star after the explosion that formed the pulsar, rather than together with the precursor star or after it entered the main-sequence stage.
After just a few decades, by March 21, 2021, 4699 exoplanets had been confirmed (www.exoplanet.eu). Before the 2009 launch of the Kepler Space Telescope by the National Aeronautics and Space Administration (NASA), exoplanets were detected predominantly by the radial velocity method, while after the release of the data from the Kepler space survey, detection was immediately dominated by the transit event observation method. All this is attributable to the superb design of the Kepler Space Telescope, which has unprecedented pointing accuracy and photometric accuracy at the parts-per-million (ppm) level, ensuring the ability to detect terrestrial planets and even smaller exoplanets. The Transiting Exoplanet Survey Satellite (TESS), launched aboard a SpaceX Falcon 9 rocket on April 18, 2018, is the next step in searching for planets beyond our solar system. At present, extrasolar planet detection technology covers not only the pulsar timing method and the radial velocity method that detected the first exoplanets, but also the more recently developed gravitational microlensing method, transit event observation method, direct imaging method, and astrometry. These detection methods have been widely used and improved in the ongoing ground-based and space-based exoplanet search projects [3,4]. In the following sections, we describe each of these detection methods.
Direct Imaging Method
Direct imaging of an exoplanet usually means detecting the planet as a point source, formed by host-star light reflected off the planet, rather than resolving the planetary surface. When the radius of the planet is large and its orbit around the host star is relatively wide, adaptive optics combined with a coronagraph can be used to image exoplanets directly: in this case the planet is bright enough to be detected, and the host star is far enough from the planet for the telescope to separate the two. In 2004, Chauvin et al. detected the exoplanet 2M1207b, the first found with the direct imaging method, using the Very Large Telescope at Chile's Paranal Observatory [5] (as shown in Figure 1). Its mass and effective temperature were estimated as $5 \pm 2\,M_{Jup}$ and $1250 \pm 200$ K. The brightness ratio between the planet and the host star depends on the planet's size, the distance between the planet and the star, and the scattering characteristics of the planet's surface. At wavelength $\lambda$ this ratio is [6]

$\epsilon(\lambda, \alpha) = p(\lambda)\, g(\alpha)\, (R_p/r)^2$,

where $p(\lambda)$ is the geometric albedo, $g(\alpha)$ is the phase function of the planet at phase angle $\alpha$, $R_p$ is the planet radius and $r$ is the planet-star separation. This ratio is usually very small: for a Jupiter-like planet around a Sun-like star at 11 pc from us, with an angular separation of 0.52 arcsec, it is of order $10^{-8}$ to $10^{-9}$.
Figure 1. The CCD frame of 2M1207b [5].
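To give a feel for the numbers, the short sketch below evaluates this contrast for a Jupiter analogue. It is a minimal illustration, assuming a Lambert-sphere phase function for $g(\alpha)$; the function names and the adopted albedo, radius and orbital distance are illustrative choices, not values from the references.

```python
import math

AU = 1.496e11     # astronomical unit in metres
R_JUP = 7.149e7   # Jupiter radius in metres

def lambert_phase(alpha):
    """Lambert-sphere phase function g(alpha), with alpha in radians."""
    return (math.sin(alpha) + (math.pi - alpha) * math.cos(alpha)) / math.pi

def contrast(p, alpha, r_p, r_orb):
    """Planet-to-star flux ratio: epsilon = p(lambda) * g(alpha) * (R_p / r)**2."""
    return p * lambert_phase(alpha) * (r_p / r_orb) ** 2

# Jupiter analogue at 5.2 AU, geometric albedo 0.5, seen at quadrature (alpha = 90 deg)
print(f"contrast ~ {contrast(0.5, math.pi / 2, R_JUP, 5.2 * AU):.1e}")  # ~1e-9
```

The result, of order $10^{-9}$, is consistent with the range quoted above.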
Astrometry Method
Gatewood expounded the idea of using astrometry to detect exoplanets [7], but it was not until 2010 that Muterspaugh et al. detected an exoplanet, HD 176051b, by astrometry for the first time [8] (as shown in Figure 2). Astrometry detects planets by directly measuring the position of the host star as it is perturbed by the planet's gravity, so it can yield the planet's mass and orbital inclination [3]. The orbit of the host star around the center of mass of the planet-star system, projected onto the sky plane, is in general an ellipse whose angular semi-major axis $\alpha$ is [6]

$\alpha = \frac{M_p}{M_*}\,\frac{a}{d}$,

where $M_p$ and $M_*$ are the masses of the planet and the star, $a$ is the semi-major axis of the planetary orbit, and $d$ is the distance from the system to the observer. The observable $\alpha$ is directly proportional to the mass of the planet and to the semi-major axis of the orbit, and inversely proportional to the distance. It follows that observing at least one complete orbit is the key to the astrometric detection of exoplanets, and more periodic observations improve the confidence of the method.
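A quick numerical check of this proportionality is sketched below; the helper name is our own, and the example (a Jupiter-Sun analogue at 10 pc) is purely illustrative. With $a$ in AU and $d$ in pc, the ratio $M_p a / (M_* d)$ comes out directly in arcseconds.

```python
M_JUP_IN_MSUN = 9.543e-4  # Jupiter mass in solar masses

def astrometric_signature_uas(m_p_mjup, m_star_msun, a_au, d_pc):
    """Angular semi-major axis alpha = (M_p / M_*) * a / d, in microarcseconds.
    With a in AU and d in pc the expression is directly in arcseconds."""
    alpha_arcsec = (m_p_mjup * M_JUP_IN_MSUN / m_star_msun) * a_au / d_pc
    return alpha_arcsec * 1e6

# Jupiter around a solar-mass star seen from 10 pc: about 500 microarcseconds
print(f"{astrometric_signature_uas(1.0, 1.0, 5.2, 10.0):.0f} uas")
```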
Radial Velocity Method
So far, about a fifth of all known exoplanets have been discovered by the radial velocity method, which is best suited to planets with large masses and short periods. The essence of the method is to detect exoplanets by measuring the periodic variation of the radial velocity of the star around the center of mass of the system. Figure 3 shows the radial velocity curve of 51 Peg, host of the first exoplanet found around a solar-like star [1]. Currently, the detection accuracy of the radial velocity method exceeds 1 m/s, which permits the detection of short-period terrestrial planets around M-type dwarfs [9,10]. In general, the orbits of the planets are assumed to be elliptical, an assumption supported both by the orbits of the planets in our solar system and by current studies of other exoplanets. One can take the center of mass of the star and the planet as the origin and the major axis of the planet's orbit as the x-axis to establish a Cartesian coordinate system [6]. As shown in Figure 4, the position of the planet is

$(x, y) = (r \cos v,\; r \sin v)$,

where $r$ is the distance from the planet to the center of mass. From the equation of the ellipse, $r$ can be written as

$r = \frac{a(1 - e^2)}{1 + e \cos v}$,

where $v$, the true anomaly, is the angle between the line joining the planet to the center of mass and the direction of the orbital pericentre, $a$ is the semi-major axis of the planet's orbit and $e$ is its eccentricity. In addition, $v$ can be expressed as a function of time $t$ through Kepler's equation,

$M = E - e \sin E$, with $M = \frac{2\pi}{P}(t - t_p)$,

where $M$ is the mean anomaly, $E$ the eccentric anomaly, $t_p$ a reference time indicating when the planet passes through the pericentre, and $P$ the orbital period of the planet. In actual observations, the orbit one observes is the projection of the planet's orbit onto the celestial sphere. As shown in Figure 5, with the line of sight of the observer parallel to the z-axis and a Cartesian coordinate system built on the center of mass, the position of the planet can be characterized as [6]

$\bar{x} = r[\cos\Omega \cos(v + \omega) - \sin\Omega \sin(v + \omega) \cos i]$,
$\bar{y} = r[\sin\Omega \cos(v + \omega) + \cos\Omega \sin(v + \omega) \cos i]$,
$\bar{z} = r \sin(v + \omega) \sin i$,

where $\Omega$ is the longitude of the ascending node, $\omega$ the argument of pericentre and $i$ the orbital inclination.
Taking the time derivative of $\bar{z}$ gives the radial velocity of the planet; multiplying it by the mass ratio of the planet to the star, $M_p/M_*$, gives the radial velocity $V_r$ of the star around the barycentre of the system:

$V_r = V_0 + K\,[\cos(v + \omega) + e \cos\omega]$, with $K = \left(\frac{2\pi G}{P}\right)^{1/3} \frac{M_p \sin i}{(M_* + M_p)^{2/3}} \frac{1}{\sqrt{1 - e^2}}$,

where $V_0$ is the radial velocity of the barycentre and $G$ is the gravitational constant. Because $M_p \ll M_*$, the expression can be approximated and inverted, so that the minimum mass of the planet (often quoted in Earth masses) follows from the measured semi-amplitude $K$:

$M_p \sin i \approx \left(\frac{P}{2\pi G}\right)^{1/3} K\, M_*^{2/3} \sqrt{1 - e^2}$.

Figure 4. Schematic diagram of the orbit of an exoplanet and its corresponding parameters [11].
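As a worked illustration of the chain mean anomaly to eccentric anomaly to true anomaly to radial velocity, the sketch below solves Kepler's equation by Newton iteration and evaluates the RV curve. The function names are our own, and the parameters are merely 51 Peg b-like placeholders (P = 4.23 d, K = 56 m/s), not fitted values.

```python
import numpy as np

def solve_kepler(M, e, tol=1e-10):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton iteration."""
    E = M.copy()
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def radial_velocity(t, P, K, e, omega, t_p, v0=0.0):
    """Stellar RV curve: V_r = v0 + K * [cos(v + omega) + e*cos(omega)]."""
    M = np.mod(2.0 * np.pi * (t - t_p) / P, 2.0 * np.pi)   # mean anomaly
    E = solve_kepler(M, e)                                  # eccentric anomaly
    v = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),    # true anomaly
                         np.sqrt(1 - e) * np.cos(E / 2))
    return v0 + K * (np.cos(v + omega) + e * np.cos(omega))

# 51 Peg b-like placeholder: P = 4.23 d, K = 56 m/s, nearly circular orbit
t = np.linspace(0.0, 8.46, 200)
vr = radial_velocity(t, P=4.23, K=56.0, e=0.01, omega=0.0, t_p=0.0)
print(f"RV semi-amplitude ~ {(vr.max() - vr.min()) / 2:.1f} m/s")
```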
Transit Event Observation Method
More than 3000 exoplanets have been discovered by detecting the signals of periodic planetary transits. In this method, the observer, the planet, and the host star are close to collinear. When the planet passes between the observer and the host star, part of the star's light is blocked by the planet, and this blocking shows up in the photometric measurements of the star; this is called a transit event. Figure 6 shows the transit light curve of the first exoplanet observed by Charbonneau et al. using the transit method [12]. By analyzing the transit light curve, not only the radius of the planet but also the inclination of the orbital plane and the semi-major axis of the orbit can be obtained.
In the preceding section we obtained the dependence of the line-of-sight velocity on the orbital parameters and on the planet's mass. From the same relations, the geometry of the transit light curve follows directly. The $\bar{x}\bar{y}$ plane is the sky plane, and the distance of the planet's trajectory in this plane from the origin, multiplied by a factor $(1 + M_p/M_*)$ to convert from barycentric to relative coordinates, is the distance $r_{sky}$ from the center of the planetary disk to the center of the stellar disk [6]:

$r_{sky} = \left(1 + \frac{M_p}{M_*}\right) r \sqrt{1 - \sin^2 i\, \sin^2(v + \omega)}$.

For a transit to occur, the minimum value $r_{sky}^{min}$ must satisfy $r_{sky}^{min} < R_p + R_*$. Taking the derivative of $r_{sky}$ with respect to $v$ and setting it to zero gives the condition that locates this minimum; the resulting expression contains two terms, the second of which is proportional to the eccentricity.
For a transiting system the orbital plane lies close to the line of sight ($i \approx 90°$), so the eccentricity-dependent term in this condition is a very small quantity; in the actual fitting of transit light curves it can be ignored. Substituting the approximate result into the expression for $r_{sky}$ gives $r_{sky}^{min}$, which is conventionally expressed through the impact parameter $b$, the sky-projected distance between the transit chord and the center of the stellar disk in units of the stellar radius (Figure 7):

$b = \frac{a \cos i}{R_*}\, \frac{1 - e^2}{1 + e \sin\omega}$.

Because the duration of the transit satisfies $T_{14} \ll P$, one can reasonably assume that the planet moves at constant speed during the transit, so that the planet-star distance remains essentially unchanged. The light curve reaches its minimum when the projected separation equals $r_{sky}^{min}$, and the transit ends at the fourth contact of the planet and the star, where the projected planet-star distance equals $R_p + R_*$; the time between these two moments is exactly half of the total duration $T_{14}$. Combining these relations yields, for a circular orbit,

$T_{14} = \frac{P}{\pi} \arcsin\!\left[\frac{R_*}{a}\, \frac{\sqrt{(1 + R_p/R_*)^2 - b^2}}{\sin i}\right]$.

For a system composed of a Jupiter-like planet and a Sun-like star, $M_p/M_*$ is approximately 0.001; for a system composed of a red dwarf star and a Jupiter-like planet, it is approximately 0.01. In the subsequent calculations $M_p/M_*$ is therefore neglected, and the transit light curve is described by three model parameters [6]: the planet-to-star radius ratio $R_p/R_*$, the inclination $i$ of the planet's orbit, and the semi-major axis of the planet's orbit in units of the stellar radius, $a/R_*$.

Figure 7. The illustration of the transit light curve with its parameters [14].
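The sketch below turns these relations into numbers for a circular orbit; the hot-Jupiter-like inputs (P = 3.5 d, a/R_* = 8.8, R_p/R_* = 0.12, i = 87°) are hypothetical values chosen only for illustration.

```python
import math

def impact_parameter(a_over_rstar, inc_deg, e=0.0, omega_deg=90.0):
    """b = (a / R_*) * cos(i) * (1 - e**2) / (1 + e*sin(omega))."""
    i = math.radians(inc_deg)
    w = math.radians(omega_deg)
    return a_over_rstar * math.cos(i) * (1.0 - e**2) / (1.0 + e * math.sin(w))

def total_duration_hours(P_days, a_over_rstar, k, b, inc_deg):
    """T14 = (P / pi) * asin( sqrt((1+k)^2 - b^2) / (a/R_* * sin i) ), circular orbit."""
    i = math.radians(inc_deg)
    x = math.sqrt((1.0 + k)**2 - b**2) / (a_over_rstar * math.sin(i))
    return 24.0 * (P_days / math.pi) * math.asin(x)

k = 0.12                          # planet-to-star radius ratio R_p / R_*
b = impact_parameter(8.8, 87.0)   # ~0.46
print(f"depth ~ {k**2:.3%}, b = {b:.2f}, "
      f"T14 = {total_duration_hours(3.5, 8.8, k, b, 87.0):.2f} h")
```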
Gravitational Microlensing Effect
The gravitational lensing effect refers to the following configuration: when an observer, a lensing body and a background light source are close to collinear, in that order, the light of the background source is deflected on its way to the observer by the gravity of the lensing body, producing an effect similar to the focusing of a lens. If the lensing body is the host star of an exoplanet, the planet produces an additional, similar lensing effect on the light from the background source.
The gravitational lensing effect produced by planets is much weaker than that produced by other massive bodies and is called gravitational microlensing. Figure 8 shows the microlensing event of the exoplanet system OGLE-2005-BLG-390, a study which demonstrated the feasibility of microlensing for exoplanet detection [15]. This detection method can find planets of roughly Earth mass. A disadvantage of microlensing is that an event usually lasts only a few days to a few weeks and is essentially never observed again, so it cannot be cross-verified with other detection methods.
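For the simplest case of a single point lens and a point source, the magnification is $A(u) = (u^2 + 2)/(u\sqrt{u^2 + 4})$, where $u(t)$ is the lens-source angular separation in Einstein-radius units; a planet around the lens star appears as a brief anomaly superimposed on this smooth curve. The sketch below evaluates this standard Paczynski light curve with hypothetical event parameters.

```python
import numpy as np

def magnification(u):
    """Point-source point-lens magnification A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4))."""
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def pspl_light_curve(t, t0, tE, u0):
    """Paczynski curve: u(t) = sqrt(u0^2 + ((t - t0) / tE)^2), in Einstein radii."""
    u = np.sqrt(u0**2 + ((t - t0) / tE)**2)
    return magnification(u)

# hypothetical event: Einstein timescale 20 d, minimum impact parameter u0 = 0.1
t = np.linspace(-40.0, 40.0, 401)   # days relative to the peak
A = pspl_light_curve(t, t0=0.0, tE=20.0, u0=0.1)
print(f"peak magnification ~ {A.max():.1f}")   # ~10 for u0 = 0.1
```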
Conclusion
In this review, we survey the current methods of exoplanet detection: direct imaging, astrometry, radial velocity, transit event observation, and microlensing (we do not review pulsar timing). Compared with the other methods, we give a more detailed mathematical and physical description of the radial velocity and transit event observation methods, which have also yielded the vast majority of known exoplanets. With these methods, researchers have already found thousands of exoplanets, paving the way for further efforts to find habitable exoplanets. | 3,250.8 | 2021-01-01T00:00:00.000 | ["Geology", "Physics"] |
Personalized Dosimetry in Targeted Radiation Therapy: A Look to Methods, Tools and Critical Aspects
Targeted radiation therapy (TRT) is a strategy increasingly adopted for the treatment of different types of cancer. The urge for optimization, as stated by the European Council Directive (2013/59/EURATOM), requires the implementation of a personalized dosimetric approach, similar to what already happens in external beam radiation therapy (EBRT). The purpose of this paper is to provide a thorough introduction to the field of personalized dosimetry in TRT, explaining its rationale in the context of optimization and describing the currently available methodologies. After listing the main therapies currently employed, the clinical workflow for the absorbed dose calculation is described, based on works of the most experienced authors in the literature and recent guidelines. Moreover, the widespread software packages for internal dosimetry are presented and critical aspects discussed. Overall, a selection of the most important and recent articles about this topic is provided.
Introduction
Targeted radiation therapy (TRT) is one of the available options for the treatment of benign, malignant or inflammatory diseases [1]. TRT is based on the administration of a radiopharmaceutical, i.e., an agent composed of an α/β-emitting radionuclide usually bound to an active or inert vector molecule. Radiopharmaceuticals are mostly injected intravenously or through intra-arterial or oral access. TRTs can be divided into two subgroups, depending on how the radiopharmaceutical reaches the tumoral site: molecular radiation therapy (MRT) [2] and selective intra-arterial radiation therapy (SIRT) [3]. The former uses the natural tropism of a radionuclide (e.g., 131 I for the thyroid or 223 Ra for bone tissue) or a vector molecule (e.g., PSMA or DOTATATE) selected according to biochemical properties of the disease (e.g., overexpression of specific receptors), so that the radiopharmaceutical binds preferentially to the target cells. The latter, instead, exploits the tumor vascularization: the radiopharmaceutical is injected directly, in the form of microspheres, into the arterial circulation of the tumor, and no active molecule is needed. The damage to the targeted cells is produced by the emitted radiation and is expressed in terms of a physical quantity called absorbed dose, defined as the radiation energy absorbed per unit mass.
At present, especially in MRT, most hospital centers administer a fixed or weight-scaled amount of activity, regardless of the absorbed dose to the volumes of interest (VOIs) [4]. However, the same administered activity corresponds to different absorbed doses in normal organs and target tissues of different patients and also among various lesions of the same patient [5,6]. Moreover, since there is increasing evidence that the treatment outcome-both in terms of efficacy and toxicity-is related to the radiation released, i.e., the absorbed dose rather than to the administered activity [7], the fixed-activity approach can easily lead to undertreatment or overtreatment of patients. This plays against the principle of optimization required by the European Council Directive (2013/59/EURATOM) [8], according to which TRT treatment must be optimized for each patient, similar to what is already used in external beam radiation therapy (EBRT).
Dosimetry is particularly important for therapies aimed to treat cancer, which are rapidly developing and for which the risk-benefit ratio has still to be carefully evaluated. As a consequence, this paper focuses on those applications, ignoring therapies employed for the treatment of benign or inflammatory diseases.
In the context of TRT, treatment planning consists of prescribing the activity to administer based on threshold doses to organs at risk (OARs) and, if possible, to lesions [4]. The implementation of personalized dosimetric protocols is thus fundamental. Currently, threshold doses are extrapolated from the EBRT literature, for which normal tissue complication probability (NTCP) curves are available. However, proper optimization requires the development of radiobiological models specific to each TRT, since there is increasing evidence suggesting that the mechanisms of cellular response to low vs. high dose-rate exposures are different [9-11].
After the treatment, post-therapy dosimetry should also be performed in order to verify the delivered absorbed dose.
The European regulation directly impacts the work of medical physicists and nuclear medicine physicians, and the request for educational resources regarding internal dosimetry is particularly high. Arising from this need, the present paper focuses on the available methods and tools for personalized dosimetry in TRT and is addressed to physicists, medical physicists and everyone working in or approaching this field, with the aim to provide a general overview of this rapidly evolving and active discipline.
Radiopharmaceuticals in TRT
Different radiopharmaceuticals can be used in TRT, according to the treated disease [12]. Those most applied are summarized in Table 1.
The calculation of the absorbed dose requires the assessment of the activity distribution throughout the patient body over time. In some cases, the radionuclide used for the therapy itself presents an emission channel (e.g., γ, e+) or a paramagnetic component that can be used for tracking the substance with current imaging systems (planar scintigraphy, SPECT, PET, MRI). In this scenario, if the treatment consists of a single administration (e.g., 131 I for thyroid diseases, 131 I-MoAbs for lymphomas), dosimetry is usually performed before therapy by administering a tracer activity, i.e., a low amount of the radiopharmaceutical itself, taking care not to alter the target uptake of the subsequent therapy, i.e., avoiding the so-called "stunning effect" [13,14]. Furthermore, the difference in setup sensitivity when moving from diagnostic images at low activity to treatment images at high activity could in principle introduce errors in the assessment of the absorbed dose [15].
If multiple cycles are planned, and if the radionuclide is suitable, dosimetry can be performed during the therapy, at the first and/or subsequent cycles (e.g., 177 Lu-DOTATATE for neuroendocrine tumors, 177 Lu-PSMA for prostate cancer). Dosimetry performed only after a single cycle is usually associated with the hypothesis that the radiopharmaceutical biodistribution in subsequent cycles remains unaltered, but this approximation could introduce errors in the evaluation of the absorbed dose [15][16][17].
If the radionuclide is not suitable for dosimetry (i.e., presenting a lack or very low abundance of gamma or positron emissions), instead, other radionuclides with a physical half-life (T 1/2 ) compatible with the biological half-life of the vector and similar chemical properties represent an important option for provisional dosimetry in MRT (e.g., 111 In-DOTATOC/ 90 Y-DOTATOC for neuroendocrine tumors). In this scenario, the inability of the surrogate radiopharmaceutical to reproduce the exact pharmacokinetics of the therapeutic compound must be carefully evaluated [18].
In SIRT, since the effective half-life of the surrogate radiopharmaceutical is equal to the physical one, a proper distribution mimicking requires that the emitting compound comes in similar size and number of the therapeutic microspheres (e.g., 99m Tc-MAA for 90 Y radioembolization). In this particular case, furthermore, the high activity concentration of 90 Y in the tumoral regions during the treatment also allows performance of a post-therapy dosimetry with 90 Y-PET images, despite the extremely low positron channel branching ratio. The comparison of provisional and post-therapy dosimetry remains a matter of investigation [19,20], and strongly depends on the intra-arterial procedure and repeatability.
The lack or very low abundance of gamma emission represents a typical drawback of alpha emitter therapies such as 223 Ra, 225 Ac and 213 Bi-MoAbs and 225 Ac-PSMA, jeopardizing the feasibility or reliability of individual dosimetry. In order to overcome this problem, some authors have proposed extrapolations from the dosimetry using the same molecule and different radionuclides, i.e., assuming similar uptake and retention [21,22]. As a general rule, the reliability of the dosimetry information derived must always be assessed specifically for each type of radiolabeled molecule, since different radionuclides might influence the radiochemical stability and the biokinetics. This issue can be addressed in preclinical studies.
One Activity Does Not Fit All
A one-size-fits-all approach based on the administration of a fixed or weight-scaled amount of activity is usually adopted in the field of radiation therapy from internal sources. Clinical trials based on escalating administered activities are used to collect data on possible (especially early) toxicity and efficacy, from which standard activity amounts to administer are defined (e.g., SIRT with 90 Y-microspheres, 177 Lu-DOTATATE). However, different studies have demonstrated that the same administered activity corresponds to a wide range of absorbed doses to target and nontarget volumes, caused by the interpatient variation of metabolism (Table 2). In addition, since for an increasing number of therapies there is reason to believe that the absorbed dose is related to the treatment outcome (Table 3), the wide range of absorbed doses per injected activity suggests a risk of underdosing tumoral lesions and overexposing normal tissues in a portion of patients, meaning that the potential of the radiopharmaceutical is not fully exploited.
The choice of one-size-fits-all protocols was understandable in the early years of TRT, considering the simplicity of the method and the lack of studies regarding internal dosimetry, but it is now highly questionable. Although a number of questions have still to be investigated, most results in the literature suggest that the one-size-fits-all approach is inadequate and guarantees neither optimization nor the best standard of care for the patient [33].
How to Calculate the Absorbed Dose
The calculation of the absorbed dose to normal organs and target regions requires two major ingredients: (i) The activity distribution inside the patient body over time to assess, through time integration, the total number of disintegrations occurring in each region of interest (cumulated activity); (ii) The computation of conversion factors, or direct MC simulations, for changing cumulated activity into absorbed dose.
Therefore, this calculation should not be thought of as an isolated process but as part of a dosimetric workflow, the main steps of which are represented in Figure 1.
SIRT represents a particularly simple context for dose calculation. In fact, the microspheres reach the tumoral site through the liver vascularization and remain trapped in the lesion vessels; therefore, no metabolic kinetics are involved and imaging at multiple timepoints is not needed. As a consequence, some steps of the dosimetric workflow described in Figure 1 are not required for the dose calculation.
Activity Measurement
As a preliminary operation, the activity inside the vials for pre-therapeutic imaging or treatment has to be measured with a dose calibrator [51]. An accurate analysis of its sensitivity in response to different geometries is recommended, along with a proper calibration for each radionuclide of interest.
The activity measurement with the dose calibrator allows not only assessment of the administered activity by analyzing the vial residual, but also verification of the nominal value provided by the supplier and measurement of the activity inside phantoms for scanner calibration or image quantification.
Scanner Calibration
The first step in the dosimetric workflow is the determination of a calibration factor for the scanner, i.e., a factor for the conversion of the count rate (cps) into absolute activity (MBq). Different methods for the assessment of this factor have been proposed and are currently used [52]. The standard calibration procedure consists of preparing a radioactive sample of well-determined activity (measured with the dose calibrator) and detecting its count rate. The ratio between the count rate and the known activity gives the calibration factor.
The calibration factor obviously depends on the considered radionuclide, on the size and shape of the sample (point-source or active phantom) and on the imaging technique employed (planar scintigraphy or tomography). Furthermore, it is good practice to assess it periodically in order to detect any possible variation.
Patient Image Acquisition
According to the endpoint, the type of treatment and the machines available at the institution, different imaging protocols can be used to assess the activity distribution throughout the patient body: planar, hybrid and 3D protocols [53].
The planar protocol requires the acquisition of sequential whole-body 2D images. This approach does not allow reliable activity determination in overlapping structures, and it enables only the calculation of the mean absorbed dose to organs or lesions. As advantages, it is very fast and easy to implement, and it readily provides whole-body images, which are fundamental, e.g., in the presence of diffuse metastasis.
In order to overcome the limitations of the planar protocol and to determine the absorbed dose at the voxel level, i.e., to obtain dose maps, one additional SPECT/CT or PET/CT can be acquired at one of the timepoints to quantify the activity and then combine it with the activity variation vs. time derived from serial planar images (hybrid protocol) [54,55]. As a third option, multiple SPECT/CTs or PET/CTs can be used (3D protocol) for complete 3D information, although more than one bed per time might be required due to the limited field of view.
Activity Quantification
Once the images have been acquired according to one of the protocols noted above, the absolute activity inside each of the source regions, i.e., regions in which the activity cumulates significantly, has to be determined. In the case of SPECT or PET images, raw data acquired in projections have to be reconstructed. Different iterative algorithms have been developed and are currently used in clinical routine for this purpose [56]. Furthermore, since the imaging procedure inevitably introduces errors both in terms of loss and displacement of the signal, a set of corrections should be applied to the images in order to recover the true counts. Those corrections include attenuation, scatter, dead time, collimator-detector response and partial volume effect. Different methods are available for image correction, either for planar and tomographic images [57][58][59]. The same kind of image corrections should also be applied during the scanner calibration procedure.
Registration and Segmentation
Volume of interest (VOI) or region of interest (ROI) must be defined on a reference scan using manual, semiautomatic or automatic tools. Then, images at different timepoints are registered to the reference scan using rigid (translation and rotation) and/or elastic algorithms and segmentations are propagated. As an alternative, segmentation can be performed for each of the timepoint images.
For SPECT/CT or PET/CT acquisitions, the registration algorithm is usually applied to the CT scan series and then merged/fused with the SPECT or PET image, since the CT, with its higher spatial resolution, is less affected by possible registration errors. Although different and increasingly advanced methods have been developed, registration is still challenging, especially in the case of organ motion and variations in shape and size.
In SIRT, since in principle the kinetic process is not involved, imaging at multiple timepoints is not needed (only physical half-life applies) and registration refers only to multiple imaging modalities.
Time Activity Curve (TAC) Fit and Time Integrated Activity (TIA) Assessment
After segmentation and registration, the activity in the source regions at the different timepoints is known and the cumulated activity inside each source region can be determined. Analytical methods or linear interpolation (trapezoidal method) are commonly used to approximate the time-activity curve (TAC) between the first and last experimental timepoints. In the first case, sums of exponentials are often used as fitting functions for the time-activity curves $A_{r_S}(t)$ [60]:

$A_{r_S}(t) = \sum_j A_j(0)\, e^{-(\lambda + \lambda_j)\,t}$,

where $A_j(0)$ is the initial activity value of the jth exponential component, $\lambda$ is the physical decay constant related to the physical half-life $T_{1/2}$ of the radionuclide through $\lambda = 0.693/T_{1/2}$, and $\lambda_j$ is the biologic elimination constant corresponding to the biologic half-life $T_{1/2,j}$ ($\lambda_j = 0.693/T_{1/2,j}$) of the jth exponential component; $\lambda + \lambda_j$ represents the effective elimination constant. Constant, linear, or analytical fits are usually proposed for extrapolating the time-activity curve before the first experimental timepoint, whereas analytical fits or pure physical decay are options for extrapolating the curve after the last timepoint (Figure 2). Appropriate extrapolation of the curve in order to obtain the time-integrated activity (TIA) is crucial for dosimetry accuracy [61]. To avoid possible significant errors, the European Association of Nuclear Medicine (EANM) guidelines therefore recommend that the fractional contribution to the TIA from the extrapolations be less than 20% [62].
Integration of the TAC could be performed both at the organ, i.e., considering the mean activity inside a macroscopic source region, and at the voxel level, i.e., considering the mean activity inside a single voxel. In the first case, a single pharmacokinetics behavior, i.e., unique TAC parameters, is assessed for the whole region; in the second case, instead, different behaviors are determined for each voxel.
As with registration between different timepoints, TAC fitting and integration do not apply to SIRT.
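As a minimal illustration of this step, the sketch below fits a mono-exponential TAC (a single term of the sum above) to invented 177Lu kidney-VOI activities and integrates it to infinity numerically. The timepoints, activities and function names are hypothetical, and in practice the extrapolated fraction of the TIA should be checked against the 20% EANM criterion.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

LAMBDA_PHYS = np.log(2.0) / 159.5   # 177Lu physical decay constant (1/h), T1/2 = 6.647 d

def tac_model(t, A0, lambda_bio):
    """Mono-exponential TAC: A(t) = A0 * exp(-(lambda + lambda_j) * t)."""
    return A0 * np.exp(-(LAMBDA_PHYS + lambda_bio) * t)

# hypothetical activities (MBq) in a kidney VOI at 4, 24, 96 and 168 h post-injection
t_meas = np.array([4.0, 24.0, 96.0, 168.0])
a_meas = np.array([310.0, 240.0, 95.0, 40.0])

popt, _ = curve_fit(tac_model, t_meas, a_meas, p0=[350.0, 0.005])
tia, _ = quad(tac_model, 0.0, np.inf, args=tuple(popt))   # cumulated activity, MBq*h
print(f"A0 = {popt[0]:.0f} MBq, lambda_bio = {popt[1]:.4f} 1/h, TIA = {tia:.0f} MBq*h")
```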
Absorbed Dose Conversion
Different methods for converting the cumulated activities $\tilde{A}(r_S)$ into absorbed dose are available.
The MIRD formalism, developed by the Medical Internal Radiation Dose (MIRD) Committee of the Society of Nuclear Medicine (SNM), was the first dosimetric tool and allows the assessment of the mean absorbed doses in organs and tumors [63]. The fundamental equation of the MIRD approach is

$D(r_T) = \sum_{r_S} \tilde{A}(r_S)\, S(r_T \leftarrow r_S)$,

where $D(r_T)$ is the absorbed dose delivered to the target region $r_T$, $\tilde{A}(r_S)$ is the time-integrated activity (TIA) in the source region $r_S$ and $S(r_T \leftarrow r_S)$ is the mean absorbed dose to $r_T$ per unit activity present in $r_S$, called the S-value. The TIAs reflect the patient-specific biodistribution of the considered radiopharmaceutical, while the S-values are based exclusively on the physical features of the radionuclide selected for the treatment and on the characteristics of the target and source regions. They can be expressed as

$S(r_T \leftarrow r_S) = \frac{1}{m_{r_T}} \sum_i E_i\, Y_i\, \phi_i(r_T \leftarrow r_S)$,

where $E_i$ is the emitted energy (mean or individual) for the ith nuclear transition, $Y_i$ is the probability of the ith nuclear transition, $\phi_i(r_T \leftarrow r_S)$ is the fraction of $E_i$ emitted within the source tissue $r_S$ that is absorbed in the target tissue $r_T$ and $m_{r_T}$ is the mass of the target region. S-values for different radionuclides and source-target combinations have been calculated using MC codes and reference anthropomorphic computational phantoms with homogeneous density for each tissue and uniform activity inside each region. For tumors, a spherical model is provided with S-values for spherical regions of various volumes. However, as general phantoms cannot model the many diverse scenarios (i.e., number of tumors, site and volume), any cross-fire contributions arising from regions other than the tumor itself are set to zero.
Despite its strict assumptions of tissue homogeneity and uniform activity distribution, the MIRD approach is still diffused worldwide, due to its fast calculations and the possibility to use planar images in addition to SPECT/PET. In order to overcome its limitations, however, methods which consider the patient-specific activity distribution derived from functional images (SPECT or PET) have been developed [64]. Those methods provide, in addition to mean doses to target organs and lesion, dose maps and dose-volume histograms (DVHs).
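A minimal sketch of the organ-level MIRD sum follows. The S-value matrix and the TIAs below are invented placeholders (real S-values come from MC calculations on reference phantoms), so only the structure of the calculation should be read from it, not the numbers.

```python
import numpy as np

regions = ["kidneys", "liver", "total body"]

# hypothetical S-value matrix, mGy / (MBq * h); rows = targets, columns = sources
S = np.array([
    [3.0e-1, 2.0e-4, 1.0e-4],   # dose to kidneys per unit TIA in each source
    [2.0e-4, 7.0e-2, 1.0e-4],   # dose to liver
    [1.0e-4, 1.0e-4, 5.0e-3],   # dose to total body
])

tia = np.array([1.1e4, 2.5e4, 2.0e5])   # hypothetical TIAs, MBq * h

# D(r_T) = sum over r_S of A~(r_S) * S(r_T <- r_S)
dose_gy = (S @ tia) / 1000.0
for name, d in zip(regions, dose_gy):
    print(f"{name}: {d:.2f} Gy")
```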
The most accurate technique for absorbed dose assessment, at least theoretically [65], is a direct Monte Carlo simulation of radiation transport, which can account for both nonuniform activity and heterogeneous tissues (Figure 3). In this approach, the activity map is used to sample the decay location and the transport of the radiation emitted from those sites into the patient-specific geometry derived from the CT, which straightly allows for the calculation of the energy deposited in each voxel. MC simulations require numerous parameters and the computation time can be intensive, depending on the number of primaries selected for the simulation and on the hardware specifics. The MC codes most used for internal dosimetry are GATE/Geant4 [66], EGSnrc [67], MCNPX [68,69] and Fluka [70].
After MC, convolution with dose-point kernels (DPK) or voxel S-values (VSV) is in principle the most accurate method for dose calculation at the voxel level. A DPK represents the mean absorbed dose per transition (mGy/MBq/s) at a given radial distance from an isotropic point source located within a homogeneous, infinite medium (typically water). DPKs are continuous functions and, in order to be used for voxel dosimetry, they must be discretized and adapted to the voxel geometry. This can be done either by considering source and target voxels as collapsed to the voxel centroid or as entire volumes; in the latter case, a multidimensional integration of DPKs over the source and target voxels must be performed. The convolution with the cumulated activity map results in a dose map.
The VSV approach was introduced by MIRD Pamphlet No. 17 and is the voxel-level analogue of the organ-level MIRD formalism. Nothing in principle prevents the MIRD schema from being applied to smaller volumes, i.e., sub-organs or even cells, provided the resolution of the PET or SPECT images is adequate. The main equation is therefore a generalization of (2):

D(voxel k ) = Σ h = 1..N Ã(voxel h ) S(voxel k ← voxel h ),

where voxel k is the target voxel and voxel h is one of the N source voxels. VSVs are calculated with a direct MC simulation of radiation transport in a homogeneous, infinite medium discretized into voxels. The convolution of those factors with the activity map gives the 3D dose distribution.
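A minimal sketch of the voxel-level convolution is given below; an illustrative analytic 7 × 7 × 7 kernel stands in for an MC-tabulated VSV set, so only the convolution step itself reflects the actual method.

```python
import numpy as np
from scipy.ndimage import convolve

# Hypothetical cumulated-activity map (MBq s per voxel)
tia_map = np.random.default_rng(1).random((64, 64, 64))

# Illustrative 7x7x7 "VSV" kernel (mGy per MBq s), peaked at the source voxel
# and decaying with distance; real kernels are tabulated from MC simulations.
z, y, x = np.mgrid[-3:4, -3:4, -3:4]
r = np.sqrt(x**2 + y**2 + z**2)
vsv = 1e-4 * np.exp(-2.0 * r)

# MIRD Pamphlet No. 17 schema at the voxel level: dose map = TIA map (*) VSV
dose_map = convolve(tia_map, vsv, mode="constant", cval=0.0)
```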
VSVs have been calculated for different voxel sizes, radionuclides and media [64,71]. For the most used radionuclides, VSV kernels are usually sized 7 × 7 × 7 or 11 × 11 × 11 voxels. To overcome the limitations of VSV tabulations due to voxel size and shape, other methods have been developed, such as the fine-resolution and resampling method developed by Dieudonné et al. [72], the analytical model by Amato [73] or the DPK integration [74].
The assumption of homogeneous tissue composition in DPK and VSV calculations may lead to substantial errors in dose distributions when regions of the body with high tissue heterogeneity are considered (e.g., air-tissue or bone-tissue interfaces). In those cases, direct MC simulations can be used.
Finally, the local energy deposition method (LDM) assumes that all the energy is absorbed in the source voxel. Consequently, the LDM can be used when the voxel dimension is greater than the radiation range, typically for alpha and short-range beta emitters. For photons or high-energy beta emitters with ranges larger than the voxel dimensions, the convolution of VSVs or a direct MC simulation should, at least theoretically, be more appropriate. However, due to the limited resolution of SPECT and PET images, the LDM might still be preferable in some cases, as VSV convolution is considered to cause a further blurring of the images, while the longer computation time and complexity of MC might not be justified [65,75,76].
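Under its assumption, the LDM reduces to a voxel-wise scaling, as in the following sketch (voxel size, density and energy per decay are assumed values):

```python
import numpy as np

# Local energy deposition: all emitted energy is absorbed in the source voxel.
tia_map = np.random.default_rng(2).random((64, 64, 64))   # MBq s per voxel (hypothetical)
energy_per_decay_J = 0.133 * 1.602e-13    # mean energy per decay in joules (illustrative)
voxel_mass_kg = 1000.0 * (4.0e-3) ** 3    # water density x (4 mm voxel) volume

# 1 MBq s corresponds to 1e6 decays; dose in Gy = J / kg
dose_map = tia_map * 1e6 * energy_per_decay_J / voxel_mass_kg
```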
Dose-Rate Integration
As an alternative to activity integration, time integration of the dose rate can also be performed. In this case, activity maps at each time point are converted into dose-rate maps using one of the methods described above, and the absorbed dose is obtained by integrating the dose rate (Figure 4). Although no studies have yet investigated this point, absorbed dose values obtained by time-integrating the absorbed dose rate appear more reliable than those obtained through integration of the TAC. One possible reason is that the dose rate, being normalized by mass, can be thought of as a smoother function and is thus easier to integrate. In addition, activity integration assumes that the VOIs in which the activity is evaluated are the same at each time point, but patient movement, organ deformation and the low resolution of the images make this assumption unrealistic. By contrast, because the dose rate is already divided by the mass, it is insensitive to possible variations of the VOI volumes in the serial images. Finally, from a radiobiological perspective, the absorbed dose rate is probably a more significant dosimetric quantity than the absorbed dose to associate with the radiation effect in therapy. It therefore seems more reasonable to calculate the absorbed dose rate rather than the cumulated activities. This issue is presently under investigation.
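A possible numerical implementation of the dose-rate integration is sketched below: trapezoidal integration between imaging time points, with an exponential tail governed by physical decay beyond the last scan. The time points, the maps and the choice of tail model are assumptions made for illustration.

```python
import numpy as np

t = np.array([4.0, 24.0, 96.0]) * 3600.0                        # imaging times (s), hypothetical
rate = np.random.default_rng(3).random((3, 32, 32, 32)) * 1e-6  # dose-rate maps, Gy/s

half_life = 6.65 * 24 * 3600.0        # e.g. 177Lu physical half-life (s)
lam = np.log(2.0) / half_life

dose = np.trapz(rate, x=t, axis=0)    # piecewise-linear part between scans
dose += rate[-1] / lam                # tail: integral of rate[-1] * exp(-lam * (t - t_last))
```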
Uncertainties
Each step of the dosimetric workflow described above is associated with a specific uncertainty (e.g., uncertainty in the volume determination, number of counts, calibration factor, fitting parameters and S factors), all of which combine in a complex manner to determine the global uncertainty of the absorbed dose itself. The EANM provided guidelines for calculating the absorbed dose uncertainty based on the law of propagation of uncertainties [77]. Following those indications, Finocchiaro et al., e.g., estimated the uncertainty associated with the tumor absorbed dose in patients treated with 177 Lu-DOTATATE, which exceeded 100% [78].
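For a dose expressed as the product D = Ã · S, the law of propagation reduces, for uncorrelated inputs, to adding relative variances in quadrature; the sketch below uses hypothetical uncertainty values.

```python
import numpy as np

# First-order propagation for D = TIA * S, assuming uncorrelated inputs.
tia, u_tia = 1.2e4, 0.25 * 1.2e4   # MBq h and its standard uncertainty (hypothetical)
s, u_s = 1.1e-1, 0.10 * 1.1e-1     # mGy/(MBq h) and its uncertainty (hypothetical)

d = tia * s
u_d = d * np.sqrt((u_tia / tia) ** 2 + (u_s / s) ** 2)
print(f"D = {d:.0f} +/- {u_d:.0f} mGy ({100 * u_d / d:.0f}%)")
```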
Although an uncertainty analysis is not currently included in the clinical routine, its practice would help to identify and reduce errors, and make data collected in different centers comparable [79].
The Potential Role of Artificial Intelligence in TRT
Artificial intelligence (AI) is increasingly applied in medical physics [80], including internal dosimetry [81,82]. For TRT, the most significant application of AI is the development of fully automated segmentation tools [83][84][85], which not only reduce the overall time required for the dosimetric evaluation but also reduce the user-dependent variability of the absorbed dose estimates.
The involvement of AI in the other steps of the dosimetric workflow (e.g., image acquisition, reconstruction, registration and absorbed dose conversion) is very challenging and currently under investigation. Some authors used AI to develop density-specific S-values, able to overcome the limitation of standard VSVs calculated in homogeneous media, with the aim of enhancing dose calculation accuracy [86,87]. Other groups studied the accuracy of AI methods for converting activity images directly into dose-rate maps [88,89]. Finally, the possibility of predicting the absorbed dose starting from diagnostic images was also investigated [90]. The validation of this method, however, would require a consistent number of training datasets for different therapies and clinical situations, which are not yet available or adequately collected. The assessment of robust and more standardized dosimetry methods, a priority and the focus of most present efforts by the internal dosimetry community, will certainly open the way to the implementation of AI in TRT dosimetry.
Software Packages for Internal Dosimetry
Many software packages for internal dosimetry have been developed over the last years, either homemade or commercially available. Most are meant for research purposes only, but some hold Food and Drug Administration (FDA) approval and/or the conformité européenne (CE) marking and are intended for clinical use. Different packages may address different parts of the dosimetric workflow: some only provide tools for converting TIA into absorbed dose, others include registration and segmentation, and others also allow quantification of the activity, including algorithms for image reconstruction and correction.
Software packages can basically be divided into two groups: a first generation that performs dosimetry at the organ level, i.e., providing only mean doses to organs and lesions, and more sophisticated, recent packages that perform dosimetry at the voxel level, which also provide a dose map and DVHs. In the first group, the activity distribution inside the source regions is assumed to be uniform, whereas the programs of the second group use the patient activity distribution in SPECT or PET images to derive the dose map.
Organ Level or Phantom-Based Software Packages
Software packages at the organ level use the traditional MIRD scheme and assume homogeneous tissue density and composition and a uniform activity distribution inside each source organ or region of interest. Libraries with specific absorbed fractions (SAFs) and the S-values previously calculated with MC codes for various anthropomorphic phantoms [91][92][93] are loaded into the software, along with radionuclide decay data [94,95]. The user needs to select the radionuclide and the phantoms of interest and to provide the time-integrated activity coefficients (TIACs), i.e., the TIA divided by the administered activity, for each of the source organs. Output data are the average absorbed doses and/or the equivalent doses per unit administered activity for each of the target organs.
Anthropomorphic phantoms incorporate reference data [96] reproducing the average characteristics of a population. Most of the available software packages, however, include the possibility of taking a first step toward personalization by correcting the absorbed dose for the patient-specific organ masses [97]. A further step in this direction is provided by MIRDcalc, which employs weight-based phantoms accounting for different geometries. In such phantoms the organ masses and S-values are linearly scaled from the two reference phantoms (ICRP phantoms) closest by mass to the patient (https://mirdsoft.org/mirdcalc, accessed on 12 December 2021) [98,99].
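The two mass-based corrections mentioned here can be sketched as follows; the 1/mass rescaling is strictly justified only for the locally absorbed (electron) self-dose component, and all numbers are hypothetical.

```python
def scale_self_dose_s(s_phantom, m_phantom, m_patient):
    """Patient-specific self-dose S-value via the common 1/mass rescaling,
    valid for the locally absorbed (electron) component only."""
    return s_phantom * m_phantom / m_patient

def interp_s_by_weight(w_patient, w_lo, s_lo, w_hi, s_hi):
    """Linear interpolation of an S-value between the two reference phantoms
    closest in body weight, in the spirit of weight-based phantom sets."""
    f = (w_patient - w_lo) / (w_hi - w_lo)
    return (1.0 - f) * s_lo + f * s_hi

# Hypothetical numbers: 0.30 kg phantom kidney vs. 0.25 kg patient kidney,
# and a 68 kg patient between 60 kg and 73 kg reference phantoms.
print(scale_self_dose_s(1.1e-1, 0.30, 0.25))
print(interp_s_by_weight(68.0, 60.0, 2.4e-3, 73.0, 1.9e-3))
```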
Since tumors are not included in anthropomorphic phantoms, S-factors for isolated (i.e., considering only self-dose) unit density spheres of different volumes have been calculated and loaded into most of the software packages.
In Table 4 the most widely used software packages for dosimetry at the organ level are summarized, along with their main characteristics. Among the software packages mentioned above, OLINDA/EXM [102,103] and 3D-RD-S (http://rapiddosimetry.com, accessed on 12 December 2021) are the ones providing a tool to perform kinetic data analysis. OLINDA/EXM allows the user to enter kinetic data and fits them with a multiexponential model to derive the time-integrated activity. Parameters describing the goodness of the fit are not provided, and the accuracy of the residence time estimation is left to the user's expertise. 3D-RD-S was recently developed and offers different options for fitting and integrating the TAC, providing parameters about the goodness of the fit and uncertainties.
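The kinetic-fitting step can be illustrated with a mono-exponential example on hypothetical data; multiexponential fitting, as offered by these packages, generalizes this to a sum of exponentials whose analytic integral gives the TIA.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([4.0, 24.0, 96.0, 168.0])    # hours post-injection (hypothetical)
a = np.array([150.0, 120.0, 45.0, 18.0])  # measured activity in the organ, MBq

model = lambda t, a0, lam: a0 * np.exp(-lam * t)   # mono-exponential TAC model
(a0, lam), _ = curve_fit(model, t, a, p0=(150.0, 0.01))

tia = a0 / lam   # analytic integral of the fit from 0 to infinity, MBq h
print(f"A0 = {a0:.1f} MBq, lambda = {lam:.4f} 1/h, TIA = {tia:.0f} MBq h")
```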
A new free software package at the organ level is being developed by the OPENDOSE collaboration [104], meant for diagnostic applications and radiation protection and presently under validation. As a novelty, it allows the user to choose among different radiation spectra, different phantoms (with associated radiation protection parameters) and different SAF values, including those provided by OPENDOSE itself with uncertainty estimates [105]. Software packages at the organ level have been an essential tool for internal dosimetry over the last 30 years, enabling its diffusion and daily practice within medical centers thanks to their low cost, user-friendliness, extremely short calculation times and lack of special computer requirements. However, since uniform activity distribution within source organs is assumed, patient-specific dosimetric approaches, i.e., approaches in which the activity distribution derived directly from SPECT or PET images is maintained, are currently required in order to perform a more reliable personalized dosimetry.
Voxel Level or Patient-Based Software Packages
Different software packages for voxel-level dosimetry (3D or hybrid) have been recently developed, both for MRT and SIRT. Although they are usually classified on the basis of the method used to create the dose map, software packages for dosimetry typically differ not only in this respect but also in the tools they offer for registration, segmentation and integration of the TACs. Comparisons among different software packages are thus needed to point out these differences and possibly reduce them in view of a standardization of the dosimetric procedure [106][107][108].
The main commercial software packages for 3D dosimetry are reported in Table 5.
Discussion
The implementation of protocols for patient-specific dosimetry-intended as both treatment planning (pre-therapy) and verification (post-therapy) dosimetry-requires economic investments from hospitals (e.g., money for advanced and adequate technical devices, software packages and qualified personnel), educational efforts and patient compliance for further analysis and investigations [15,109]. This practice, thus, should prove to be advantageous in terms of costs and benefits in order to be introduced into the clinical routine.
According to a recent European survey, the implementation of dosimetry-based treatment planning is poorly diffused [109]. In most cases, the absence of dose-effect correlations (and thus of dose thresholds for OAR toxicity or tumor response) seems to justify the standard practice based on fixed or weight-based activity. However, the lack of dose-effect evidence could in some cases be only illusory. Unsuitable dosimetric methods (e.g., related to calibration, image corrections, fitting, or the use of surrogate radiopharmaceuticals for imaging), the definition of the endpoint, the range of absorbed doses and the number of patients could all be factors responsible for hiding correlations. Furthermore, the absorbed dose might not be the ideal quantity to consider. The transition from the absorbed dose as the main parameter to radiobiological quantities and models able to take into account DNA damage and repair mechanisms, the number and frequency of treatment cycles and other radiobiological effects is thus a further challenge [110].
Multicentric trials are mandatory to explore the absorbed dose-effect correlation, overcoming the problem of the small number of patients [79]. Trials, in turn, require standardization of the dosimetry calculation methodology, together with the outcome/toxicity definition and data acquisition.
Although at present each software package offers its own strategy for calculating the absorbed dose without well-established rationales or accuracy parameters to comply with, the whole scientific community-software manufacturers included-is making a great effort to disseminate recommendations and education materials with the shared goals of standardizing the methodology, identifying and, whenever possible, reducing the sources of error related to the absorbed dose calculation and improving the traceability of the dosimetric data [105,[111][112][113]. From this perspective, the EANM provided several guidelines for dosimetry reporting, uncertainty analysis and dosimetric methods specific for some TRT therapies [51,62,77,114]. Other resources are in preparation.
Despite difficulties, many different dosimetry-guided treatment planning protocols have been already developed for various therapies. Threshold doses are usually extrapolated from EBRT experience and possibly adapted to TRT in order to consider the different characteristics of the radiation delivery in the two treatment modalities (e.g., high dose rate vs. low dose rate) [9,115]. Furthermore, since TRT is often used for the treatment of metastatic diseases that present many lesions with highly different uptake and retention properties, treatments are usually planned based on OARs' dose constraints more than lesions' [7]. A possible approach for overcoming this problem could be to consider a "whole-body" tumor absorbed dose instead of the index lesion absorbed doses, as proposed by Violet et al. [48].
In addition, for a few therapies, the first promising signs are emerging, showing that the implementation of dosimetry-based treatment planning brings great benefits for patients compared to the standard approach [43,116,117]. Garske-Román et al., e.g., showed that patients in whom the kidney absorbed dose reached 23 Gy had a longer overall survival (54 vs. 25 months) and progression-free survival (33 vs. 15 months) compared to patients treated according to the standard protocol. For other therapies, instead, randomized controlled trials demonstrating the advantages of dosimetry-based versus fixed-activity approaches have not yet been performed.
In conclusion, although proper dosimetry-guided treatment planning could not be applied yet in most therapies due to the lack of dose constraints or to challenging dosimetry, the increasing evidence of absorbed radiation dose-effect relationships and the recent successes of a dosimetry-based administration approach appear to be sufficient reasons to stimulate further research on the development of personalized treatment planning.
In addition to dosimetry for treatment planning, post-treatment dosimetry is also to be considered as part of a personalized treatment. Since retreatment is frequently an option, the absorbed dose already delivered to OARs should be verified for each patient, according to the European Council Directive 2013/59/EURATOM. Moreover, although lesion and bone marrow dosimetry, together with dosimetry for alpha-targeted therapy, are still challenging [118], the increasing availability of new radiopharmaceuticals and the technological improvement of imaging equipment and software for patient-based dosimetry make patient-specific dosimetry for verification increasingly feasible.
Overall, the practice of internal dosimetry perfectly fits with the aim of precision and personalization, which are the major claims of medicine in the present era [109].
Conclusions
This paper provides an examination of methods and techniques used in TRT, with a focus on personalized dosimetry. Conceptual and practical tools for those who approach the field of internal dosimetry are presented, along with references of interest, especially regarding absorbed-dose-biological-effect studies. In order to fulfill the optimization principle and the recent European Council Directive 2013/59/EURATOM, personalized dosimetry in TRT should be integrated into clinical practice, as occurs in EBRT. Standardization of the dosimetric methodology, of the outcome/toxicity definition and of the data-collecting procedure is required in order to enrich the patient clinical data and make accurate multicentric studies feasible. These studies are fundamental to establish potential absorbed dose constraints for organs at risk and target tissues and, therefore, to improve patient outcomes and reduce long-term costs. | 9,755.2 | 2022-02-01T00:00:00.000 | [
"Medicine",
"Engineering",
"Physics"
] |
On the estimation of vertical air velocity and detection of atmospheric turbulence from the ascent rate of balloon soundings
Abstract. Vertical ascent rate V B of meteorological balloons is sometimes used for retrieving vertical air velocity W , an important parameter for meteorological applications, but at the cost of crude hypotheses on atmospheric turbulence and without the possibility of formally validating the models from concurrent measurements. From simultaneous radar and unmanned aerial vehicle (UAV) measurements of turbulent kinetic energy dissipation rates ε, we show that V B can be strongly affected by turbulence, even above the convective boundary layer. For "weak" turbulence (here ε ≲ 10 −4 m 2 s −3 ), the fluctuations of V B were found to be fully consistent with W fluctuations measured by middle and upper atmosphere (MU) radar, indicating that an estimate of W can indeed be retrieved from V B if the free balloon lift is determined. In contrast, stronger turbulence intensity systematically implies an increase in V B , not associated with an increase in W according to radar data, very likely due to the decrease in the turbulence drag coefficient of the balloon. From the statistical analysis of data gathered from 376 balloons launched every 3 h at Bengkulu (Indonesia), positive V B disturbances, mainly observed in the troposphere, were found to be clearly associated with Ri ≲ 0.25, usually indicative of turbulence, confirming the case studies. The analysis also revealed the superimposition of additional positive and negative disturbances for Ri ≲ 0.25, likely due to Kelvin-Helmholtz waves and large-scale billows. From this experimental evidence, we conclude that the ascent rate of meteorological balloons, with the current performance of radiosondes in terms of altitude accuracy, can potentially be used for the detection of turbulence. The presence of turbulence complicates the estimation of W , and misinterpretations of V B fluctuations can be made if localized turbulence effects are ignored.
Introduction
The vertical ascent rates V B of meteorological balloons are mainly the combination of the free lift and fluctuations due to vertical air velocities and variations in atmospheric turbulence drag effects. Despite balloons' frequent use all over the world, few studies have tried to extract information from V B . Most of these studies have focused on the estimation of the vertical air velocity because this parameter is very important for many meteorological applications (e.g., Wang et al., 2009) and for the characterization of internal gravity waves (e.g., McHugh et al., 2008). Evidence of internal gravity wave fluctuations in balloon ascent rates was reported by Corby (1957), Reid (1972), and Lalas and Einaudi (1980). Shutts et al. (1988) and Reeder et al. (1999) described large amplitude gravity waves in the stratosphere from the analyses of V B . However, the models or methods used for retrieving vertical air velocity from balloon ascent rates are often based on crude assumptions about atmospheric turbulence: it is either considered more or less uniform or neglected above the planetary boundary layer. Johansson and Bergström (2005) estimated the height of boundary layers from V B considering that V B is mainly affected by turbulence in convective boundary layers. In fact, the free stratified atmosphere usually reveals a "sheet and layer" structure (e.g., Fritts et al., 2003) consisting of more or less deep layers of turbulence (a few hundred meters) separated by quieter and generally statically stable regions. In such conditions, turbulence intensity, often quantified by turbulence kinetic energy dissipation rates, can vary over several orders of magnitude with height and can reach levels similar to those met in the convective atmospheric boundary layers (e.g., Luce et al., 2019).
In addition, most studies did not validate their estimations from concurrent measurements of vertical air velocities, making their models and hypotheses uncertain (e.g., McHugh et al., 2008;Gallice et al., 2011). Gallice et al. (2011) proposed a model to describe balloon ascent rates in the presence of free-stream turbulence. Even if the variations in the drag coefficient with altitude were taken into account, their expression of the drag coefficient was based on a mean turbulent state, and, thus, the model did not consider the possibility of localized layers of turbulence, as acknowledged by the authors. Wang et al. (2009) retrieved vertical air velocity from radiosondes and dropsondes assuming that turbulence has a negligible effect above the convective boundary layer such that the drag coefficient was nearly constant. Comparisons with wind profiler data (their Fig. 7) showed poor agreement. Most profiles revealed oscillations, indicative of gravity waves. McHugh et al. (2008) noted large (always positive) variations in balloon ascent rate around the tropopause over Hawaii and interpreted these localized peaks as strong increases in W due to mountain waves around their critical levels. Independent measurements could not validate this interpretation, and possible turbulence effects were not considered when interpreting observations. Houchi et al. (2015) used a model similar to Wang et al.'s (2009) model for statistical estimates of the vertical air velocity. The authors assumed that the balloon ascent rate is the sum of the ascent rate in still air and vertical air velocity.
Modeling the ascent of balloons is not an easy task, especially if the free-stream turbulence effects are not correctly taken into account. In the present work, we study the effects of turbulence on V B from experimental data. For this purpose, vertical profiles of V B are compared with profiles of turbulence kinetic energy (TKE) dissipation rate ε estimated from unmanned aerial vehicle (UAV) data and from 46.5 MHz middle and upper atmosphere (MU) radar data. These data were gathered during Shigaraki UAV-Radar Experiment (ShUREX) campaigns at the Shigaraki MU observatory (Kantha et al., 2017). In addition, the MU radar provided coincident estimates of vertical air velocities so that quantitative comparisons with V B could be made. We found that a balloon is likely a good "W sensor" in the case of light turbulence only: under the conditions of our experiment, V B is affected by turbulence and thus cannot be used for estimating W when ε ≳ 10 −4 m 2 s −3 (1 mW kg −1 ). Therefore, a balloon is potentially more a "turbulence sensor" than a "W sensor", and very large errors in W can arise if the presence of free-stream turbulence is not properly considered. Alternately, statistics on the occurrence of atmospheric turbulence could be made from balloon ascent rates if the contribution of air motion is accurately taken into account. This alternative purpose seems to be more achievable than retrieving W , except at stratospheric heights and during very calm tropospheric conditions, as shown by earlier studies, and likely during deep convective storms during which strong vertical motions are expected.
The effects of turbulence on the balloon ascent rate can be understood considering that this parameter in still air is given by (Gallice et al., 2011)

V z = [(8Rg/(3c D )) (1 − 3m tot /(4πρ a R 3 ))] 1/2 ,

where R is the radius of the volume-equivalent sphere, g is the acceleration of gravity, ρ a is the air density, and m tot is the total mass of the balloon-radiosonde system. The drag coefficient, c D , depends on the Reynolds number associated with the balloon, Re = ρ a V z R/µ, where µ is the dynamic viscosity of air. The variation in c D with Re for a perfect sphere in the absence of atmospheric turbulence and for various values of turbulence intensity T u , defined as the ratio of the standard deviation of the incident air velocity fluctuations to the mean incident air velocity (e.g., Son et al., 2010), is shown in Fig. 1 of Gallice et al. (2011). c D suddenly decreases by a factor of 4 to 5 above a critical value of Re (the so-called drag crisis) so that V z can increase by a factor of 2 or more. In the presence of atmospheric turbulence, the drag crisis is displaced toward lower values of Re so that c D can be reduced when crossing a turbulent layer. Recently, Söder et al. (2019) compared a profile of Re with a profile of balloon ascent rate (their Fig. A1) and clearly showed the existence of a drag crisis at about Re ∼ 4 × 10 5 , in close agreement with the theoretical expectation for a sphere (Fig. 1 of Gallice et al., 2011). Gallice et al. (2011) proposed another (smoother) model from experimental data with a more realistic shape of balloons and with more complete consideration of the heat imbalance between balloon and atmosphere. Their drag curve presented qualitative similarities with the curves by Son et al. (2010) for a mean turbulent state of the atmosphere at T u = 6 % and T u = 8 %. The fact that the model proposed by Gallice et al. does not consider the variability of turbulence with height is likely a weak point because turbulence is generally confined to layers of variable depth in the troposphere and the stratosphere.
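The coupled dependence between V z and c D (through Re) can be illustrated with a simple fixed-point iteration. The two-level drag law and all parameter values in the following sketch are assumptions chosen only to reproduce the qualitative behavior of the drag crisis; they are not the model of Gallice et al. (2011).

```python
import numpy as np

def c_d(re, turbulent=False):
    """Crude two-level drag model: the drag crisis (c_D drop) is assumed to
    shift to lower Reynolds number when free-stream turbulence is present."""
    re_crit = 2.0e5 if turbulent else 4.0e5
    return 0.2 if re > re_crit else 0.5

def still_air_ascent_rate(R=0.8, m_tot=1.0, rho=1.2, mu=1.8e-5, g=9.81, turbulent=False):
    """Iterate V_z = [(8Rg/(3c_D))(1 - 3 m_tot/(4 pi rho R^3))]^(1/2), since
    c_D depends on Re = rho V_z R / mu; parameters are plausible but assumed."""
    lift = 1.0 - 3.0 * m_tot / (4.0 * np.pi * rho * R**3)
    v = 2.0
    for _ in range(50):
        re = rho * v * R / mu
        v = np.sqrt(8.0 * R * g * lift / (3.0 * c_d(re, turbulent)))
    return v

# The turbulent case yields a markedly larger ascent rate for the same balloon
print(still_air_ascent_rate(turbulent=False), still_air_ascent_rate(turbulent=True))
```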
In Sect. 2, we briefly describe the methods used for retrieving the atmospheric parameters analyzed in the present study. In Sect. 3, we show comparison results between V B , vertical velocity measured by MU radar, energy dissipation rate, and Richardson number profiles from three case studies selected from ShUREX2017. These comparisons clearly indicate that turbulence effects dominate the balloon ascent rate. The results of a statistical analysis from 376 balloons and based on the intimate relationship between turbulence and Richardson number Ri are shown in Sect. 4. They confirm that V B is dominated by turbulence effects when Ri ≲ 0.25. Finally, conclusions of this work are given in Sect. 5.
Estimation of V B
Rubber balloons 200 g in weight and manufactured by TOTEX were equipped with Vaisala RS92SGPD radiosondes for pressure, temperature, relative humidity and horizontal wind measurements during the ShUREX campaigns. Their ascent rate V B was calculated as Δz/Δt, where z is the GPS altitude of the radiosondes and Δt = 2 s. A 20 s rectangular window was applied to V B to reduce the noise, likely due to pendulum effects, self-induced balloon motions and other potential causes. For the case studies, we focused on the data from the ground (384 m a.s.l. at the MU Observatory) up to the altitude of 7.0 km a.s.l. This is primarily because (1) the datasets were originally processed for comparisons with data from UAVs, which did not fly above altitudes of a few kilometers; (2) a limited height range makes the description of individual turbulent events less tedious; (3) the increasing horizontal distance between the radar and balloons with height due to strong horizontal winds becomes an important factor of uncertainty when doing comparisons; and (4) the signal-to-noise ratio (SNR) of radar measurements statistically decreases with height in the troposphere, and low SNR values produce additional uncertainties.
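A minimal sketch of this processing chain (finite differencing of the GPS altitudes followed by a 20 s rectangular running mean) could read as follows; the altitude record used here is synthetic.

```python
import numpy as np

def ascent_rate(z, dt=2.0, window_s=20.0):
    """Balloon ascent rate from GPS altitudes sampled every dt seconds:
    finite difference followed by a rectangular running mean."""
    vb = np.diff(z) / dt
    n = int(window_s / dt)
    kernel = np.ones(n) / n
    return np.convolve(vb, kernel, mode="same")

# Synthetic record: a 2 m/s ascent from 384 m a.s.l. plus GPS noise
t = np.arange(0.0, 600.0, 2.0)
z = 384.0 + 2.0 * t + np.random.default_rng(4).normal(0.0, 2.0, t.size)
vb = ascent_rate(z)
```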
Detection of turbulence from TKE dissipation rate ε
The TKE dissipation rate ε is a key parameter describing the intensity of dynamic turbulence. It is thus well adapted for the present purpose, i.e., the identification of turbulent layers when the balloons were flying. Values of ε can be calculated from UAV data using two methods described by Luce et al. (2019). A direct estimate is obtained from onedimensional (1D) spectra of streamwise wind fluctuation measurements. An indirect estimate is deduced from the temperature structure function parameter C 2 T calculated from 1D temperature spectra. Similar levels of ε and ε(C 2 T ) give credence to the results since the two estimates are independent. In addition, consecutive profiles can be obtained during UAV ascents and descents, depending on the configuration of the flights. Therefore, both vertical profiles of ε and ε(C 2 T ) during ascents and descents will be shown when available.
The TKE dissipation rate can also be estimated from MU radar data using the variance σ 2 of Doppler spectrum peaks produced by turbulence. It is based on an empirical model proposed by Luce et al. (2018) and validated from comparisons with UAV-derived ε. The expression of the model is ε (MU) = σ 3 /L out where L out ∼ 60 m. In the present work, an estimate of ε (MU) at a given altitude z is obtained from an average of the values of σ 2 over a centered-in-time 2 min window (about 30 values since radar profiles were obtained every ∼ 4 s) around the time that the altitude z was reached by the radiosonde (see also Fig. 1 of Luce et al., 2018, for a schematic). This procedure should ensure that the estimates of ε are representative of those met by the balloons, assuming horizontal homogeneity over a distance at least equal to the horizontal distance separating the balloons and the radar (up to ∼ 30 km; see Sect. 3). The horizontal distance between UAV and balloon measurements did not exceed ∼ 10 km up to the altitude of ∼ 4.0 km. Considering that all the turbulent events analyzed in the present study persisted for more than 1 h and were likely associated with meso-or synoptic-scale dynamics, the procedure may appear unnecessary, but it is crucial for the vertical velocity (see Sect. 3).
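The windowed radar estimate described above can be sketched as below; the σ 2 series is synthetic, and the restriction to a single range gate is assumed for simplicity.

```python
import numpy as np

def eps_mu(sigma2_series, radar_times, t_balloon, l_out=60.0, window_s=120.0):
    """eps(MU) = sigma^3 / L_out, with sigma^2 averaged over a 2 min window
    centred on the time the balloon reached the altitude of interest."""
    sel = np.abs(radar_times - t_balloon) <= window_s / 2.0
    sigma2 = np.nanmean(sigma2_series[sel])
    return sigma2 ** 1.5 / l_out    # sigma^3 = (sigma^2)^(3/2)

# Synthetic radar time series at one range gate (profiles every ~4 s)
times = np.arange(0.0, 3600.0, 4.0)
sigma2 = np.random.default_rng(5).random(times.size) * 0.2   # m^2 s^-2
print(eps_mu(sigma2, times, t_balloon=1800.0))
```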
Consequently, we have three independent estimates of ε in the vicinity of the balloon flights. The two UAV estimates are obtained from the ground up to ∼ 4.0 km and the radar estimates in the height range 1.27-7.0 km. The radar and UAV estimates overlap between 1.27 and ∼ 4.0 km and are complementary outside this range.
Estimation of vertical velocity profiles from radar data
Vertical velocities W can also be directly measured from Doppler spectra when the radar beam is vertical (e.g., Röttger and Larsen, 1990). Pseudo-vertical profiles of W were reconstructed in the same way as ε (MU) by averaging over a 2 min window centered on the time that the altitude z was reached by the radiosonde. This 2 min averaging was applied in order to reduce the statistical estimation errors and is suitable for detecting W fluctuations with periods significantly larger than 2 min. As shown by, e.g., Muschinski (1996), Worthington et al. (2001) and Yamamoto et al. (2003), W can be biased by a few tens of centimeters per second or more because of refractivity-surface tilts produced by Kelvin-Helmholtz or internal gravity waves. However, this potential bias cannot explain the large differences of a few meters per second between W and the vertical air velocities deduced from V B (see Sect. 3).
Case studies
Three balloon flights (hereafter called V6, V14 and V16, which correspond to UAV flight numbers SH14, SH29 and SH31, respectively) performed during ShUREX2017 on 18 and 26 June 2017 are analyzed in detail. Figure 1 shows the horizontal trajectories of the balloons up to the altitude of 7.0 km a.s.l. The nearly circular patterns of the UAV trajectories are also shown. The MU radar is at the position (0, 0).
The balloons were intentionally underinflated with respect to standard procedures in order to get a mean ascent rate of ∼ 2 m s −1 , similar to the vertical ascent rate of the UAVs. V6, V14 and V16 reached the altitude of 7.0 km a.s.l. within about 33, 52 and 53 min, respectively, and their mean vertical ascent rates were about 3.3, 2.1 and 2.1 m s −1 . V6 drifted by less than 15 km southwestward when reaching the altitude of 7.0 km. V14 and V16 drifted by about 30 km, mainly eastward, due to the influence of the subtropical jet stream.
Analysis of the radar data
Time-height cross sections of MU radar Doppler variance σ 2 (m 2 s −2 ), echo power (dB) and vertical velocity (m s −1 ) around the times of the UAV and balloon flights in the height range 1.27-7.0 km are shown in Figs. 2, 3 and 4 for V14, V16 and V6, respectively (they are not shown in time order for ease of the description made below). The red and blue lines indicate the altitude of the UAVs and balloons vs. time, respectively. For easy reference, the most prominent and persisting turbulent layers identified from enhanced Doppler variance (or ε (MU)) and UAV-derived ε are labeled. The source of these layers is sometimes recognizable from the morphology of the corresponding radar echoes in the high-resolution power images. When this is the case, the labels indicate the nature of the instabilities that gave rise to turbulence; otherwise the labels are "T1", "T2", etc. "KHI", "MCT" and "CBL" refer to sheared-flow Kelvin-Helmholtz instability (e.g., Fukao et al., 2011), mid-level cloud-base turbulence (e.g., Kudo et al., 2015) and convective boundary layer, respectively. The presence of saturated air is also indicated by the label "cloud". Note that enhanced σ 2 does not necessarily imply enhanced echoes (e.g., T1 in Fig. 2 and T2 in Fig. 4) because turbulence can sometimes produce faint echoes surrounded by enhanced echoes at their edges (e.g., McKelley et al., 2005). The CBL in Fig. 2 is only guessed because the CBL top only slightly exceeded the altitude of the first radar gate, but it was confirmed by the UAV observations.
The V14 case was characterized by weak turbulence except below ∼ 1.3 km (CBL) and above ∼ 5.0 km (MCT) (Fig. 2). (In Figs. 2-4, the white vertical lines are due to radar stops, and the labels refer to the location of turbulent layers; see, e.g., Luce et al. (2018) for more details about these figures.) The atmosphere was weakly turbulent between the CBL and MCT, but two events (T1 and T2) persisted around 2.3 km and between 4.0 and 4.5 km. The V16 case was also characterized by weak turbulence below 3.5-4.0 km and at least three well-defined layers associated with MCT and two instabilities within clouds (T2 and T3 in Fig. 3). The V6 case showed enhanced turbulence at almost all altitudes (Fig. 4), but distinct layers can be clearly noted: MCT around 5.0 km, KHI around 3.5 km (braided structures are clearly visible around 15:00 LT), and less intense events around 2.5 km (T2) and just above the cloud base (T3). Turbulent layers (T1) detected from UAV data below 1.27 km are not indicated on the figures.
Profile comparisons
The results of comparisons between V B and atmospheric parameter profiles are shown for V14, V16 and V6 in Figs. 5, 6 and 7, respectively. Panels (a) show vertical velocity profiles from MU radar data and radiosondes. Panels (b) and (d) show UAV-and radar-derived ε profiles in linear and logarithmic scales, respectively. Both representations are shown for ease of analysis. Panels (c) show Richardson number Ri profiles defined as Ri = N 2 /S 2 , where N is the Brunt-Väisälä frequency and S the vertical shear of horizontal wind, estimated from balloon data at 20 and 100 m resolution. Two vertical resolutions are used because Ri is scale-dependent (Balsley et al., 2008). The balloon ascent rate in still air V z was estimated from the difference between W and V B when turbulence was weak and the Richardson number was high. V z was found to be 1.8, 1.8 and 2.3 m s −1 for V14, V16 and V6, respectively, and V Bc = V B − V z is shown in the figures. Indeed, the vertical fluctuations of V Bc coincide well with those of W outside the labeled turbulent layers, indicating that the variations in balloon ascent rate are dominated by the vertical air motions when turbulence is "sufficiently weak". This is particularly evident in Fig. 6 in the height range 1.3-3.8 km where the wavy fluctuations in W (of ∼ 0.5 m s −1 in amplitude) coincide very well with those of V Bc . Several radar estimates of W are shown for different time lags. Each time lag is a multiple of ∼ 9 min, which corresponds to the apparent period of the wave in the radar image (Fig. 3). The fluctuations of W and V Bc are in phase. The W profile suggests that the oscillations still occurred above 3.8 km in the MCT layer. The V Bc profile indicates enhanced values of up to +1.8 m s −1 at 5.5 km that are clearly not related to vertical air motions.
In contrast, wherever UAV-and radar-derived ε estimates are enhanced in the labeled height ranges, V Bc is also enhanced and V Bc and W strongly differ. Note that the UAV profiles of ε during ascents and descents are very similar and there is a good agreement with the radar-derived profiles obtained during the balloon flights. Therefore, we can reasonably assume that these profiles are representative of the turbulence conditions met by the balloons. In general, the height ranges of enhanced ε coincide with minima of Ri, close to the critical value of 0.25, as expected for shear-generated turbulence (e.g., KHI in Fig. 7), or even less than 0, expected for MCT. Ri is not necessarily small over the whole depth of the layers (e.g., around 6.0 km in Fig. 5) and is surprisingly high for the whole depth of T2 in Fig. 7, but the overall results remain consistent. A puzzling result can be noted above the cloud base (6.0 km) during V6 (Fig. 7, as indicated by "??") where a strong increase in V Bc (∼ 4 m s −1 ) was neither associated with an increase in W nor an increase in turbulence according to MU radar observations. A slowdown of the balloon due to precipitation loading would rather be expected. This thus remains unexplained and, by default, we must invoke horizontal inhomogeneity of W and/or turbulence intensity over the horizontal distance between the radar and the balloon (∼ 10 km). Similar features were not observed in clouds during V14 and V16.
These case studies provide experimental evidence that turbulence can strongly increase the balloon ascent rate, very likely through the decrease in the drag coefficient. The observed V Bc is thus the combination of turbulence effects and vertical air velocities. Because W fluctuations appear significantly weaker than V Bc fluctuations, turbulence effects are likely dominant. On some occasions, an increase in V Bc might be due solely to turbulence effects, as in T1 of V14 (Fig. 5) since W does not show any particular variations in the range of T1.
In the present cases, ε ∼ 10 −4 m 2 s −3 seems to be a threshold below which turbulence does not seem to affect the balloon ascent rate significantly. However, this value is likely specific to the present observations and may not be applicable to other conditions.
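The Ri profiles shown in panels (c) of Figs. 5-7 can be computed from radiosonde profiles as sketched below; for simplicity, the sketch uses the dry N 2 and synthetic profiles, whereas the statistics of Sect. 4 rely on the moist formulation of Kirshbaum and Durran (2004).

```python
import numpy as np

def richardson(z, theta, u, v, g=9.81):
    """Gradient Richardson number Ri = N^2 / S^2 from profiles of altitude z,
    potential temperature theta and wind components u, v; the resolution is
    set by the grid spacing of z (dry N^2 for simplicity)."""
    n2 = g / theta * np.gradient(theta, z)
    s2 = np.gradient(u, z) ** 2 + np.gradient(v, z) ** 2
    return n2 / s2

# Synthetic 20 m resolution profiles (statically stable, weakly sheared)
z = np.arange(500.0, 7000.0, 20.0)
theta = 300.0 + 0.004 * (z - 500.0)
u = 5.0 + 0.002 * z
v = np.zeros_like(z)
ri = richardson(z, theta, u, v)
```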
Statistics
The case studies strongly suggest that increased balloon ascent rates are generally related to minimum values of the Richardson number (negative or smaller than ∼ 0.25, consistent with convective overturning or shear-generated instabilities in stratified conditions, respectively). This observation can be confirmed by analyzing the relationship between V Bc and Ri from a large amount of data. For this purpose, we used data from 376 radiosondes launched every 3 h in Indonesia (Bengkulu, November-December 2015) during a preliminary Years of the Maritime Continent (YMC) campaign (e.g., Kinoshita et al., 2019). The choice of this dataset is arbitrary, but it ensures that the same type of balloons (TOTEX-TA 200) and radiosondes (RS92SGPD) were used with similar procedures of balloon inflation for all the datasets. Figure 8 shows all the V B profiles with a slight offset for legibility. The balloons were inflated in order to get a mean ascent rate of 5 m s −1 (free lift). During the period of observations, the tropical tropopause layer (TTL) was often characterized by a strong temperature inversion just above the cold point temperature (CPT) around the altitude of 16-17 km (blue dots in Fig. 8) and a secondary temperature inversion of similar intensity at slightly lower altitude (red dots). For ease of statistical analysis, we refer to altitude ranges 0-16.3 km as troposphere and altitude ranges above 17.2 km (up to the top of the radiosoundings) as stratosphere.
The profiles of V B often display multiple peaks of variable widths in the troposphere, especially in its upper part. In the stratosphere, the profiles are much smoother and show either weak variations or nearly monochromatic fluctuations undoubtedly due to internal gravity waves (Tsuda et al., 1994). Therefore, we suggest that the variations in V B with height are primarily due to vertical air motions in the stratosphere and mainly due to turbulence effects in the troposphere. To assess this hypothesis, we analyzed the relationship between Ri and V Bc (V B corrected from the free lift). We calculated (moist) Ri = N 2 m /S 2 , where N 2 m is the squared moist Brunt-Väisälä frequency using expression (5) of Kirshbaum and Durran (2004), at a vertical resolution of 50 m, a reasonable trade-off between the 20 and 100 m used for the case studies. Because V B seems to be weakly affected by turbulence in the stratosphere, the mean stratospheric ascent rate, hereafter V B ST , was used as an estimate of the free lift. First, the scatterplot of V Bc vs. Ri shows a very significant maximum around and below the critical value of Ri c ∼ 0.25 in the troposphere (Fig. 9a). This is an indirect confirmation that V Bc peaks are indeed due to turbulence (Fig. 9a), considering that small Ri values are generally associated with turbulence. Second, this increase is accompanied by a larger scatter. There is no similar tendency in the stratosphere (Fig. 9b) because Ri rarely dropped below Ri c , in accordance with the absence of significant turbulence ascertained from the profiles of V B . The increased variability of V Bc with decreasing Ri in Fig. 9b should mainly be due to waves.
In order to emphasize the tendency shown by Fig. 9a and b, averaged values of V Bc in Ri bands of width 0.25, hereafter ⟨V Bc ⟩, are shown in Fig. 9c and d, respectively. For Ri ≳ 1, ⟨V Bc ⟩ is roughly constant but slightly negative, ∼ −0.2 m s −1 (Fig. 9c), because V B ST is likely not exactly the ascent rate in still air in the troposphere. This is not an important issue for the present purpose. When Ri drops below Ri c , ⟨V Bc ⟩ increases by ∼ +0.9 m s −1 and remains high when Ri < 0 (Fig. 9a). The values for Ri < Ri c are not reliable in the stratosphere, where Ri rarely dropped below Ri c . Figure 10 shows V Bc − ⟨V Bc ⟩ vs. Ri for the troposphere. A larger scatter is observed between Ri = 0 and Ri c = 0.25. The broadening of the scatter was attributed to turbulence by Houchi et al. (2015). However, the broadening cannot be explained by a decrease in the drag coefficient because it is necessarily due to both positive and negative vertical velocities. The broadening is thus more likely due to turbulent billows of scales much larger than the balloon size. In addition, Kelvin-Helmholtz (KH) waves can also produce updrafts and downdrafts of up to a few meters per second when Ri reaches Ri c (see, e.g., Fukao et al., 2011). Therefore, the enhanced variability of V Bc when Ri is small (Fig. 9a) is presumably the combination of turbulence effects and vertical air motion disturbances produced by large-scale billows and KH waves.
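The band averaging used for Fig. 9c-d amounts to the following operation (shown here on synthetic data for illustration):

```python
import numpy as np

def band_means(ri, vbc, width=0.25, lo=-2.0, hi=4.0):
    """Average V_Bc in Ri bands of fixed width."""
    edges = np.arange(lo, hi + width, width)
    idx = np.digitize(ri, edges)
    means = np.full(edges.size - 1, np.nan)
    for k in range(1, edges.size):
        sel = idx == k
        if sel.any():
            means[k - 1] = vbc[sel].mean()
    return 0.5 * (edges[:-1] + edges[1:]), means

# Synthetic signal: positive V_Bc anomaly for Ri < 0.25, plus scatter
rng = np.random.default_rng(6)
ri = rng.uniform(-2.0, 4.0, 5000)
vbc = np.where(ri < 0.25, 0.7, -0.2) + rng.normal(0.0, 0.5, ri.size)
centers, m = band_means(ri, vbc)
```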
Finally, it can be noted that the scatterplot of V Bc − ⟨V Bc ⟩ (Fig. 10) is not symmetrical about 0 for Ri > 1 (for which turbulence is expected to be suppressed) and suggests peaks of V B (without corresponding negative disturbances) even in the absence of turbulence. However, this result must be tempered by the fact that turbulence can be observed even if the estimate of Ri at a given resolution is not small (see, e.g., Fig. 7, T2). Measurement and estimation errors of temperature, humidity and winds cannot be discarded on some occasions, and N 2 m may not be the appropriate parameter for all conditions. For all these reasons, this observation may not be indicative of more complex interactions between the balloon and the surrounding atmosphere.
Discussion and conclusions
We have found that the possibility of retrieving the vertical air velocity W from the radiosonde ascent rate V B highly depends on the turbulent state of the atmosphere. In turbulent layers generated by shear or convective instabilities, W cannot be measured because V B is very likely affected by the decrease in the drag coefficient c D of the balloon. In contrast, in the calm regions of the atmosphere, the fluctuations of V B are dominated by the fluctuations of W . These conditions were probably met by, e.g., Corby (1957) and Reid (1972) and are most likely met in the lower stratosphere (Shutts et al., 1988; Reeder et al., 1999). This was also the case during the conditions analyzed by Wang et al. (2009) above the CBL. However, in light of our observations, we speculate that Wang et al. (2009) also detected turbulent layers: localized increases in V B (up to ∼ 2 m s −1 ) observed in the height range 8-10 km (their Fig. 1) may be attributed to turbulent layers. McHugh et al. (2008) interpreted isolated peaks of V B of several meters per second in amplitude near the tropopause and at the jet-stream level in terms of W disturbances around critical levels associated with mountain waves. The absence of corresponding negative disturbances was explained by the three-dimensional nature of the flow. Even though our hypothesis remains speculative in the absence of additional and independent measurements of vertical air velocity, we suggest that turbulence effects may have also contributed to the observed increase in ascent rates, since critical levels are generally associated with turbulence. A careful scrutiny of their Figs. 3-7 indicates that V B increased at altitudes where the horizontal wind shear was enhanced and the temperature gradient was close to adiabatic (so that Ri was likely small). Houchi et al. (2015) attributed the spread of the height increment "dz" probability density function to "turbulence". The authors likely implicitly referred to advection by large-scale billows. The decrease in the drag coefficient due to turbulence can explain the upward-only motion anomaly noticed by the authors.
It turns out that V B can also potentially be used for the detection of turbulence in the free atmosphere if the increase in V B can be separated from the contribution of W . Turbulence is frequent in the free atmosphere but also very variable with height and generally distributed in layers, especially in stratified conditions. This feature was likely not well appreciated by Gallice et al. (2011). The authors themselves recognized that their model cannot work if localized turbulence occurs (they proposed the example of turbulence generated by gravity wave breaking).
The amplitude of the V B disturbances should depend on the variation in c D with the Reynolds number, the intensity of turbulence and on the scales of turbulence with respect to the balloon size so that it might be difficult or even impossible to retrieve turbulence parameters solely from V B measurements. However, further comparisons such as shown in Sect. 3 might be useful for establishing empirical rules on the turbulence detection threshold.
Data availability. The balloon data are archived at the YMC Data Archive Center maintained by JAMSTEC (http://www.jamstec.go.jp/ymc/ymc_data.html, JAMSTEC, 2020). The radar and UAV data are still under processing for other purposes. | 7,567 | 2019-09-30T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Sunflower Oilcake as a Potential Source for the Development of Edible Membranes
Sunflower oilcake flour (SFOC) resulting from the cold extraction of oil is a rich source of valuable bio-components, which stimulated the development of novel, biodegradable and edible films. The films were prepared by incorporating different concentrations of sunflower oilcake (0.1–0.5 g). The obtained films were characterized in terms of physical, water-affinity, antimicrobial and morphological properties. The edible-film properties were significantly affected by the presence and the level of SFOC added. The water vapor permeability and water vapor transmission rate improved with the amount of SFOC added. However, the solubility and the oxygen and grease barriers were slightly lower than those of the control film. SEM analysis revealed a rougher but continuous structure with increasing sunflower oilcake content. Moreover, the films with different SFOC levels were opaque, thus providing good protection against UV radiation. Overall, SFOC can be used as a raw material to produce edible films with suitable properties and microbiological stability for food-packaging applications.
Introduction
Food packaging is very important because it provides nutritional information for consumers and protection against potential physical damage and environmental contamination (chemical and microbiological factors) [1]. Usually, the materials used for packaging are glass, paper, metals, plastics and polymeric materials. Of these, plastics are preferred due to their good material properties (low cost, good tensile strength and good protection against moisture, oxygen, unpleasant odors and microorganisms) [2]. The major disadvantage of plastics is their detrimental effect on the environment: non-degradability and non-renewability [3].
Edible and biodegradable films are upcoming alternative packaging materials for reducing plastic waste while improving stability, quality, safety and the variety offered to consumers [4].
Edible packaging materials are obtained from natural polymers such as polysaccharides, lipids, proteins or a combination of these [5]. Various researchers have continuously developed edible films intended to match conventional plastic films [6]. Polysaccharides (cellulose, starch, pectin, alginates and chitosan) are the most popular natural polymers used in the production of edible films [7]. Alginates are isolated from brown seaweeds and, due to their properties (thickening; stabilizing; film-forming; suspending; resistance to solvents, oil and grease), are suitable materials for the development of edible films [8,9].
The application of edible films as packaging materials is influenced by their characteristics, such as their structural, biological, optical and barrier properties. Edible films should exhibit a good barrier against scents, vapor, oil and water, as well as against oxygen and light degradation (inhibition of lipid oxidation, delay of moisture loss, prevention of discoloration, maintenance of the product's appearance during marketing), together with excellent solubility and antimicrobial properties (improvement of the quality and shelf life of the products) [10]. Sensory properties are also important in the production of packaging. Thus, edible films can be considered for commercialization only if they fulfill the most important criterion, i.e., edibility. To be eaten as part of food, all ingredients in the edible films should be GRAS (generally recognized as safe) and used within the limitations specified by the U.S. Food and Drug Administration (FDA) [11,12].
In recent decades, owing to the growing popularity of sustainability and environmental-protection concepts, it has become important for industrial production to develop new strategies designed to make the best use of all resources without creating waste [13,14]. Oilseeds are grains that, due to their high fat content (>40%), are used primarily in most countries as sources of vegetable oils [15]. The oilseed industry generates large amounts of byproducts that are currently underused [16]. Sunflower oilcake (SFOC) is a byproduct that remains after the cold extraction of oil from sunflower seeds [17]. SFOC contains significant amounts of residual oil (1–23.6%), proteins (19.93–44.9%), minerals (4.69–8%), fibers (13.07–33.4%) and carbohydrates (15–28.2%) [18]. It can be used in both human and animal diets due to its rich content of bioactive compounds [18]. Possible methods of valorization include the isolation of high-value compounds and their further utilization in foods; use as a substrate for the production of fuels, surfactants, enzymes and antibiotics; and use as feed [19][20][21]. In recent years, the utilization of residues and byproducts from the food industry has aroused great interest for the production of edible packaging materials [22]. Byproduct valorization allows the reuse of these materials in the supply chain, thus adding more value to foods while also reducing the costs and risks regarding their disposal in the environment [13,20,23].
Suput et al. [24,25] investigated the use of whole sunflower oilcake in the development of biopolymer films and the effects of pH, temperature and essential oils on them. The films were firm, smooth, shiny and dark brownish/greenish, with a sunflower fragrance. Water vapor permeability and solubility were uniform but decreased with increasing temperature. The tensile strength and elongation at break were fairly low, but antioxidant properties and increased elongation at break were obtained when adding 0.25-1% parsley and rosemary oils. The films also provided a good barrier against UV light. All of these properties make such films suitable for application in the food industry.
Studies in the literature have also used other oilcakes, such as pumpkin, hemp and rapeseed, in the production of films. Popovic [26,27] obtained pumpkin-oilcake-based films with adequate characteristics; increasing the amount of pumpkin oilcake yielded films with the best tensile strength, elongation at break and solubility. Because of their content of antinutritional factors, the proteins were first extracted from hempseed and rapeseed oilcakes; with 50% glycerol as plasticizer, high-performance films were obtained from the extracted proteins at pH 12 [28]. Films based on rapeseed proteins alone were not feasible because they presented poor mechanical and antimicrobial properties; thus, gelatin, chitosan and agarose had to be added [29,30].
In order to propose an added-value outlet for the SFOC, the present work was aimed at investigating the potential of SFOC in the preparation of new edible packaging materials. Thus, films with different amounts of SFOC were examined to determine the effect of the oilcake on the water-affinity, mechanical, optical, barrier and structural properties of the SFOC-based films.
Materials
Sunflower oilcake (SFOC) was collected from a local factory in Suceava, Romania. The material was ground to a fraction of less than 180 µm and stored at room temperature until further use.
All chemicals used in this paper were of analytical grade and were purchased from Carl Roth (Karlsruhe, Germany).
Film Development
Films were developed by a wet-cast method using sodium alginate, sunflower oilcake and glycerol as plasticizer. The sodium alginate was dissolved in distilled water (1 g in 100 mL) at 50 °C for 1 h using a constant-temperature magnetic stirrer (DLAB MS-H-PRO+, Beijing, China). After complete dissolution, 0.5 g of glycerol and different proportions (0.1-0.5 g) of SFOC were added to the solution. Film formulations are presented in Table 1. The film solutions were poured into Petri dishes and dried at 50 °C in an air oven for 48 h. The obtained films were kept in sealed envelopes at 20 °C and 50% relative humidity (RH) before further tests.
SFOC Characterization
Edible films are an integral part of the edible portion of food products, so they should follow the regulations required for food ingredients; the safety of every ingredient introduced into the film must be clearly demonstrated. The safety of the sunflower oilcake was assessed through the following analyses: water activity, ELISA mycotoxin screening and ICP-MS mineral analysis.
The water activity index (aw) was measured using an AquaLab 4TE water activity meter (Meter Group, Pullman, WA, USA) [31].
Mycotoxins were quantified by ELISA (enzyme-linked immunosorbent assay) using kits provided by ProGnosis Biotech S.A. (Larissa, Greece) [32]. The samples were analyzed for the content of zearalenone, ochratoxin A, aflatoxin B1 and deoxynivalenol.
The mineral elements were determined with an inductively coupled plasma mass spectrometer (ICP-MS; Agilent Technologies 7500 Series, Santa Clara, CA, USA) in order to detect possible contamination of the sample with heavy metals.
Affinity to Water
Water content (WC) was determined by a gravimetric method [27]. A piece of each film (3 cm × 3 cm) was dried at 110 °C for 24 h in a laboratory oven (ZRD-A5055, Zhicheng Analysis Instruments, Shanghai, China). WC was calculated using Equation (1):

WC (%) = ((W0 − W1)/W0) × 100 (1)

where W0 is the mass of the film before drying (g) and W1 is the mass of the film after drying (g). The water activity of the films was determined according to the method described previously.
The water solubility (WS) of the control and SFOC films was determined by immersing 3 cm × 3 cm specimens in 30 mL of water. The solution was gently stirred at room temperature for 8 h. The remaining film pieces were then filtered off and dried at 100 °C for 24 h, and WS was expressed as the percentage mass loss of the films between before and after solubilization [33-35].
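Both gravimetric relations reduce to a few lines of code. The following Python sketch applies them to hypothetical masses; all numerical values are illustrative, not measured data from this study:

```python
# Hypothetical helpers for the gravimetric water-content (Eq. 1) and
# water-solubility calculations; masses in grams are example values.

def water_content(w0: float, w1: float) -> float:
    """WC (%) from film mass before (w0) and after (w1) oven drying."""
    return (w0 - w1) / w0 * 100.0

def water_solubility(m_initial: float, m_residual: float) -> float:
    """WS (%) as the share of film matter dissolved after 8 h immersion."""
    return (m_initial - m_residual) / m_initial * 100.0

print(water_content(0.250, 0.205))     # -> 18.0 %
print(water_solubility(0.205, 0.082))  # -> 60.0 %
```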
Barrier Properties
The water vapor permeability (WVP) of the films was determined using a gravimetric method (ASTM E96-96M, 2016). The films were sealed on plastic Petri dishes filled with calcium chloride (RH = 0%) up to 1 cm from the film underside. The sealed dishes were placed inside a desiccator containing saturated sodium chloride solution (RH = 75%). Because the RH inside the dishes was lower than that outside, the WVP was determined from the weight gain of the dishes [36]. Five weight measurements were made over 48 h, at 0, 8, 24, 32 and 48 h. The change in weight was recorded as a function of time, and the slope of each line was calculated by linear regression.
The water vapor transmission rate (WVTR) was calculated as the slope (g/h) divided by the film area (m²) [37]. Results were calculated by Equations (2)-(4):

WVTR = (w/t)/A (2)

WVP = (WVTR × x)/Δp (3)

Δp = S × (R1 − R2) (4)

where w/t is the weight gain of the dishes over time (g/h), x is the average film thickness (mm), A is the area of the exposed film (m²), Δp is the water vapor partial pressure difference across the two sides of the film (kPa), S is the saturated vapor pressure at 25 °C (3.166 kPa), R1 is the relative humidity in the desiccator (0.75), and R2 is the relative humidity inside the dishes (0) [38-40].
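As a worked illustration of Equations (2)-(4), the sketch below regresses hypothetical dish weighings against time and converts the slope into WVTR and WVP. The dish geometry, film thickness and masses are assumptions, not data from this study:

```python
import numpy as np

t = np.array([0, 8, 24, 32, 48])                         # weighing times (h)
w = np.array([50.000, 50.004, 50.013, 50.017, 50.026])   # dish mass (g), assumed

slope, _ = np.polyfit(t, w, 1)       # weight gain rate w/t (g/h) by regression
A = np.pi * (0.04 / 2) ** 2          # exposed film area (m^2), assumed 4-cm opening
x = 0.035                            # average film thickness (mm)
S, R1, R2 = 3.166, 0.75, 0.0         # kPa; RH in desiccator / inside dish

WVTR = slope / A                     # Eq. (2), g/(h*m^2)
dp = S * (R1 - R2)                   # Eq. (4), kPa
WVP = WVTR * x / dp                  # Eq. (3), g*mm/(kPa*h*m^2)
print(f"WVTR = {WVTR:.3f} g/(h*m^2), WVP = {WVP:.3e} g*mm/(kPa*h*m^2)")
```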
The oxygen permeability (OxyP) of the films was assessed via the peroxide value of oil stored under the films. The membranes were cut into circles (3 cm diameter) and sealed on top of Erlenmeyer flasks filled with 3 g of sunflower oil; the flasks were then stored at 50 °C for 10 days [41]. The peroxide value (PV) was determined according to standard AOAC 965.33, with some modifications. The oil was treated with 10 mL of chloroform, 15 mL of glacial acetic acid and 1 mL of potassium iodide solution. The samples were kept in the dark for 5 min; after that, 75 mL of distilled water and 1 mL of starch solution were added. After vigorous stirring, the samples were titrated with 0.01 N sodium thiosulphate solution until the blue color disappeared.
Oil permeability (OP) was determined using the method described by Cao et al. (2020) [42], with some modifications. The films were cut into circles with a diameter of 3 cm and sealed on top of tubes containing 20 mL of sunflower oil. The tubes were placed upside down for 5 days on pre-weighed filter paper. OP was calculated using Equation (5):

OP = (Δm × x)/(S × t) (5)

where Δm is the change in filter paper mass (g), x is the thickness of the tested film (mm), S is the surface area of the film (m²), and t is the testing time (days).
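A direct transcription of Equation (5), together with the standard titrimetric peroxide-value relation (an assumption here, since the text does not spell that formula out), might look like this; all numbers are example inputs:

```python
def oil_permeability(dm: float, x: float, s: float, t: float) -> float:
    """OP = (dm * x) / (S * t), Eq. (5): dm (g), x (mm), S (m^2), t (days)."""
    return dm * x / (s * t)

def peroxide_value(v_sample: float, v_blank: float, normality: float, m_oil: float) -> float:
    """Standard titrimetric PV (meq O2/kg): (V - V0) * N * 1000 / m."""
    return (v_sample - v_blank) * normality * 1000.0 / m_oil

area = 3.14159 * 0.015 ** 2                      # 3-cm-diameter film, m^2
print(oil_permeability(0.02, 0.035, area, 5))    # g*mm/(m^2*day), example values
print(peroxide_value(2.4, 0.1, 0.01, 3.0))       # -> ~7.67 meq O2/kg
```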
Thickness, Density, Tensile Strength and Hardness
Film thickness was determined using a digital micrometer Mitutoyo Absolute (Kawasaki, Japan) with an accuracy of 1 µm. The results were obtained after ten readings on different areas of the film's surface [31].
Density (ρ, g·cm⁻³) was calculated from the film dimensions according to Equation (6) [43]:

ρ = m/(A × x) (6)

where m is the mass (g), x is the thickness (cm), and A is the area of the film (cm²).

Tensile strength (TS) and hardness were measured using a Perten TVT 6700 texture analyzer with a 5 mm deformation and a 5 mm cylinder probe. The pre-test, test and post-test speeds were 2, 1 and 10 mm/s, respectively. The results were expressed in Newtons. TS was calculated according to Equation (7):

TS = Fmax/(x × W) (7)

where Fmax is the maximum tensile force at rupture (N), x is the thickness (mm), and W is the width of the film (mm).
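Equations (6) and (7) are simple ratios; a minimal sketch with assumed film dimensions:

```python
def density(m: float, a_cm2: float, x_cm: float) -> float:
    """rho = m / (A * x), Eq. (6), in g/cm^3 with A in cm^2 and x in cm."""
    return m / (a_cm2 * x_cm)

def tensile_strength(f_max: float, x_mm: float, w_mm: float) -> float:
    """TS = F_max / (x * W), Eq. (7), in N/mm^2 (= MPa)."""
    return f_max / (x_mm * w_mm)

print(density(0.25, 63.6, 0.0035))         # ~1.12 g/cm^3 for a 9-cm dish film
print(tensile_strength(1.8, 0.035, 10.0))  # ~5.14 MPa for an assumed strip
```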
Optical Properties
The color analysis of the samples was performed with a CR-400 colorimeter (Konica Minolta, Tokyo, Japan) using the CIELAB scale, wherein the L* value expresses lightness (0 for black and 100 for white), the a* value expresses the degree of redness (if positive) or greenness (if negative), and the b* value expresses the degree of yellowness or blueness (if positive or negative, respectively) [44]. The color difference (ΔE) was determined using Equation (8):

ΔE = [(ΔL*)² + (Δa*)² + (Δb*)²]^(1/2) (8)

where ΔL*, Δa* and Δb* represent the differences in the color parameters between the samples and the standard white plate used as background (L* = 94.27, a* = −5.52, b* = 9.19).
Five readings were taken for each sample (one in the center and four around the surface). Color measurements were performed in triplicate [45].
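Equation (8) is the usual Euclidean distance in CIELAB space; a short sketch against the white-plate reference quoted above (the film reading itself is hypothetical):

```python
import math

def delta_e(lab_sample, lab_ref=(94.27, -5.52, 9.19)):
    """CIELAB total color difference (Eq. 8) vs. the white-plate standard."""
    return math.sqrt(sum((s - r) ** 2 for s, r in zip(lab_sample, lab_ref)))

print(delta_e((62.1, -1.3, 25.4)))  # hypothetical SFOC film reading -> ~36.3
```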
The films were cut into strips (1 cm × 3 cm) and placed inside a spectrophotometer test cell; all measurements were performed using air as the blank reference [28]. The ultraviolet-visible spectra were obtained using a Shimadzu 1800 UV spectrophotometer by exposing the films to light at wavelengths ranging from 200 to 800 nm [24].
The opacity and transparency of the films were calculated using Equations (9) and (10) below [41,46]:

Opacity = Abs600/x (9)

Transparency = log(T600)/x (10)

where Abs600 and T600 represent the absorbance (AU) and transmittance (%) at 600 nm, and x represents the thickness of the films (mm).
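Equations (9) and (10) in code form, with assumed spectrophotometer readings:

```python
import math

def opacity(abs600: float, x_mm: float) -> float:
    """Opacity = Abs600 / x, Eq. (9)."""
    return abs600 / x_mm

def transparency(t600_percent: float, x_mm: float) -> float:
    """Transparency = log10(T600) / x, Eq. (10)."""
    return math.log10(t600_percent) / x_mm

print(opacity(0.85, 0.035))        # example absorbance reading
print(transparency(42.0, 0.035))   # example transmittance reading
```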
FT-IR
The membranes, as well as the pure alginate, glycerol and SFOC powder, were analyzed by FT-IR spectroscopy. The spectra were recorded in the range of 400 cm−1 to 4000 cm−1 using a Nicolet iS20 spectrometer (Thermo Scientific, Karlsruhe, Germany) equipped with an attenuated total reflectance accessory and a diamond crystal. Spectra were collected at 4 cm−1 resolution with 32 scans, and the obtained spectra were processed with OMNIC software [37,47].
Scanning Electron Microscopy (SEM)
The surface and cross-section of the obtained films were analyzed by SEM (Tescan Vega II LMU, Tescan Orsay Holding, Brno, Czech Republic). The films were cut into small pieces and fixed on double-sided adhesive carbon tape. The images were collected at an accelerating voltage of 30 kV and magnifications of 1000× and 700×.
Antimicrobial Analysis
Compact dry-type plates with lyophilized culture media were used for the microbiological assessment of the samples. The samples (1 g) were dissolved in 9 mL of saline solution, and 1 mL of this solution was dispersed on the culture media [34]. The films were tested for total germ count, Escherichia coli, Staphylococcus aureus, Listeria, coliforms, Enterococcus, Bacillus cereus, yeasts, molds, Enterobacteriaceae and Salmonella. For the first analysis, the plates were kept in a hot-air oven at 35 °C for 48 h (AOAC 010404). For the following five previously cited microorganisms, the plates were kept at the same temperature but for 24 h (AOAC 110402, 081001, ISO 11290-2:2017, 110401, 11190). For Bacillus cereus and for yeasts and molds, the plates were kept at 30 °C for 2 days (MicroVal 2019-LR87) and 3 days (AOAC 100401), respectively. For Enterobacteriaceae and Salmonella, the plates were kept for 24 h at 37 °C (AOAC 012001) and 42 °C (ISO 6579-1:2017), respectively.
Statistical Analysis
Results were presented as mean ± standard deviation. The WC, WS, WVP, WVTR, OxyP, OP and density determinations were performed in triplicate. The results were processed using XLSTAT (trial version). The differences between the films were evaluated by ANOVA, using a Tukey test at a 95% confidence level. A principal component analysis (PCA) was applied to observe the relationships between the water-affinity, optical and barrier properties of the obtained membranes.
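A sketch of the described statistical pipeline (one-way ANOVA with Tukey's HSD, then PCA) using SciPy and scikit-learn rather than XLSTAT; the readings and the data matrix below are placeholders, not the study's data:

```python
import numpy as np
from scipy import stats                      # SciPy >= 1.8 for tukey_hsd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical triplicate WVP readings (g*mm/(kPa*h*m^2)) for two films.
control = [1.98e-4, 1.95e-4, 2.01e-4]
sfoc5 = [1.13e-4, 1.10e-4, 1.16e-4]
print(stats.f_oneway(control, sfoc5))        # one-way ANOVA
print(stats.tukey_hsd(control, sfoc5))       # pairwise Tukey comparison

# PCA on a (samples x properties) matrix of standardized film descriptors.
X = np.random.default_rng(1).normal(size=(6, 8))   # placeholder measurements
pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
print(pca.explained_variance_ratio_)         # cf. PC1 = 80.66%, PC2 = 16.12%
```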
SFOC Characterization
The value obtained for water activity was low (0.40 ± 0.01); values below 0.6 do not allow the growth of molds, yeasts or bacteria [49].
The results for the studied mycotoxins were within the legal limits established by the European Union and are presented in Table 2. Mineral analysis by ICP-MS showed the absence of heavy metals such as lead, mercury and cadmium [48].
Water Affinity
The properties regarding the affinity to water of the obtained films are presented in Table 3. The water-content values varied from 13.07 to 19.45%, showing a significant difference (p < 0.05) between the control and the sunflower oilcake films. The water content decreased with the amount of SFOC introduced. It can be assumed that the increase in oilcake flour, a lipophilic material, hinders water incorporation. Moreover, Bahmid et al. (2021) [50] stated that moisture may depend on thickness and surface roughness (particle dimensions, their amount and evaporation conditions), because the particles can make water penetration more difficult. This was also observed in our results: the films containing ground SFOC absorbed water more slowly. The control film had a higher moisture content due to the abundance of hydrophilic groups in sodium alginate [37].

Water activity (aw) is an important factor that affects the quality and shelf life of food products during processing and storage [51]. The parameter depends on the moisture level and the interactions between water molecules and other ingredients [52]. All of the samples presented good aw values (between 0.29 and 0.40) and were thus considered not susceptible to microbial growth [53]. The values increased with the amount of oilcake added; this may be due to the increasing content of components (proteins, fiber and carbohydrates) that retain more water [54].
An important requirement for edible films is their solubility, because the potential use of the films (e.g., the encapsulation of food additives or the maintenance of product integrity) depends on it. WS reflects the water resistance of the films. The solubility obtained for the films was high. When sunflower oilcake was added, a significant (p < 0.05) decrease in solubility was observed; thus, the lowest value was found for SFOC5. The decrease may be related to the increase in solid components, especially fats and fibers. Other reasons can be the differences in thickness and the inhomogeneous structure of the membranes [54].

Barrier Properties

The barrier properties of the films are presented in Table 4. WVP is an indicator of the membrane's capacity to prevent moisture transfer; a large value negatively influences the quality and shelf life of foods [55]. This factor is influenced by thickness, components, humidity and water activity [37]. WVP was significantly (p < 0.05) affected by the amount of SFOC added: the values decreased from 1.98 × 10−4 to 1.13 × 10−4 g·mm/(kPa·h·m²), indicating an improvement in the vapor barrier properties of the films. The decrease may be due to the higher concentration of oilcake, which makes the films thicker and denser. The values found were lower than those reported by Suput et al. (2018) [24] and Hromis et al. (2019) for films based on whole sunflower and pumpkin oilcakes. As expected, the lowest WVTR value was recorded for the sample with the highest sunflower oilcake addition.
The low moisture barrier of the control sample is caused by the behavior of alginate, which accelerates water uptake and the permeability and transmission rate of water vapor [56]. The peroxide value (PV) obtained for the uncovered oil was 7.67 meq O2/kg, higher than the PV values obtained for the oil covered with the obtained membranes, which indicates that SFOC could block oxygen transmission. The PV of the oil covered with the developed films ranged from 2.24 to 4.83 meq O2/kg; the increase with SFOC content may be related to the high fat content of the oilcake.
The results indicate that all of the membranes formed a good barrier against atmospheric oxygen, protecting the foods to be packaged from unwanted oxidation reactions.
Oil Permeability (OP)
The OP analysis was done to investigate the practical application of the obtained membranes in fatty-food packaging. As shown in Table 4, the OP of the control film was 0.027 g × mm × m⁻² × day⁻¹. After adding SFOC, the values ranged from 0.017 to 0.034 g × mm × m⁻² × day⁻¹. No significant difference (p > 0.05) was found between the control sample and the membranes with 0.1 g and 0.2 g of SFOC. On the contrary, significant differences (p < 0.05) were found when the addition of flour increased (0.4-0.5 g). The results obtained were much lower than those obtained by Cao et al. [42] for edible films made of cassia gum reinforced with carboxylated cellulose nanocrystal whiskers (0.5, 0.1, 0.064 and 0.067 g × mm × m⁻² × day⁻¹). Therefore, SFOC-based films are promising for the packaging of oil-rich foods.
Moreover, no stains were observed on the filter paper, which confirms the excellent barrier properties of the obtained membranes against grease. The alginate-based films are highly hygroscopic; thus, the oil, a lipophilic substance, does not dissolve in the alginate films [9,56].
Thickness, Density, Tensile Strength and Hardness
The thickness values varied from 0.029 to 0.044 mm, showing a significant difference (p < 0.05) between the SFOC and control films. The thickness increased with increasing SFOC amount. This increase may be due to the increase in solid content (SFOC), differences in structure and chemical composition (the high fiber content of SFOC), and the hydrophobic character of the film constituents. The density of the films decreased from 1.45 g/cm³ to 1.04 g/cm³ with the addition of SFOC, owing to the increase in thickness.
The results for the mechanical characteristics are shown in Table 5. The tensile strength differs from that reported in other studies (1.11 MPa [37], 1.23 MPa [57]); this may be due to the film-preparation method (dry matter content, composition, mixing time, drying parameters and thickness) [56]. With increasing SFOC content, a significant decrease in the tensile strength and hardness of the films was observed.
Color Analysis
The optical properties of a food packaging material are important because consumer acceptance of the products depends on them. The color values for the control and SFOC films are presented in Table 6. The SFOC films showed a darker color than the control film (L* values decreased); the differences were significant at p < 0.05. The amount of SFOC significantly influenced (95% confidence level) the lightness of the samples (Figure 1).

Regarding the values obtained for the chromatic coordinates a* and b*, significant differences (p < 0.05) were observed between the control and the SFOC films. With increasing amounts of SFOC, a* values decreased while b* values increased significantly (p < 0.05). Negative a* and positive b* coordinates imply a predominant greenish/yellowish coloration in the films. The a* value may be related to the predominance of SFOC or to its chemical composition, in particular the presence of pigments in the SFOC flour.

The total color differences were determined to see whether the addition of oilcake influenced the overall membrane coloration. The values increased significantly (p < 0.05) from 1.74 to 25.77, demonstrating that the color difference is distinguishable with the naked eye.

UV-VIS Spectra

The sensory and nutritional qualities of food products can be altered by exposure to light. The requirements for the UV-VIS spectral properties of packaging materials are low transmittance of UV radiation (thus increasing the lifespan of the packaged foodstuff) and high transparency in the visible region (to provide consumers with visual control). Protection against UV radiation is the most important requirement, because radiation can also cause deterioration of the packaging material [58].

Compared to the control sample, the SFOC films presented good light absorption between 200 nm and 400 nm. Since this range falls within the UV spectrum, the samples have the ability to protect products against UV radiation. As shown in Figure 2, the pattern of light absorption in the UV region was: SFOC5 > SFOC4 > SFOC3 > SFOC2 > SFOC1 > control. In the visible region (400-800 nm), all samples presented low absorption values, thus providing good visual access. Moreover, the absorption of the SFOC membranes in the visible region was higher than that of the control sample, due to the darkening of the membranes as SFOC was added.

When SFOC was added, the transmission values in the UV region decreased significantly (p < 0.05). In our study, the lowest transmission, and thus the best UV barrier, was found in the sample with the 0.5 g addition.
Transparency is an important property for marketing, because a transparent material is usually more attractive to consumers [59]. The transparency and opacity values are summarized in Table 5. The addition of oilcake had a statistically significant (p < 0.05) impact on these properties: higher addition levels resulted in lower transparency and, correspondingly, higher opacity. These results are in accordance with the transmittance values, so the more opaque films could be used to protect foods against light.
FT-IR

The location and intensity of the characteristic absorption peaks of the individual ingredients (pure alginate, glycerol and SFOC powder), as well as of the control and SFOC membranes, are shown in Figure 3. The FT-IR spectra of all of the obtained membranes presented four absorption bands in three spectral zones, namely 3500-3200 cm−1, 3000-2800 cm−1 and 1000-1030 cm−1, which can be associated with the stretching of O-H, C-H (symmetric and asymmetric) and C-O-C bonds; these are the characteristic broad bands of the alginate structure [60]. Some of the peaks shifted to lower (2927.14 cm−1 to 2920.29 cm−1) or higher (from 323.50 cm−1 to 328.52 cm−1 and from 1023.46 cm−1 to 1024.51 cm−1) wavenumbers with increasing SFOC, which is indicative of interactions between the sodium alginate and the oilcake flour. Moreover, in comparison with the control sample, the SFOC films showed an additional peak at 1743-1744 cm−1, possibly attributed to the carbonyl ester group present in lipid molecules [61,62]. The films exhibited a pronounced absorbance band between 1600 cm−1 and 1300 cm−1, which corresponds to the asymmetric and symmetric stretching of the COO bond [37].

The spectrum of pure glycerol exhibited characteristic absorption bands at 850.14, 921.60, 992.58, 1028.07 and 1107.46 cm−1, possibly corresponding to C-H vibrations and C-O linkage and stretching.

The spectrum of SFOC showed absorption bands at 1634.51 cm−1 and 1540.32 cm−1, corresponding to the amide I and II regions, respectively [63]. Other absorption peaks present in the spectrum are 1743.46, 1237.75 and 1033.97 cm−1, which may correspond to COO (for unconjugated cellulose) and C-O stretching [64]. Peaks between 2900 and 3000 cm−1 may correspond to the asymmetric stretching of CH2 and CH, characteristic of hemicellulose and cellulose [65].

Appearance and Morphology

To the naked eye, the control film was colorless and transparent, while the SFOC-based films were brownish, very shiny and had a slight sunflower fragrance. All films presented a homogeneous structure, were easy to handle and were resistant when manipulated by hand. These appearance characteristics were also observed by Lazic et al. (2020) when they developed SFOC-based films [16].

SEM analysis was conducted to observe the arrangement of the film components and the morphological differences of the developed membranes when SFOC was added [66].

The SEM micrographs of the control and SFOC composite films are presented in Figure 4. The control sample presented a smooth and uniform structure; in contrast, Luo et al. (2019) [37] observed several cracks and bulges in plain sodium alginate films. With the addition of sunflower oilcake, the films showed rougher, but still continuous, structures. The roughness increased with the amount of SFOC added, due to the increase in fibrous particles. Moreover, no pores or ruptures were found in the cross-sections of any of the membranes, indicating that the films were dense and continuous.
Such a compact structure may contribute to the low WVP values and good mechanical properties [37].
Microbiological Stability
Microbial contamination is the main cause of the spoilage and unacceptability of different food products [67]. According to the results presented in Table 7, the films with SFOC showed high microbiological stability. No coliforms, Enterobacteriaceae, E. coli, Salmonella, Staphylococcus aureus or Listeria developed on the culture media. Regarding the total count, the highest values were found for SFOC5 due to its high content of vegetable material. All of the values obtained were within the permissible limits set by food safety and standards regulations (FSSAI), European regulations, and the Food and Drug Administration (FDA). SFOC5 also showed slightly more microorganisms (Enterococcus, yeasts, molds and Bacillus cereus) than the other samples (by precisely 1 CFU), due to its increased SFOC content, but the values were far below the maximum limits allowed by the FSSAI and FDA.
Moreover, we observed that the plain sodium alginate film also exhibited antimicrobial ability against nine microorganisms. The results obtained for E. coli and S. aureus were in accordance with those of a previously cited study [37].
The microbial stability of the films is also due to the presence of sodium alginate, which provides general protection that increases the resistance against microbial agents [68,69].
Statistical Analysis
The relationships between the water-affinity properties, barrier characteristics, optical properties, density and thickness are presented in Figure 5. The two principal components explained 96.79% of the total variance (PC1 = 80.66% and PC2 = 16.12%). PC1 was associated with the optical properties (L*, a*, b*, ΔE*, opacity, transparency and transmittance), water-affinity properties (aw, moisture, time of solubility, solubility), barrier properties (WVTR and WVP), density and thickness. Only OP and PV were associated with PC2. Regarding the samples, good relationships were observed between SFOC1 and SFOC2 and between SFOC4 and SFOC5.
High positive correlations were found between the a*, b*, ΔE*, opacity, time-of-solubility and thickness parameters. Other positive correlations were found between moisture, WVTR and density, between transmittance and transparency, and between L* and solubility. Negative correlations were found between the optical, water-affinity and barrier properties.
Conclusions
Sunflower oilcake obtained after the cold extraction of oil was investigated as a potential raw material for edible films. The use of sunflower oilcake improved the films' properties and their nutritional value.
The results showed that, with the addition of SFOC, the thickness, water activity, time of solubility, and oxygen and oil permeability increased, while the moisture, solubility and water vapor permeability decreased. The SFOC composite films exhibited high absorption of UV radiation, thus protecting foodstuffs against photochemical reactions. On the other hand, the absorption in the visible region increased, indicating a decrease in film transparency. Regarding the structure, the membranes were homogeneous and compact, without pores or cracks. Moreover, the films showed microbial stability against six tested microorganisms, which makes them safe to be consumed directly with the products chosen to be packaged. The abovementioned properties make the membranes suitable for the packaging of a wide range of foods, including those susceptible to oxidative changes. Their good solubility also makes them suitable for the packaging of powdery products that need to be dissolved in hot water. Another possible application is the packaging of sliced products (meat, cheese), given the growing consumer interest in smaller portions.
Author Contributions: A.P. and S.A. contributed equally to the collection of data and preparation of the paper. All authors have read and agreed to the published version of the manuscript.
"Materials Science",
"Environmental Science"
] |
Studies of HeH: Dissociative Excitation
We have used structure and scattering calculations to determine the potential energy curves, non-adiabatic couplings and autoionization widths for the HeH system. These will be used to study a variety of processes, ranging from dissociative recombination to mutual neutralization. As an example, we present our results on the direct dissociative excitation of HeH+ by electron impact via excitation to the two lowest excited states of the ion. The results are found to be in good agreement with experiment.
Introduction
The collision of an electron with a molecular ion such as HeH + results in a number of processes. These range from elastic scattering to inelastic processes, such as vibrational and rotational excitation, and at higher energies, electron-impact excitation to a dissociative electronic state can result in direct dissociative excitation. In addition, there are a number of resonant processes. Every state of the ion can serve as a parent ion for a series of neutral states. Below the ground state, these are a Rydberg series of neutral states converging to the ion. However, if the parent state is an excited state of the ion, these states are doubly excited (Feshbach) resonances that lie below the direct dissociation threshold, and are formed when an incoming electron excites the target ion and attaches to a Rydberg orbital. Capture into these states can initiate a number of resonant processes. As the neutral molecule evolves in time, the system can autoionize, meaning the electron can be re-emitted, returning the molecule to its original electronic state. If the ion does not have enough energy to dissociate to products, the target is left in some state of vibrational excitation. However, if the ion has enough energy to dissociate, the process is resonant dissociative excitation and it provides an efficient path to dissociation at energies below the direct excitation level. If, while evolving in time, the neutral fragments reach an internuclear separation beyond which autoionization is no longer possible, the states are considered electronically bound and the result is dissociative recombination. Depending on the electron affinity and ionization potential of the final products, it may be energetically possible to have ion-pair formation.
The study of these processes requires not only an accurate treatment of the electron scattering, but also an accurate representation of the potential energy curves, both for the electronically bound states and for the resonant states. In addition, the couplings between these states, both the coupling between the resonant states and the scattering continuum (the autoionization width) and the non-adiabatic coupling between all states, are needed to completely describe the cross section, including the branching ratios into final states. These same curves and couplings mediate another series of collision processes, such as Penning ionization, associative ionization and mutual neutralization.
We have focused on the determination of accurate potential energy curves for the ion, Rydberg and resonant states of the HeH system, as well as the autoionization widths and non-adiabatic coupling elements between all neutral states, including the autoionizing states. This work included both structure calculations and electron scattering calculations, using the complex Kohn variational method [1,2] to obtain the autoionization widths and T-matrix elements (used for the direct excitation cross-section calculations). Details of these structure and scattering calculations can be found in [3].
As an example, we will present the results of our calculations on direct dissociative excitation.
One of the earliest published experimental studies on dissociative excitation (DE) of HeH+ is the work of F. B. Yousif and J. B. A. Mitchell from 1989 [4]. In this work, the dissociative recombination (DR) and DE processes of HeH+ were studied using a merged-beam method. The cross sections for DE were reported in the 0-40 eV energy range. The results showed an excitation-energy threshold at about 20 eV under low extraction conditions, where the ions are believed to be mainly in the ground electronic state. A series of sharp and very narrow peaks in the cross section was detected in the 20-26 eV energy region. The narrowness of the peaks was suggested to originate from a process in which the electron is trapped instantaneously into doubly excited neutral resonant states.
The findings of Yousif and Mitchell prompted a theoretical study [5]. In this work, the DE of HeH+ was studied in the 20-26 eV energy region using the complex Kohn variational method. Excitation cross sections for the X 1Σ+ → a 3Σ+ transition were computed in overall 2Σ+ and 2Π symmetries, as well as the total cross section at the equilibrium separation (R0 = 0.77 Å). The calculation of the fixed-nuclei cross section resulted in a series of sharp peaks on a rather flat background. Closer inspection showed that most of the peaks were Feshbach resonances associated with energetically closed Rydberg states in this energy region. One of the peaks, situated at 24 eV, did not belong to this category but proved to be a core-excited shape resonance. Further, it was shown that an autoionization process from a doubly excited state, as suggested by Yousif and Mitchell, was not a viable explanation for the narrowness of the peaks observed in the experiment. The computations in 2Σ+ symmetry were also performed at R = R0 ± 0.05 Å in order to investigate how the cross section responds to changes in the internuclear distance. These calculations showed that the widths of the resonance peaks and the value of the background cross section remained almost unchanged, while the positions of the peaks shifted with the excitation energy of the X 1Σ+ − A 1Σ+ transition. The direct DE cross section was computed by integrating the fixed-nuclei inelastic cross section over the square of the vibrational wave function of the target ion, which smoothed out the sharp peaks observed in the fixed-nuclei cross section.
A second experimental study of the DE of HeH+ was performed by C. Strömholm et al. [6]. In this work, the DR and DE processes for HeH+ were studied and the absolute cross sections were determined for energies below 40 eV. The experiments were performed using the CRYRING ion storage ring at the Manne Siegbahn Laboratory at Stockholm University. Contrary to the cross section obtained by Yousif and Mitchell, it was found here that the absolute cross section for the direct DE process was essentially constant in the 21-37 eV energy region. Furthermore, an alternate DE pathway was found, with an energy threshold already at 10 eV. In this reaction, the electron is captured into a neutral doubly excited state which autoionizes into He + H+. This is resonant dissociative excitation, which competes with the DR process.
The direct DE cross sections for HeH+ from the above-mentioned studies are displayed in Fig. 1.
Theoretical formulation
In the work of Orel et al. [5], a time-independent expression for the total cross section was derived by means of a delta-function approximation, using the fixed-nuclei excitation cross sections. We now give a more detailed description of this expression.
The fixed-nuclei excitation cross sections are given by [5]

$$ \sigma^{\Lambda}_{n n_0}(E,R) = \frac{\pi}{k^2} \sum_{\ell m,\,\ell_0 m_0} \bigl| T^{\Lambda}_{n \ell m,\, n_0 \ell_0 m_0}(E,R) \bigr|^2 . \qquad (1) $$

Here $T^{\Lambda}_{n \ell m,\, n_0 \ell_0 m_0}(E,R)$ is the fixed-nuclei T-matrix element on the energy shell. We assume that a time-independent expression for the DE excitation cross section can be formed by applying the adiabatic-nuclei approximation [7,8],

$$ \sigma(E) = \sum_{\Lambda} \frac{\pi}{k^2} \sum_{\ell m,\,\ell_0 m_0} \int dE' \, \bigl| \langle \psi_{E'} \,|\, T^{\Lambda}_{n \ell m,\, n_0 \ell_0 m_0}(E,R) \,|\, \chi_{v_0} \rangle \bigr|^2 . \qquad (2) $$

Here $\psi_{E'}$ is an energy-normalized continuum function, $\Lambda$ refers to the overall symmetry of the scattering, and $n = 0, 1, 2$ labels the electronic states of the target; in this work, $0 \to 1$ and $0 \to 2$ scattering is studied. $E$ is the scattering energy and $E'$ is the energy of the dissociative nuclear state, so the energy of the ejected electron is $E - E'$. $E_0$ is the asymptotic energy of the repulsive potential energy curve. Further, we approximate the energy-normalized continuum wave function by a Dirac delta function (details of this approximation can be found in [9] and references therein),

$$ \psi_{E'}(R) \approx \Bigl| \frac{dU}{dR} \Bigr|^{-1/2}_{R = R_{E'}} \delta(R - R_{E'}) , \qquad (3) $$

where $U(R)$ is the potential energy curve of the excited ionic state and $R_{E'}$ is the classical turning point at energy $E'$. Inserting Eq. (3) into Eq. (2) yields

$$ \sigma(E) = \sum_{\Lambda} \frac{\pi}{k^2} \sum_{\ell m,\,\ell_0 m_0} \int dE' \, \bigl| T^{\Lambda}_{n \ell m,\, n_0 \ell_0 m_0}(E, R_{E'}) \bigr|^2 \bigl| \chi_{v_0}(R_{E'}) \bigr|^2 \Bigl| \frac{dU}{dR} \Bigr|^{-1}_{R = R_{E'}} . \qquad (4) $$

Hence, we can write Eq. (4) as

$$ \sigma(E) = \sum_{\Lambda} \int dE' \, \sigma^{\Lambda}_{n n_0}(E, R_{E'}) \, \bigl| \chi_{v_0}(R_{E'}) \bigr|^2 \Bigl| \frac{dU}{dR} \Bigr|^{-1}_{R = R_{E'}} . \qquad (5) $$

Making the change of variables

$$ E' = U(R), \qquad dE' = \Bigl| \frac{dU}{dR} \Bigr| \, dR , \qquad (6) $$

yields the expression

$$ \sigma(E) = \sum_{\Lambda} \int dR \, \bigl| \chi_{v_0}(R) \bigr|^2 \, \sigma^{\Lambda}_{n n_0}(E, R) . \qquad (7) $$

From Eq. (7) we see that the total fixed-nuclei excitation cross section is multiplied by the square of the vibrational wave function of the initial state of the target and integrated over the internuclear distance R. This is the formula used by Orel et al. [5].
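Equation (7) lends itself to straightforward numerical quadrature once the fixed-nuclei cross section and the vibrational density are tabulated on a common R-grid. The sketch below uses entirely schematic stand-ins for both quantities (a Gaussian vibrational density and a made-up cross-section surface), only to illustrate the averaging step:

```python
import numpy as np
from scipy.integrate import trapezoid

# Schematic evaluation of Eq. (7): average the fixed-nuclei cross section
# over the vibrational density of the target. All inputs are placeholders.
R = np.linspace(1.0, 2.2, 200)                    # internuclear distance (a0)
chi2 = np.exp(-((R - 1.45) / 0.12) ** 2)          # |chi_v0(R)|^2, schematic
chi2 /= trapezoid(chi2, R)                        # normalize to unity

def sigma_fn(E, R):
    """Stand-in for the tabulated fixed-nuclei cross section (cm^2)."""
    return 1e-17 * np.exp(-((E - 25.0) ** 2) / 50.0) * (1 + 0.1 * (R - 1.45))

E = 25.0                                          # scattering energy (eV)
sigma_de = trapezoid(chi2 * sigma_fn(E, R), R)    # Eq. (7)
print(f"sigma_DE({E} eV) = {sigma_de:.3e} cm^2")
```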
Results
In order to calculate the total cross section using Eq. (7), the fixed-nuclei inelastic excitation cross section at each internuclear distance, σ^Λ_nn'(E, R), is needed. These calculations were carried out using the complex Kohn variational method [1,2]. The calculations used the aug-cc-pVQZ [10] basis set for He and the aug-cc-pVTZ [11] basis set for H. One extra diffuse d-function was also added on He, resulting in a total of 106 functions. Using these basis sets, an SCF calculation on the ionic ground state was performed. Then a full CI calculation was performed on the three lowest excited states of the ion. From the full CI, natural orbitals were computed. All possible excitations of the three electrons within the ten lowest natural orbitals form the reference configurations for the MRCI calculation; additional single external excitations were also included. From each calculation, the total fixed-nuclei elastic and inelastic scattering cross sections, T-matrix elements, etc., were obtained. In the present study, scattering with partial waves ℓ ≤ 6 and |m| ≤ 4 is included. The calculations were done on a grid of internuclear distances R, and σ^Λ_nn'(E, R) was computed for the complete E-grid at each R. The fixed-nuclei inelastic cross sections at 1.45 a0 (near the equilibrium distance) for A1 symmetry are displayed in Fig. 2 as the solid lines.
As discussed above, along with direct DE, which dissociates into He+ + H, there are simultaneous resonant processes in which the excited electron is temporarily captured into a Rydberg state from which it eventually autoionizes to He + H+. The output of the scattering calculations includes both the direct and the resonant processes. Thus, to describe the full reaction mechanism exactly, we would have to include both direct DE and the resonant processes, as well as any interactions of the Rydberg states with the continuum. However, since the experiment only measures the direct DE cross section, the contribution from the resonant states can be removed. There are more and less sophisticated methods for dealing with the resonances. Here, a somewhat "brute force" method is employed, consisting of removing any data points in the scattering output that show resonance behavior. The data set without the resonances is then splined onto the same R-grid as the original data. These results at 1.45 a0 (near the equilibrium distance) for A1 symmetry are displayed in Fig. 2 as the dashed lines. Using the fixed-nuclei excitation cross sections σ^Λ_nn'(E, R), calculated with the full T-matrix obtained from the complex Kohn variational method, we compute an averaged cross section using Eq. (7), both with and without removing the resonances. These results are shown in Fig. 3. There are sharp structures from the resonant states over the entire energy interval; the removal of the resonances as described above produces a smooth total cross section.
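The "brute force" resonance removal amounts to flagging sharp features and re-interpolating the smooth background. A schematic version with synthetic data follows; the spike, the gradient threshold and the spline choice are all assumptions, not the procedure's actual parameters:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic fixed-nuclei cross section: smooth background plus one sharp,
# fake resonance, to demonstrate the point-removal-and-respline idea.
E = np.linspace(20.0, 26.0, 601)                  # energy grid (eV)
sigma = 1e-17 * (1 + 0.02 * (E - 20.0))           # smooth background (cm^2)
sigma[np.isclose(E, 24.0, atol=0.01)] += 5e-17    # inject a narrow peak

jump = np.abs(np.gradient(sigma, E))              # local steepness
keep = jump < 10 * np.median(jump)                # drop resonance points
smooth = CubicSpline(E[keep], sigma[keep])(E)     # respline on original grid
print(smooth.max() / sigma.max())                 # peak suppressed
```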
The total cross section obtained in this manner is compared with the experimental results of Strömholm et al. [6] in Fig. 4. Our theoretical result shows quite good agreement with the experimental one.
Conclusions
Our results on the direct dissociative excitation of HeH+ by electron impact, via excitation to the two lowest excited states of the ion, show good agreement with experiment. Future calculations to explore other processes in this system, such as resonant dissociative excitation, dissociative recombination and mutual neutralization, are planned.
"Physics"
] |
SDN-assisted Service Placement for the IoT-based Systems in Multiple Edge Servers Environment
Edge computing has proven to be an effective solution for Internet of Things (IoT)-based systems. Bringing resources closer to the end devices has improved the performance of networks and reduced the load on the cloud. On the other hand, edge computing has constraints related to the amount of resources available on the edge servers, which is limited compared with the cloud. In this paper, we propose a Software-Defined Networking (SDN)-based resource allocation and service placement system for multi-edge networks that serve multiple IoT applications. In this system, the resources of the edge servers are monitored using the proposed Edge Server Application (ESA) to determine the state of each edge server and, therefore, the services it can accept. Benefiting from the information gathered by ESA, the service offloading decision is taken by the proposed SDN Non-core Application (SNA) in a way that ensures efficient load distribution and better resource utilization across the edge servers. The Weighted Aggregated Sum Product Assessment (WASPAS) method was used to determine the best edge server. The proposed system was compared with a non-SDN system and showed improvements in performance and in the utilization of the edge servers' resources. Furthermore, the request handling time was considerably reduced and settled at constant rates for different numbers of devices.
Introduction
Recent years have witnessed vast growth in information and communication technologies. The Internet of Things (IoT) is considered the cornerstone for the development of the smart city, smart grid, smart factory, smart healthcare, etc. Using IoT technology, different devices can connect and share information [1,2]. Every day, a huge number of IoT devices generate a massive amount of data that need to be stored, processed and analyzed. As the resource capabilities of these devices are very limited, cloud computing has provided the resources needed to process and store these data [3]. However, cloud computing has some limitations related to transmission latency and the allocation of resources [4]. The location of the cloud Data Centers (DCs) and the conditions of the network connection can affect transmission latency, power consumption and bandwidth utilization, and therefore the Quality of Service (QoS) and user experience [5]. As a solution to these problems, edge computing has been introduced. Edge computing supports cloud computing by bringing resources closer to the network edge [6]. Instead of sending all the IoT devices' data to distant cloud DCs, some of the data can be processed in local distributed edge servers, fog nodes or cloudlets. This reduces the load on the cloud DCs and ensures better latency, better link utilization and more efficient energy consumption [7].
For further improvement in the performance of such networks, Software-Defined Networking (SDN) is used. SDN is a technology that decouples the control plane from the data plane and centralizes the network intelligence in a single component called the SDN controller. The SDN controller is responsible for specifying the flows in the network; hence, it can improve traffic distribution in the edge network and reduce the load on the cloud DC. Furthermore, the SDN controller can take responsibility for service discovery and service placement at the edge by discovering suitable edge servers. This relieves some of the burden on the edge servers and the end devices and, therefore, improves the performance of the network and increases the QoS [8].
As expected, edge servers have limited hardware capabilities compared with a cloud DC. Using virtual machines (VMs) to run microservices or services with limited resource needs may lead to inefficient resource utilization; for this reason, virtualization might not be the best mechanism to use. Containerization, which is OS-level virtualization, can be the proper alternative. With containerization, microservices/services can run in containers and consume only the required amount of resources, hence improving the overall hardware utilization of edge servers [9]. This paper presents an SDN-based resource allocation and service placement mechanism in an Edge-Cloud environment. In the proposed system, the SDN controller is responsible for offloading services to an edge server or to the cloud. The suitable destination for offloading a service is determined according to the priority of the service, the resource usage of the edge servers, and the load on the edge servers. By leveraging the containerization mechanism, edge servers run services/microservices inside containers; Docker containers are used as the containerization platform. The main contributions of this work are:
• Reducing the total time for handling IoT service requests by presenting an efficient SDN-assisted offloading mechanism to determine the most convenient edge server for each service/microservice using multiple-criteria decision-making (MCDM) algorithms.
• Specifying the server state, which determines the acceptable and suspended services for each edge server, by monitoring the resource usage of the edge servers.
• Ensuring efficient service distribution and load balancing between edge servers to improve the performance of the services/microservices and to grant better utilization of the edge servers' resources.

The rest of the paper is organized as follows. Section 2 presents a summary of the related works. Section 3 presents the proposed system architecture and the tools and technologies used in the system. A detailed description of the system implementation and methodology is given in Section 4. Results and discussion are presented in Section 5, and the conclusion is drawn in Section 6.
Related Works
The literature is rich with studies of edge computing and the benefits of using it together with cloud computing, and almost all of them show that edge computing improves the performance of IoT-Cloud platforms. The authors of an earlier work [10] presented a fog-based IoT healthcare system whose results showed an improvement in network delay and energy consumption. Other authors [11,12] introduced smart home systems based on a Fog-Cloud environment. In another study [13], the authors suggested a smart campus system to enhance real-time service provisioning and application management. With Edge-Cloud computing, problems of resource allocation and service placement in the edge network have appeared, and many works have addressed these subjects. For example, Minh et al. [1] presented a service placement approach in a Fog-Cloud environment in which services can be processed in the cloud, the fog, or the IoT devices, depending on their requirements; the approach showed a reduction in latency, energy consumption and network load. Lui et al. [14] proposed a multi-objective Mixed Integer Linear Programming (MILP) model to select an optimal cloudlet from multiple cloudlets, preferring the nearest cloudlet with the highest mean reward and the lowest latency; the results were measured in terms of storage and bandwidth. In another investigation [15], Xu et al. suggested a model that divides services into multiple subsets according to their request start times and detects spare capacity in the computing nodes; nodes with the lowest, yet sufficient, spare capacity are selected. Furthermore, they proposed a load balancing scheme in which workloads can be migrated from computing nodes with high resource usage to others with lower resource usage. In another work [16], the authors proposed a VM scheduling method in a Fog-Cloud environment with VM live migration for load balancing. An application-aware workload allocation scheme was also proposed [17], in which the applications handled by VMs are optimally allocated to the closest suitable cloudlets; the results showed an improvement in response time. Zhao et al. [18] presented edge resource allocation algorithms for multiple applications to minimize the average service response time; of the three algorithms compared, the Clustering-Based Heuristic Edge Resource Allocation (CHERA) algorithm was preferred due to its higher computational efficiency.
Other studies suggested the use of SDN to reduce network congestion and delay. Aujla et al. [19] presented a workload slicing scheme and an energy-aware inter-DC migration control scheme using SDN with the Stackelberg game to provide optimal inter-DC migration; the results were evaluated in terms of energy consumption, delay, Service-Level Agreement (SLA) violation, and migration rate. Other authors [7] proposed a healthcare system using SDN for forward/reverse data offloading and flow management across multi-region edge DCs; the results were measured in terms of delay, complexity, and number of handovers.
System Architecture
The system architecture is composed of three layers, as shown in Figure-1. The first layer contains various types of IoT devices. The second layer contains edge servers with different hardware capabilities distributed in a local region close to the IoT devices; it also contains an SDN controller that is responsible for flow management and service offloading among the end devices, the edge servers, and the cloud. The third layer contains the cloud DC, which is assumed to have huge resources.
Edge Servers with Containerization Mechanism
Recent years have seen a transformation in the style of application development, from monolithic stand-alone applications to microservices. Microservices are an architectural style that splits a single application into small parts, each of which runs as an independent process [20,21]. Every microservice is responsible for a specific task. Microservices can be executed on multiple machines and communicate with each other using specific APIs [22]. For such cases, containerization is considered the best solution for running microservices/services with limited needs. Containers are faster and more lightweight than VMs, which ensures better resource utilization, less overhead, better versioning control, and improved overall system and network performance [23]. In the proposed system, all edge servers support Docker containerization. For each microservice/service, there is a Docker image, and these Docker images are available in the edge servers that provide the service. Using the Docker images, an edge server runs a container for each required service, as shown in Figure-2.
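To make the per-service container model concrete, the following is a minimal sketch using the Docker SDK for Python; the image name, resource caps, and naming scheme are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch: one container per requested service on an edge server,
# capped so it only consumes the resources it needs. Image names and the
# resource limits below are hypothetical.
import docker

client = docker.from_env()

def start_service_container(service_name: str, image: str, iot_ip: str):
    """Launch a container for one IoT device-service pair."""
    return client.containers.run(
        image,
        detach=True,                           # run in the background
        name=f"{service_name}-{iot_ip.replace('.', '-')}",
        mem_limit="256m",                      # hypothetical memory cap
        nano_cpus=500_000_000,                 # 0.5 CPU, hypothetical cap
        environment={"IOT_IP": iot_ip},        # tell the service who it serves
    )

# Example: an IoT device at 10.0.0.7 requests the edge-detection service.
# container = start_service_container("edge-detect", "edge-detect:latest", "10.0.0.7")
```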
SDN-based Service Management Model
In an Edge-Cloud environment that supports various types of applications with numerous services/microservices and a large number of IoT devices, the use of SDN has positive effects not only on flow management but also on service discovery and placement. Consider an IoT device that requests a service requiring a quick response time. With multiple edge servers in the proximity of the device, it is a significant challenge to determine which edge server provides the service with minimum delay. Without SDN, the IoT device would have to send the request to all reachable edge servers and wait for responses, without being able to tell which edge server is the least overloaded or has the lowest link delay. Using SDN, service discovery becomes easier: the SDN controller has complete information about the edge servers, including each server's state and the amount of available resources, and it can route traffic onto the least congested paths. Furthermore, it can monitor the state of the edge servers' resources and balance the load between them [8].
The Proposed System: Methods and Algorithms
By considering the issues and solutions discussed in the previous section, this work proposes a system that facilitates service discovery between IoT devices and edge servers in the edge network and ensures load balancing between edge servers. The following subsections present an extensive description of the proposed methods and the implementation of the system.
SDN Assisted Service Placement in Multi-Edge Environment (SASPME)
The proposed system has two parts, shown in Figure-3, which are the Edge Server Application (ESA) and the SDN Non-core Application (SNA). The following subsections describe each part of the system.
A. The Edge Server Application (ESA)
ESA runs inside each edge server. It has connections with the SNA and the IoT devices. The main parts of ESA, shown in Figure-3, are described below. 1) Edge Statistics: monitors the state of the edge server and the utilization of its resources, and gathers information about services. It has three sockets, as described below.
• SDN-Edge client socket: registers the edge server with the SNA and sends the total resource information and the available services of the edge server.
• Edge Resources Information client socket: sends information about the edge resource utilization (CPU percentage, available memory, free disk space), the number of running services, and the server state (overloaded or normal). Furthermore, it sends a list of suspended services. When the resource utilization of the edge server exceeds a specific limit, ESA checks the amount of resources utilized by each service; if a service's usage of a specific resource would raise the usage of that resource above the acceptable limit, the service is added to the suspended services list.
• Edge Services Information client socket: sends information about running and exited services (containers). The information includes the container name, IoT IP, minimum and maximum CPU usage, minimum and maximum memory usage, the container start time, minimum and maximum execution time (for exited containers), and the container state. The container state determines whether a service (container) has been running for a long time.
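As a rough illustration of the statistics the Edge Resources Information socket could report, here is a minimal sketch using the psutil library; the JSON field names are assumptions, while the overload threshold follows the 80% limit described for SNA below.

```python
# Sketch of the periodic resource report an ESA instance might send to SNA.
import json
import psutil

def collect_edge_stats(num_running_services: int, overload_limit: float = 80.0) -> str:
    cpu = psutil.cpu_percent(interval=1)    # CPU utilization sampled over 1 s
    mem = psutil.virtual_memory()           # system memory statistics
    disk = psutil.disk_usage("/")           # root filesystem usage
    state = "overloaded" if (cpu > overload_limit or mem.percent > overload_limit) else "normal"
    return json.dumps({
        "cpu_percent": cpu,
        "available_memory_mb": mem.available // (1024 * 1024),
        "free_disk_mb": disk.free // (1024 * 1024),
        "running_services": num_running_services,
        "state": state,
    })
```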
2) Service Request Manager: manages the IoT requests forwarded from the SNA; its Service Request server socket receives each forwarded request and generates a token for each IoT device-service pair. 3) IoT Connection Manager: manages the connection with IoT devices; its IoT connection server socket receives connections from the IoT devices and runs containers for services after checking the IoT token. 4) Migration Manager: has a server socket that receives a notification from SNA to reject an IoT device-service when SNA decides to migrate (horizontally offload) a service from this edge server to another edge server or to the cloud. 5) ESA Database Manager: is responsible for ESA database management (insertion, deletion, and adaptation of the data).
B. The SDN Non-core Application (SNA)
SNA is responsible for service placement in the edge-cloud network. SNA has connections with the ESA instances in the edge servers and with the IoT devices, as shown in Figure-3. The details of each part are described below. 1) SDN-Edge Connection: creates a thread that receives connections from edge servers and obtains information about the total resources and the available services of each edge server.
2) Edge Resources Information: creates a thread that receives information about the edge server's resource utilization, the number of running services, the suspended services list, and the server state. If the server is overloaded (CPU/memory usage exceeds 80%), SNA does not offload new services to this edge server and decides whether to migrate (horizontally offload) some services to another edge server or to the cloud.
3) Edge Services Information: creates a thread that collects information about the services in the edge servers to manage service migration. It has two sockets.
• Services Information server socket: receives information about running and exited services from each edge server.
• Service Migration client socket: sends a notification to the edge server to reject an IoT device connected to a specific service.
4) SDN-IoT Connection: receives connections from the IoT devices. According to the priority of the service, SNA decides whether to offload it to an edge server or to the cloud. Moreover, it chooses the best edge server according to the resource utilization, the number of running services, and the distance between each edge server and the IoT device. 5) Service Request Handler: sends the IoT device request to the best destination (edge or cloud), receives a token from the edge server, and forwards it back to the IoT device. 6) SNA Database Manager: is responsible for SNA database management (insertion, deletion, and adaptation of the data).
SASPME Offloading Schemes
The proposed system has two schemes for service offloading, namely vertical offloading and horizontal offloading. Both are described below.
A. Vertical offloading
When an IoT device requests a service for the first time, it sends a request to SNA, which estimates the best destination to offload the service. The following steps describe the vertical offloading scheme shown in Figure-4. 1. SNA should have information about the total resources of each edge server connected to the network, and it should receive periodic updates from the edge servers about server state and resource utilization. 2. IoT devices send service requests to SNA. Each request should include the name and the priority of the service; the priority determines whether a service is delay-sensitive or delay-tolerant. According to the priority of the service, SNA decides whether to offload it to an edge server or to the cloud: delay-sensitive services are offloaded to an edge server, while delay-tolerant services are offloaded to the cloud. 3. For delay-sensitive services, SNA should offload the service to the best edge server; an MCDM model is used to estimate it. 4. After estimating the best edge server, SNA forwards the IoT request to that server. When the edge server receives the request, it generates a token for this request (IoT device-service) and sends it back to SNA. 5. SNA forwards the response received from the edge server to the end device, which can then start the connection with the edge server.
6. The IoT device connects to the edge server. Upon receiving the IoT connection, the edge server checks the token; if the tokens match, the edge server starts the container of the service, and the IoT device can send its data to be processed in that container.
B. Horizontal offloading (migration)
In this case, services are migrated (horizontally offloaded) from one edge server to another or from an edge server to the cloud. The following steps describe the horizontal offloading scheme shown in Figure-5. 1. When the state of an edge server changes to overloaded, SNA checks the services currently running on that edge server. Services that have been running for a long time are migrated (horizontally offloaded) to the cloud if they consume a large amount of resources, or to another edge server if they do not. Otherwise, no new services are offloaded to that edge server until its state changes back to normal. 2. After the decision has been taken to migrate a service, SNA informs the edge server by sending the service name and the IP of the IoT device that requested the service; the edge server then rejects any future connection from this IoT device to that specific service. 3. When the edge server rejects the IoT device connection, the IoT device requests the service again from SNA, which offloads the service to a new edge server or to the cloud according to the previous decision.
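The migration decision just described reduces to a small rule: long-running, resource-hungry services go to the cloud; long-running but light services go to another edge server; everything else stays put. The sketch below encodes this rule; the runtime and CPU thresholds are illustrative assumptions, not values from the paper.

```python
# Sketch of the SNA-side horizontal offloading (migration) decision.
def migration_target(service: dict, long_run_seconds: int = 120, high_cpu: float = 50.0):
    """Return where a service on an overloaded edge server should go,
    or None if it should stay (only new offloads are then blocked)."""
    if service["runtime_s"] < long_run_seconds:
        return None                 # not long-running: leave it in place
    if service["max_cpu"] >= high_cpu:
        return "cloud"              # resource-hungry services go to the cloud
    return "other_edge"             # light services migrate to another edge

# svc = {"name": "rsa", "runtime_s": 300, "max_cpu": 72.0}
# migration_target(svc)  # -> "cloud"
```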
Edge Servers' State Determination and Suspended Services Selection
Exhausting the resources of an edge server may degrade the performance of all the running services. Therefore, monitoring the resources in the edge servers helps SNA to determine the state of the edge servers and to balance the load between them. Considering a multi-edge-server environment, let $E$ be the set of edge servers connected to the network, where $e_i$ is an element of the set and $n$ is the number of elements in $E$. Let $S$ be the set of services provided by each edge server in $E$, where $s_i$ is an element of the set and $m$ is the number of elements in $S$. The memory and CPU usage of each $e_i$ in $E$ are denoted by $Me_i$ and $Ce_i$; it is important to ensure that $Me_i$ and $Ce_i$ do not reach high limits, so ESA periodically checks these values. As presented in Algorithm-1, when $Me_i$ and $Ce_i$ exceed 70% of their limits, ESA determines which services should be suspended according to their resource needs. The decision is taken according to $j_{s_i}$, $k_{s_i}$, $l_{s_i}$, and $m_{s_i}$, which represent the maximum memory, minimum memory, maximum CPU, and minimum CPU utilization of service $s_i$, respectively. Services added to the suspended list $SL$ are sent to SNA. $MF$ and $CF$ are flags that indicate the state of the memory and CPU in an edge server: when $Me_i$ or $Ce_i$ exceeds the limits specified in Algorithm-1, $MF$/$CF$ is set to True and the state of the edge server becomes "overloaded"; otherwise, if both flags are False, the state of the edge server is "normal".
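A minimal sketch of this state determination is given below; it assumes the 70% suspension check from the text and the 80% overload limit used by SNA, since the exact limits of Algorithm-1 are not fully reproduced in the source. All usage values are percentages.

```python
# Sketch of Algorithm-1: suspend services whose peak demand would push a
# resource past the acceptable limit, and raise MF/CF overload flags.
def check_edge_state(mem_pct: float, cpu_pct: float, services: list,
                     suspend_at: float = 70.0, overload_at: float = 80.0):
    suspended = []                           # SL: suspended services list
    if mem_pct > suspend_at or cpu_pct > suspend_at:
        for s in services:
            # s["max_mem"] / s["max_cpu"]: peak memory/CPU (%) seen for this service
            if (mem_pct + s["max_mem"] > overload_at
                    or cpu_pct + s["max_cpu"] > overload_at):
                suspended.append(s["name"])
    MF = mem_pct > overload_at               # memory flag
    CF = cpu_pct > overload_at               # CPU flag
    state = "overloaded" if (MF or CF) else "normal"
    return state, suspended
```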
Estimation of the Best Edge Server
In SASPME, SNA periodically receives information about resource utilization from all edge servers. To estimate the most suitable edge server for an IoT service request, SNA decides according to the available resources, the load on the edge servers, and the distance between the edge servers and the IoT device. To compare such divergent types of data, MCDM algorithms are used; with MCDM, multiple alternatives are evaluated and ranked according to multiple criteria [24,25]. In the proposed system, the edge servers (the alternatives) are ranked according to the number of running services, CPU usage, available memory, free disk space, and the distance between the edge servers and the IoT device. The inputs of the decision matrix are shown in Table-1. Because the criteria are of different types, the matrix must first be normalized to make them comparable; the normalization procedure used in this work is the same as previously described [24]. For beneficial criteria, i.e., available memory and available storage, where higher values are desired, we have

$\bar{x}_{ij} = \dfrac{x_{ij}}{\max_i x_{ij}}$ (1)

For non-beneficial criteria, i.e., the number of running services, CPU percentage, and distance, where lower values are desired, we have

$\bar{x}_{ij} = \dfrac{\min_i x_{ij}}{x_{ij}}$ (2)

In this work, the weighted aggregated sum product assessment (WASPAS) method is used. This method is a combination of the Weighted Sum Method (WSM) and the Weighted Product Method (WPM), which are detailed below.
A. Weighted Sum Method (WSM)
It is a simple method in which each criterion $j$ ($j = 1, \ldots, k$, where $k$ is the number of criteria) has a specific weight $w_j$, and the weights sum to one. The score of alternative $i$ is calculated according to the following equation.
$Q_i^{(1)} = \sum_{j=1}^{k} \bar{x}_{ij} w_j$ (3)

The results are sorted in descending order, and the highest value represents the best choice.
B. Weighted Product Method (WPM)
It is similar to WSM, with two differences: multiplication is used instead of addition, and each criterion is raised to the power of its weight, as shown in the equation below.
$Q_i^{(2)} = \prod_{j=1}^{k} (\bar{x}_{ij})^{w_j}$ (4)

After sorting the results in descending order, the highest value represents the best choice.
C. Weighted Aggregated Sum Product Assessment Method (WASPAS)
This method is a combination of WSM and WPM. The following equation presents a joint generalized criterion of weighted aggregation of the additive and multiplicative methods:

$Q_i = 0.5\, Q_i^{(1)} + 0.5\, Q_i^{(2)} = 0.5 \sum_{j=1}^{k} \bar{x}_{ij} w_j + 0.5 \prod_{j=1}^{k} (\bar{x}_{ij})^{w_j}$ (5)

As with the previous methods, the results are sorted in descending order, and the alternative (edge server) with the highest score is the best choice.
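The WASPAS ranking of Eqs. (1)-(5) can be written compactly with NumPy, as in the sketch below; the decision matrix values and criterion weights are illustrative, not taken from the paper.

```python
# NumPy sketch of WASPAS: rows are edge servers, columns are the five criteria.
import numpy as np

def waspas_rank(X, weights, beneficial):
    """X: (servers x criteria) decision matrix; beneficial[j] is True when
    higher values of criterion j are better (e.g., available memory)."""
    Xn = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        if beneficial[j]:
            Xn[:, j] = X[:, j] / X[:, j].max()   # Eq. (1): x / max
        else:
            Xn[:, j] = X[:, j].min() / X[:, j]   # Eq. (2): min / x
    q_wsm = (Xn * weights).sum(axis=1)           # Eq. (3): weighted sum
    q_wpm = np.prod(Xn ** weights, axis=1)       # Eq. (4): weighted product
    return 0.5 * q_wsm + 0.5 * q_wpm             # Eq. (5): WASPAS score

# criteria: [running services, CPU %, available memory (MB), free disk (MB), distance]
X = np.array([[3, 40.0, 2048, 10_000, 5.0],
              [5, 75.0, 1024,  8_000, 2.0],
              [1, 20.0,  512, 12_000, 9.0]])
w = np.array([0.2, 0.25, 0.2, 0.1, 0.25])        # illustrative weights, sum to one
best = int(np.argmax(waspas_rank(X, w, [False, False, True, True, False])))
```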
Results and Discussion
To test the effectiveness of the proposed system, experiments were conducted for two cases: 1) without the SDN controller; 2) with the SDN controller. The first case includes ESA only: IoT devices send requests to all reachable edge servers using broadcast messages, and each IoT device chooses the edge server with the quickest response time, without considering the distance to or the load on that edge server. In the second case, the proposed SASPME (SNA and ESA) is implemented. The experiments were executed using two physical machines. The first machine runs the SDN controller (ONOS) and three VMs, one for each edge server. The second physical machine runs two VMs, one for the Cloud and the other for the IoT devices. The specifications of the physical machines and VMs are presented in Table-2. All VMs in the system are connected to a Mininet network that has an OpenFlow-enabled switch, and these networks are connected through Generic Routing Encapsulation (GRE) tunnels. Figure-6 shows the system setup. The results were measured for different numbers of devices (10, 20, 30, 40, and 50) in both cases, with each device requesting a single service. To illustrate the impact of running various types of services on the resource utilization of the edge servers, three types of services were implemented: an edge detection service and two cryptography services based on the RSA (Rivest-Shamir-Adleman) and SHA-3 (Secure Hash Algorithm 3) algorithms. The time between one device request and the next was randomly selected between 7 ms and 20 ms, and each device reconnects to the edge server at a random time between 1 and 3 minutes. The next subsections present the results collected for both cases.
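Before turning to the results, here is a rough sketch of how the Mininet/ONOS testbed just described could be wired using the Mininet Python API with a remote controller; the controller IP, host names, and the single-switch topology are assumptions for illustration, not the paper's exact setup.

```python
# Sketch of an OpenFlow testbed with a remote ONOS controller (assumed IP).
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch

net = Mininet(controller=None, switch=OVSSwitch)
net.addController("c0", controller=RemoteController,
                  ip="192.168.1.10", port=6653)     # ONOS controller (assumed)

s1 = net.addSwitch("s1", protocols="OpenFlow13")
edge1 = net.addHost("edge1")   # edge server VMs attach here; in the paper the
edge2 = net.addHost("edge2")   # per-VM networks are joined over GRE tunnels
iot = net.addHost("iot1")      # host generating IoT requests

for h in (edge1, edge2, iot):
    net.addLink(h, s1)

net.start()
# ... run the experiments, then:
# net.stop()
```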
The CPU and memory utilization were measured for both cases. In the first case, the type of service for each IoT device was chosen randomly and requests were sent directly to the edge servers; the IoT devices offloaded their data to the edge server with the quickest response time. The results in Figure-7 show an inequitable load distribution between edge servers: edge server 1 was in an overloaded state for a long time, while edge servers 2 and 3 remained in a normal state in which CPU and memory usage were mostly low. Overloaded and normal states are represented by 0 and 1, respectively. Furthermore, the services running on an overloaded edge server are prone to failures, and the overload can negatively affect their performance. In the second case, when SNA and ESA were used, the resources of the edge servers were utilized more efficiently, and the edge servers were in a normal state all the time. The resource utilization of the edge servers was measured and sent to SNA every 30 seconds; these measurements improved future offloading decisions and ensured an efficient service distribution. The resource utilization and the state of the edge servers, along with the number of running devices in all edge servers, for 10 devices in both cases are shown in Figures-7 and 8, respectively. The request handling time was measured in five experiments with different numbers of devices. In the first case, the total request handling time was measured as the period between an IoT device sending a service request and receiving a response from an edge server. Figure-9, which presents the average request handling time for different numbers of devices using ESA only, shows that the average request handling time rose at varying rates as the number of IoT devices increased. In this case, the IoT requests are broadcast to all the edge servers, and all of them must respond to each received request, even if it will not be completed. This increased the overhead on the edge servers and, therefore, the request handling time.
In the second case, the offloading destination is determined by the priority of the requested service. Therefore, the time to handle requests received from IoT devices depends on the destination estimation time and the connection time between SNA and ESA. For delay-sensitive services, the request handling time includes the time to choose the best edge server and the time to forward the request to that edge server. As shown in Figure-10, the time to estimate the best edge server was similar in all cases. Figure-11 shows the average time to forward the request to the edge server.
Requests for delay-tolerant services are forwarded directly to the Cloud; in this case, the request handling time depends only on the time to forward the request to the Cloud. As shown in Figure-12, the average request handling time using ESA and SNA shows a considerable improvement compared with the previous case. Also, as the number of devices increased, the average request handling time remained at convergent rates. In this case, the number of requests sent to each edge server was notably reduced compared with the previous case; hence, the average request handling time is minimized.
Conclusions
In this paper, we have focused on resource allocation for IoT services in edge networks. The proposed SASPME system aims to improve the performance of IoT-based applications by allocating computational resources for delay-sensitive services in the edge servers. In SASPME, the SDN application reduces the overhead on the edge servers by taking over the responsibility of handling IoT requests and making the offloading decision to the best destination. The experiments showed that gathering information about the available computational resources from the edge servers at short intervals can improve decision making and, hence, ensure a more balanced load distribution between the edge servers, although it may increase the load on the link. The WASPAS method used in the SDN application is an effective solution for estimating the best edge server based on different criteria. Furthermore, containerization is used in the edge servers to ensure efficient utilization of resources. SASPME was compared with a non-SDN system, and we conclude that it reduces the average request handling time and achieves a more balanced load distribution between the edge servers. | 6,837.6 | 2020-06-27T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
Function of Nucleus Ventralis Posterior Lateralis Thalami in Acupoint Sensitization Phenomena
To observe the effect of electroacupuncture (EA) on the nucleus ventralis posterior lateralis (VPL) thalami activated by visceral noxious stimulation, and to explore the impact of EA on the mechanism of acupoint sensitization under a pathological state of the viscera, EA was applied at the bilateral "Zusanli-Shangjuxu" acupoints. The discharge of VPL neurons in response to EA increased after colorectal distension (CRD), and stimulation at the "Zusanli-Shangjuxu" acupoints enhanced the discharge activity of VPL neurons under CRD-induced visceral pain. The frequency of neuronal discharge was associated with the pressure gradient of CRD, indicating that visceral noxious stimulation may intensify the body's functional response to stimulation at acupoints.
Introduction
Acupoints are special locations on the body surface where the Qi of meridians and internal organs is infused. They are also the key link underlying the interactions between meridians and internal organs. When internal organs are in a pathological state, acupoints become more sensitive [1][2][3], and the size and function of acupoints change accordingly with changes in visceral function [4,5]. Therefore, under pathological conditions, the diagnostic and therapeutic effects of acupoints on visceral diseases are enhanced [6].
The spinothalamic tract is traditionally viewed as the major pathway of noxious inputs. Previous studies showed [7][8][9][10] that noxious inputs transmitted via the spinothalamic tract can be affected by other noxious inputs. A current issue in neuroscience research is the mechanism underlying the peripheral and central sensitization caused by different noxious inputs [11].
This study evaluated the neuronal discharge of the ventral posterior lateral nucleus (VPL; the most important brain center for somatovisceral relay) in response to noxious inputs from the body surface and colorectal distension (CRD). We also observed whether the effect of acupuncture on the receptive field (acupoint area) of VPL neurons on the body surface was affected by visceral noxious inputs. The phenomena and mechanism of acupoint sensitization at the VPL level induced by visceral noxious inputs are discussed.
Materials and Methods

The heads of the rats were fixed on stereotaxic instruments. The skin over the middle of the skull was incised, and the sutures were exposed by removing the subcutaneous tissue and periosteum. The anterior and posterior sutures were then adjusted to lie in the same horizontal plane. The three-dimensional coordinates of the VPL nuclei were determined according to the Rat Brain Atlas [12]: 3.0-4.0 mm behind the anterior fontanel and 3.0-3.5 mm lateral to the skull suture. Under observation with a surgical microscope, the tip of a glass microelectrode was inserted into the VPL nuclei through a skull hole using a microelectrode manipulator (5000-5800 μm beneath the surface of the brain). The impedance at the tip of the glass microelectrode was 10-15 MΩ (filled with 2% pontamine sky blue). When the target neurons were identified, 2% agar was perfused onto the skull surface to protect the brain tissue from drying and to reduce movement caused by breathing.
For all recorded neurons, the responses to mechanical stimulation applied to the peripheral receptive field were checked to identify the distribution and size of the receptive fields (the mechanical stimulations included touch and pressure with von Frey hairs (von Frey Model 2390; IITC, USA), skin stimulation with toothed tweezers, and acupuncture stimulation). We also observed the responses of these neurons to CRD. Only neurons that responded both to mechanical stimulation of the skin receptive field and to 10 mmHg of CRD were included as objects of observation (these were named convergent neurons, CN).
Colorectal Distension.
A 4 to 6 cm long balloon was made from a disposable condom tip and tied onto a 4 mm diameter hose (Figure 1; BIOPAC Amplifier Module Model MP150 System TSD104A; BIOPAC, USA). The balloon was inserted through the anal orifice straight into the colon, to a depth of approximately 4 cm. Three to five drops of warmed paraffin oil were smeared on the balloon's surface before it was placed into the colon, to avoid direct damage to the inner wall of the colon and anus. The distance from the balloon end to the anus was about 0.5 cm. A CRD stimulus of 20-80 mmHg was applied via a syringe for about 30 s, and the activation of convergent neurons was observed at different intensities of CRD stimulation. A pressure ≥ 40 mmHg was defined as visceral noxious stimulation [13]. The interval between CRD stimulations was no less than 10 min, to avoid colorectal sensitization caused by hyperstimulation.
EA.
EA was applied at the bilateral "Zusanli-Shangjuxu" points. The stimulation was a square-wave pulse with a width of 5 ms and a frequency of 20 Hz. The intensity was 1.5 times the threshold of the A fiber [14] (the average threshold intensity of the A fiber reflex was 1.54 ± 0.50 mA), and the duration of EA was 30 s. The discharge of VPL neurons in response to EA was observed before and after CRD. (2) EA was applied at the bilateral "Zusanli-Shangjuxu" points for 30 seconds. (3) After an interval of 10 minutes, different intensities of CRD were applied for 30 seconds; the discharge of the convergent neurons in response to non-noxious stimuli (20 mmHg), noxious stimuli (40 mmHg), and strongly noxious stimuli (60 and 80 mmHg) was recorded to observe the activation of convergent neurons by different intensities of CRD. (4) After an interval of 10 minutes, EA was once again applied at the bilateral "Zusanli-Shangjuxu" points for 30 seconds. The discharge of VPL neurons in response to EA before and after the different intensities of CRD was observed to test the dose-effect relationship between stimulus intensity and response (Figure 2).
Statistical Analysis.
The data were analyzed with Spike-II (the data analysis software of the MICRO 1401 biological signal acquisition and analysis system) and SPSS 13.0. The number of discharges of VPL neurons in every 30 seconds and the activation/inhibition rate were counted, and the mean and variance of the neuronal discharge before and after the EA intervention were calculated. Comparisons between groups were made with the independent-sample t-test. P < 0.05 was considered statistically significant.
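As a minimal illustration of this between-group comparison, the sketch below runs an independent-sample t-test with SciPy; the discharge counts are made-up placeholders, not data from the study.

```python
# Sketch: independent-sample t-test on discharge counts per 30 s window.
from scipy import stats

pre_ea  = [12, 15, 9, 14, 11, 13, 10, 16, 12]   # spikes/30 s before CRD (placeholder)
post_ea = [21, 25, 18, 24, 20, 23, 19, 26, 22]  # spikes/30 s after CRD (placeholder)

t_stat, p_value = stats.ttest_ind(pre_ea, post_ea)
significant = p_value < 0.05   # significance criterion used in the paper
```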
Histological Localization.
When the recording of neuronal discharge was completed, a 20 μA negative direct current was passed through the glass microelectrode via the microelectrode amplifier for 20-30 min, depositing the pontamine sky blue from the glass microelectrode into the VPL nuclei to mark the position of the recording electrode. Thereafter, the rats were euthanized and perfused through the heart with 4% paraformaldehyde, and the brains were removed and fixed. After an interval of 72 hours, frozen sections of the brain were cut for H&E staining (Figure 3). Recording points that were not located in the VPL nuclei were excluded from the study.
General Characteristics of the Responses of VPL Neurons.
A total of 126 VPL neurons that responded to mechanical stimulation of the body surface were identified in the 26 male SD rats, by referring to the rat brain atlas [14]. Figure 3 illustrates part of the pontamine sky blue positioning of the VPL neurons. Their receptive fields were distributed over the posterolateral part of the contralateral body, tail, hips, or hind legs. The receptive fields of most neurons were small but had clear boundaries, and they could be activated by gentle brushing or tapping with von Frey filaments (Figure 4).
The Influence of Different Intensities of CRD on the Discharge of VPL Neurons.
We isolated 54 convergent neurons from the 126 VPL neurons that responded to mechanical stimulation inputs and systematically observed the discharge of 9 of the 54 convergent neurons evoked by different intensities of CRD stimulation. The results showed that, after CRD stimulation ranging from 20 to 80 mmHg, the discharge frequency of the VPL neurons increased significantly compared with before CRD stimulation (P < 0.01) (Figure 5). Convergent neurons were also recorded from rats receiving different intensities of nociceptive CRD, including 12 from rats receiving 60 mmHg of strongly nociceptive CRD and 10 from rats given 80 mmHg of strongly nociceptive CRD. Equal intensities of EA were applied for 30 seconds before and after CRD. The results showed that the discharge frequency of the VPL convergent neurons induced by EA increased significantly after CRD compared with before CRD at all tested CRD intensities (P < 0.05) (Figure 6).
As the intensity of CRD stimulation increased, the percentage increase in the discharge number of VPL neurons evoked by EA at the acupoints also increased, and a clear dose-effect relationship could be observed between stimulation and response. This shows that acupoints on the body surface were sensitized after CRD: the effect of EA on the acupoints was enhanced, and the sensitization of the acupoints increased as the intensity of the visceral noxious stimulation increased (Figure 7).
The above results show that noxious visceral stimulation facilitates the responses of VPL neurons to EA stimulation inputs from acupoints on the body surface.
Discussion
Our previous studies have shown that most neurons that responded to somatic afferent inputs also responded to inputs from CRD or skin vibrotactile stimulation, in most cases in the form of sensitization: the responses of more than 50% of neurons to skin vibrotactile stimulation could be enhanced by CRD previously applied to the experimental animals [15]. The results of this study showed that, within a certain intensity range, the discharge frequency of VPL convergent neurons increased as the intensity of CRD stimulation increased. Since CRD had an activating effect on spinal cord neurons, when EA was applied at acupoints after CRD, the discharge of VPL convergent neurons increased significantly compared with before CRD. This confirms that noxious visceral distension can sensitize VPL neurons, making them respond more strongly to inputs from EA applied to acupoints in the skin receptive fields. In other words, the neural facilitation of VPL neurons after noxious visceral stimulation led to dynamic changes in the response of the sensitized acupoint. As the intensity of visceral noxious stimulation increased, its sensitizing effect on acupoints on the body surface also strengthened, showing a clear dose-response relationship. Our results show that VPL neurons are involved in the dynamic process of acupoint sensitization.
The thalamus is the most important brain structure relaying somatic and visceral afferent inputs to the cerebral cortex. There are three projection systems from the spinal cord to the lower part of the ventral thalamus: the spinothalamic tract, the cervical spinal column, and the postsynaptic dorsal column ascending fibers. A study by Yang et al. [16] on the VPL of rats showed that 94% of VPL neurons could be activated by non-noxious and noxious stimuli applied to the peripheral receptive fields, whereas 6% of VPL neurons responded only to noxious stimulation; no VPL neurons responded only to non-noxious stimulation. Nearly 60% of VPL neurons also responded to CRD, primarily with activation. VPL neurons, therefore, are involved not only in the transmission and processing of somatic sensory inputs, but also in the transmission and processing of visceral nociceptive inputs.
In our rat thalamus VPL experiments, we observed that most neurons that responded to haptic inputs from the contralateral body also responded to CRD and skin vibrotactile stimuli. The responses of more than half of the neurons to skin vibrotactile stimulation could be enhanced by previously applied CRD conditioning stimulation. In contrast, the responses of VPL neurons to CRD were not enhanced by skin tactile stimulation when the order of the conditioning stimulation was reversed, that is, when the skin stimulation was given before CRD; moreover, the effect was mainly inhibitory. A possible explanation for acupoint sensitization is that repeated CRD may cause irritation of the intestinal wall, which can be viewed as one type of visceral inflammation and induces sensitization of afferent neurons [17]. Visceral noxious stimulation could also significantly enhance neuronal responses to skin tactile stimulation; this enhancement may be related to the hyperalgesia caused by visceral disease [18,19].
Many previous studies suggest that only noxious stimulation can significantly inhibit the afferent transmission of nociceptive inputs [18]. However, we observed that, at the single-cell level, a gentle touch on the skin could produce an inhibitory effect on the responses of thalamic neurons to CRD, though this inhibitory effect was usually mild and transient. The conditioning stimulation of CRD significantly enhanced the responses of thalamic neurons to tactile inputs. This facilitation effect was related to the activity of excitatory intermediate neurons; that is, excitatory intermediate neurons could enhance the after-effects of the excitatory responses caused by CRD and prolong the discharge duration of VPL neurons. If skin tactile stimulation was given after CRD, the discharge number of VPL neurons was higher than when only CRD or only skin tactile stimulation was given. The excessive sensitivity of central neurons to skin tactile stimulation may be related to hyperalgesia [18,19]. Though only a few such sensitive points were found in the skin receptive fields, the sensitization effect was lasting: it was longer than the effect directly caused by skin stimulation, and it also lasted significantly longer than the inhibitory effect of tactile stimulation on the CRD response [17]. In this case, visceral nociceptive inputs had a stronger effect on the tactile inputs than the other way around, at least at the single-cell level of thalamic VPL neurons. However, it should be emphasized that the perception of visceral pain depends on the group response of neurons, which includes the interactions and feedback among nerve centers in the cerebral cortex, thalamus, and other areas.
Our study showed that nociceptive CRD stimulation could make VPL neurons more sensitive to EA stimulation applied at skin receptive fields, indicating that a pathological visceral condition can facilitate the afferent inputs from stimulation of the body surface. The interaction between somatic and visceral inputs occurs at the lumbosacral segments of the spinal cord. These segments (L1-L3) not only integrate information from the skin of the lower abdomen and hind legs, but are also the location of the afferent neurons for the "Zusanli-Shangjuxu" points that were selected in our experiment and that dominate the lower digestive tract. Many sensitive points on the body surface are distributed in the relevant acupoint zones that have a regulatory effect on digestive system functions. The phenomenon that visceral nociceptive inputs can facilitate the neural responses to afferent inputs from the body surface at corresponding spinal segments may be related to the mechanism underlying referred pain. It also provides a scientific explanation for the Chinese medical theories of "pain as acupoints" and the "essence of acupuncture points." | 3,332 | 2015-06-16T00:00:00.000 | [
"Biology"
] |
Training Robust Deep Neural Networks on Noisy Labels Using Adaptive Sample Selection with Disagreement
Learning with noisy labels is one of the most practical but challenging tasks in deep learning. One promising way to treat noisy labels is to use the small-loss trick based on the memorization effect, that is, clean and noisy samples are identified by observing the network's loss during training. Co-teaching+ is a state-of-the-art method that simultaneously trains two networks with small-loss selection using the "update by disagreement" strategy; however, it suffers from the problem that the selected samples tend to become noisy as the number of iterations increases. This phenomenon means that clean small-loss samples become biased toward agreement data, i.e., the set of samples for which the two networks make the same prediction. This paper proposes an adaptive sample selection method to train deep neural networks robustly and prevent noise contamination in the disagreement strategy. Specifically, the proposed method calculates the threshold of the small-loss criterion by considering the loss distribution of the whole batch at each iteration. Then, the network is backpropagated by extracting samples below this threshold from the disagreement data. Combining the disagreement and agreement data of the two networks can suppress the degradation of the true-label rate of the training data in a mini batch. Experiments were conducted using five commonly used benchmarks, MNIST, CIFAR-10, CIFAR-100, NEWS, and T-ImageNet, to verify the robustness of the proposed method to noisy labels. The results show that the proposed method improves generalization performance in an image classification task with simulated noise rates of up to 50%.
I. INTRODUCTION
Deep neural networks (DNNs) have achieved a remarkable level of performance in various applications such as image classification [1]. This result is highly dependent on the availability of a large amount of high-quality labeled data, which is difficult to obtain in practice. Instead, a common means of constructing a large labeled dataset is to use crowdsourcing systems [2], [3], such as Amazon's Mechanical Turk, or search engines that query samples using a keyword, which is then used as a label [4], [5], [6]. Both approaches facilitate the acquisition of labeled data but contaminate the data with unreliable labels, called noisy labels. Real-world datasets have been reported to contain levels of noise ranging from 8.0% to 35.8% [7], [8], [9]. Furthermore, it has been found that 52% of web images retrieved using a query contain incorrect labels [10]. DNNs are highly able to fit noisy labels [11], [12], resulting in an inevitable loss of accuracy.
Our goal is to effectively and robustly train DNNs using a training dataset with noisy labels. Various existing studies have investigated how noisy labels can be handled. A typical method is loss correction [13], [11], [14], [15], which corrects the forward or backward loss values of the training samples by estimating the noise transition matrix. However, the accuracy of the noise transition matrix estimation decreases when there are many classes and a large amount of noisy data. Moreover, in recent years, methods based on gradient clipping [16], [17], which correct the losses by constraining the gradient, have received much attention [18]. However, both loss correction approaches suffer from error accumulation, where the errors in the loss correction continue to affect the network updates [19].
FIGURE 1. True-label rate during training on the CIFAR-100 dataset (Symmetry 50%) using Co-teaching+ [24]. As the number of training epochs increases, training samples are selected from a subset with a high number of noisy labels. As a result, in the last stage of learning, the network is trained using noisy samples.
Recent research on DNNs has confirmed that they first learn easy (most likely clean) samples and then learn hard (most likely noisy) samples [12]; this is called the memorization effect. Intuitively, if we could use this effect to train DNNs using only the samples with small loss, we could achieve robust generalization performance under noisy labels without estimating the noise transition matrix. One promising approach, called sample selection, selects small-loss samples based on the forward loss of the network and updates the network using backpropagation [20], [21], [22], [23], [24], [25].
Co-teaching [23] and Co-teaching+ [24] have been proposed as practical methods to deal with highly noisy data. Co-teaching trains two networks simultaneously by selecting small-loss samples at each iteration and cross-updating each network to avoid accumulating errors. Co-teaching+ improves this approach by training the two networks on samples for which their predictions disagree, preventing the networks from converging and maintaining their variance; this strategy is called "update by disagreement." Co-teaching+ is, to the best of our knowledge, the state of the art among sample selection-based methods. However, as the number of training epochs grows, the proportion of noisy data used for backpropagation increases, which degrades the generalization performance. Figure 1 shows the true-label rate during training on the CIFAR-100 dataset using Co-teaching+. The true-label rate is defined as the proportion of samples with true labels among the small-loss samples extracted from the mini batch at each iteration. In our validation study, 50% of the CIFAR-100 labels were uniformly and randomly flipped for each class based on symmetry flipping [26]. The results indicate that the noise rate of the training samples selected by the disagreement strategy increases as the iterations progress, leading to overfitting on noisy data. It is possible to reduce the amount of noisy data by lowering the selection rate at the end of the iterations. In fact, Yao et al. [25] used the proportion of clean data, i.e., samples without noisy labels, as one of the parameters and tuned it using AutoML [27]. However, in practical use, it is not always possible to obtain clean data in advance.
In this paper, we propose an adaptive sample selection method to robustly train DNNs using the disagreement strategy. The key idea of the proposed method is to prevent noisy labels from entering a training mini batch by determining a small-loss threshold at each iteration. Co-teaching+ extracts small-loss samples from the disagreement data at a fixed rate throughout all iterations; however, because the small-loss samples, which are likely to have clean labels, may become biased toward the disagreement or agreement subsets as training progresses, the number of samples to be extracted should be determined on an iteration-by-iteration basis. In the proposed method, the threshold is defined by calculating a percentile value over the data of the entire mini batch. Then, the network is backpropagated on the samples below the threshold extracted from the disagreement data. Using data combined in this way, we can stop the true-label rate of the subset extracted from the disagreement data from decreasing. The main contributions of this paper can be summarized as follows: • We present a new small-loss selection method based on the memorization effect. • We propose using a combination of agreement and disagreement data in the disagreement strategy, thus reducing the decrease in the true-label rate during the training process. • We present the results of experiments using five commonly used benchmark datasets, MNIST, CIFAR-10, CIFAR-100, NEWS, and T-ImageNet, to demonstrate that the proposed method achieves state-of-the-art results.
The remainder of this paper is structured as follows. Section II reviews the related work of deep learning with noisy data. In Section III, we propose our training method with memorization effect-based sample selection. Experimental results are discussed in Section IV, and the conclusions are given in Section V.
II. RELATED WORK
A. LOSS CORRECTION APPROACH
The basic idea behind the loss correction approach is to correct the forward or backward loss of the DNN based on the estimated noise transition matrix. Bootstrapping [13] employs a reconstruction-based objective that uses the concept of perceptual consistency to train the network while correcting its predictions. F-correction [14] introduces a two-step method that first estimates the noise transition matrix of the noisy data and then corrects the output of the loss function using the forward loss correction mechanism [14]. In [14], the network is pre-trained using noisy data, and the samples with the highest output per class are assumed to be perfect samples that are likely to be clean. The noise transition matrix is then estimated from the softmax probabilities when a perfect sample is input to the wrong class. However, F-correction is inaccurate on datasets with many classes and a small number of samples per class, such as CIFAR-100. Some methods assume that clean validation data are available; Hendrycks et al. proposed gold loss correction, which estimates the label corruption matrix using known clean samples [15].
In contrast to estimating the noise transition matrix as described above, another approach corrects the loss by constraining the gradient norm to a specified value through gradient clipping [16], [17]. Menon et al. showed that noise robustness can be obtained using a partially Huberized loss, which clips only the gradient's contribution [18].
B. LABEL NOISE CLEANING APPROACH
Label noise cleaning is an approach that identifies suspicious labels and changes them to the corresponding true ones. This approach relies on a feature extractor that maps the data into feature domains to investigate the level of noise in the noisy labels. It is an iterative framework in which the classifier and the label transformer are trained on each other and their abilities improve during training, unlike data preprocessing, where noisy labels are removed before training begins. Algorithms using this approach can be divided according to whether they require clean data. If clean data are available, the obvious approach is to relabel the noisy labels using the predictions of a network trained on the clean data. For relabeling, [28] uses a label blending operation, which calculates the weighted sum of the given noisy labels and the predicted labels. Alternatively, [29], [30] introduced a joint optimization framework that both trains the classifier and transforms noisy labels into clean ones; expectation maximization is used to estimate both the parameters of the classifier and the posterior distribution of the labels to minimize the loss.
C. DATASET PRUNING APPROACH
The first approach in dataset pruning is to completely remove the noisy samples found previously and train the network on the remaining dataset. The simplest approach is to remove the samples misclassified by the network [31]. For instance, [32] used a combination of noise filters, where each noise filter assigns a level of noise to the samples. These predictions are then combined to remove samples with the highest noise levels. Luengo et al. [33] extended this method using the label correction approach. If different noise filters predict the same label for a noisy sample, the label for that sample is changed to the predicted label, otherwise it is removed from the dataset. In [34], the state of the network is varied between underfitting and overfitting by periodically adjusting the learning rate. During underfitting, noisy samples have higher losses, so this cyclic process removes the noisy samples.
The second approach is to remove only the labels of noisy samples. The traditional method employs a semi-supervised learning method [35], [36]. SELF [36] is based on a running average model called the Mean-Teacher [40], which obtains self-ensemble predictions from all samples and incrementally removes samples with labels that do not match the original labels. DivideMix [37] uses the Gaussian mixture model to divide the samples into clean and noisy samples. Using the split samples, a semi-supervised approach based on the MixMatch strategy [41] is used.
D. SAMPLE SELECTION APPROACH
This approach continuously monitors the DNNs and detects the true-labeled samples to be learned in the next training iteration. Intuitively, DNNs can achieve better generalization performance when the training data are less noisy. This approach uses the characteristic of DNNs called the memorization effect, i.e., they learn clean and simple patterns in the initial epochs, even in the presence of noisy labels. Thus, they have the ability to filter out noisy samples using their loss values. The goal is to make DNNs robust to noise by selecting only small-loss samples and eliminating mislabeled data with high losses during training iterations.
Self-paced learning [42], [43] can filter out noisy labels by assigning small weights to mislabeled samples and large weights to clean samples, thus ensuring robust model learning. Specifically, specifying a monotonically decreasing weighting function allows the classifier to focus on the easy samples first and then fit the difficult samples. For example, in the MentorNet approach [19], an additional network, called StudentNet, is trained and MentorNet is used to select clean samples to guide the training of StudentNet. If clean validation data cannot be prepared, the self-paced MentorNet uses a predefined curriculum, that is, a self-paced curriculum. The concept of the self-paced MentorNet is similar to that of the self-learning approach [44], and it inherits the problem of error accumulation.
Han et al. proposed Co-teaching [23], which trains two networks in a symmetric way. Co-teaching introduces cross-training, where the small-loss samples from one network are used as training samples for the other network. By exchanging training samples between the two networks, bias in the training samples is avoided and the accumulated error is reduced. In [38], Wang et al. proposed a method for reweighting small-loss samples: a loss function designed based on the ArcFace loss [45] is used to recalculate the loss of the selected small-loss samples, increasing the likelihood that a sample with high confidence will be selected. In [39], Chen et al. introduced the iterative noisy cross-validation (INCV) method into Co-teaching, which selects a mini batch of samples that are estimated to have true labels using the network under training at each training iteration. However, the two networks converge to a consensus, causing a problem similar to that of the self-paced MentorNet, which uses a single network.
Co-teaching+ [24] is an improved method that introduces the concept of decoupling [22] into Co-teaching. Decoupling is similar to Co-teaching in that it simultaneously trains a pair of networks, but it updates the networks using the samples with different predictions. The weights of the two networks do not converge, allowing them to maintain divergence. Because Co-teaching+ is closely related to the proposed method, the algorithm and problem are described in the following section.
In summary, most loss correction methods have difficulty handling multi-class data, so the development of sample selection approaches, which exploit the memorization effect, is promising. The sample selection approach continuously monitors the DNNs and selects the samples to be learned in the following training iteration. Thus, sample selection-based methods can be incorporated into the algorithms of other approaches by simply manipulating the input stream, so a combined strategy is expected to improve accuracy. The state-of-the-art sample selection-based method is Co-teaching+, which substantially improves generalization performance using a combined selection of disagreement and small-loss data. In this paper, we point out the problems of Co-teaching+ and propose an adaptive sample selection method to improve it. Pre-training with clean data [15], [25] is not assumed in the method proposed here, and this paper does not deal with strategies that combine the sample selection approach with other approaches [37].
III. METHOD
The proposed method improves on existing sample selection-based methods by exploiting the memorization effect. This section first introduces the Co-teaching+ algorithm and then describes our learning method with the proposed sample selection method (shown in Figure 2).
A. LEARNING FROM NOISY DATA
As in Co-teaching, two DNNs are trained simultaneously, but Co-teaching+ consists of two steps: the disagreement update and the cross update. In the first step, each network makes its own predictions on the mini batch, and the samples for which the two networks' predictions disagree are selected. In the cross-update step, based on these disagreement data, each network further selects its own small-loss samples, but backpropagates those selected by the paired network to update its parameters.
FIGURE 2. Training process of the proposed method. The forward loss values are calculated from the mini batch $\tilde{D}$, and then the prediction disagreement data $\tilde{D}'$ of the two networks, parameterized by $\omega^{(1)}$ (resp. $\omega^{(2)}$), are extracted. Because the loss distribution of the disagreement data is biased at each iteration, extracting small-loss samples at a fixed rate allows noisy data to be mixed in. The proposed method adaptively controls the number of small-loss samples, whose subset is denoted as $\tilde{D}^{(1)}_{ada}$ (resp. $\tilde{D}^{(2)}_{ada}$), extracted from $\tilde{D}'$ by defining a threshold that considers the loss distribution of the whole mini batch at each iteration. Using $\tilde{D}^{(1)}_{ada}$ (resp. $\tilde{D}^{(2)}_{ada}$), both networks are cross-updated.
Specifically, the two networks, with parameters ω^(1) and ω^(2) respectively, are trained using the mini-batch technique. We are given the training data D and split them into mini batches D̄ = {(x_i, y_i)}_{i=1}^B, where x_i denotes the i-th sample, y_i its (possibly noisy) label, and B is the batch size. Then, according to the predictions {ȳ_i^(1)} (predicted by ω^(1)) and {ȳ_i^(2)} (predicted by ω^(2)), disagreement data are extracted as follows:

    D̄' = {(x_i, y_i) ∈ D̄ : ȳ_i^(1) ≠ ȳ_i^(2)}.   (1)

By training the two networks using the disagreement data D̄', the two networks do not converge but maintain their divergence, similar to the decoupling algorithm [22]. To remove noisy data from the disagreement data D̄', each network selects small-loss data D̄^(1) and D̄^(2) based on its own parameters ω^(1) and ω^(2), respectively. Next, each network is backpropagated using its paired data; for example, parameter ω^(1) is updated based on the small-loss data D̄^(2). Note that, to control how many small-loss data are selected at epoch e, the proportion of small-loss samples is defined as follows:

    λ(e) = 1 − min{(e/E_k) · R_noise, R_noise},   (2)

where R_noise is an estimate of the noise rate in the training data D. Because of the memorization effect, a DNN initially fits clean data and then gradually overfits noisy data. Therefore, a large λ is used initially, and the value of λ is quickly reduced until epoch E_k to avoid fitting noisy data. From epoch E_k onward, it is kept at the value determined by the noise rate of the training data (i.e., λ(e) = 1 − R_noise).

(Only a fragment of Algorithm 1 survives the extraction: "Initialize model parameters ω^(1) and ω^(2); for n = 1, 2, ..., |D| do; draw the n-th mini batch D̄ from D; ...; end for.")
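The update logic described above can be summarized in code. Below is a minimal sketch of one Co-teaching+ step, assuming PyTorch and per-sample cross-entropy losses; the function and variable names are illustrative, not taken from the authors' implementation.

```python
import torch
import torch.nn.functional as F

def lam(e, E_k, R_noise):
    # Small-loss proportion of Eq. (2): reduced quickly until epoch E_k,
    # then kept at 1 - R_noise.
    return 1.0 - min(e / E_k, 1.0) * R_noise

def coteaching_plus_step(logits1, logits2, labels, e, E_k, R_noise):
    # Disagreement update: keep samples on which the two predictions differ.
    pred1, pred2 = logits1.argmax(1), logits2.argmax(1)
    disagree = (pred1 != pred2).nonzero(as_tuple=True)[0]

    loss1 = F.cross_entropy(logits1[disagree], labels[disagree], reduction="none")
    loss2 = F.cross_entropy(logits2[disagree], labels[disagree], reduction="none")

    # Small-loss selection at the fixed rate lambda(e) within the disagreement data.
    k = max(1, int(lam(e, E_k, R_noise) * len(disagree)))
    idx1 = loss1.argsort()[:k]  # samples network 1 considers clean
    idx2 = loss2.argsort()[:k]  # samples network 2 considers clean

    # Cross update: each network is trained on its peer's selection.
    return loss1[idx2].mean(), loss2[idx1].mean()
```

If a mini batch contains no disagreement samples, implementations typically fall back to updating on the whole batch.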
B. ADAPTIVE SAMPLE SELECTION
Our algorithm is described in Algorithm 1. The key difference between the proposed method and Co-teaching+ is the introduction of an adaptive process to set the loss threshold for determining small losses at each training iteration. Specifically, we considered the following two issues when designing Algorithm 1.
1) A decrease in the true-label rate of the training mini batch data may degrade the generalization performance.
2) The noise rate of the disagreement data D̄' is not always the same as that of the training data D.
As the number of training epochs increases, the small-loss samples extracted at the disagreement-update step increasingly carry noisy labels, which leads to overfitting to noisy data (as described in Section I). Furthermore, Co-teaching+ controls how many small-loss data are extracted from the disagreement data by λ(e), defined in Eq. (2). However, λ(e) is based on R_noise, which is an estimate for the entire training data D, and the expected noise rate of the two subsets, i.e., the agreement or disagreement data, is not always R_noise. In other words, it is not appropriate to fetch the same proportion of small-loss samples throughout all iterations, because the number of small-loss samples with true labels present in the disagreement subset varies at each iteration. To avoid this problem, it may be possible to reduce the number of noisy data, for example, by making the sampling criterion more stringent, such as lowering λ(e) at the end of training. However, such scheduling of λ(e) requires a certain amount of clean validation data. Instead, we first search for the λ(e)-th percentile loss over the whole mini batch in network m (= 1, 2), which is denoted as

    P^(m)(e) = Percentile_λ(e)({L(x_i, y_i; ω^(m)) : (x_i, y_i) ∈ D̄}),   (3)

where L(·; ω^(m)) is the loss parameterized by ω^(m) when the samples are given. By calculating the threshold based on the mini batch data D̄, it is possible to tighten the sampling criterion when the disagreement samples are biased toward high-loss data. Thus, we can address the problem of the decreasing true-label rate as training progresses. This enables adaptive small-loss sampling according to the training situation at each epoch, without the need for clean validation data. Note that, when one of the two sets of data is not present in steps 9-10 (Algorithm 1), the networks are updated using the disagreement data D̄' without small-loss selection, similar to the Co-teaching+ algorithm.
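As a rough illustration, the adaptive selection of Eq. (3) can be sketched as follows, again assuming PyTorch; the names are illustrative.

```python
import torch
import torch.nn.functional as F

def adaptive_select(logits, labels, disagree_idx, lam_e):
    # Per-sample losses over the WHOLE mini batch, not just the disagreement subset.
    losses = F.cross_entropy(logits, labels, reduction="none")

    # Threshold P(e) of Eq. (3): the lambda(e)-th percentile of the batch losses.
    threshold = torch.quantile(losses, lam_e)

    # Keep only disagreement samples below the threshold. When the disagreement
    # data are biased toward high losses, fewer samples survive, which tightens
    # the sampling criterion without any clean validation data.
    return disagree_idx[losses[disagree_idx] <= threshold]
    # (If the result is empty, the update falls back to the plain disagreement data.)
```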
Finally, given a sample to be labeled, we use one of the two networks to predict the label of the sample, following the method used in Co-teaching and Co-teaching+.
IV. EXPERIMENTAL RESULTS
In this section, we confirm the effectiveness of the proposed method by simulating noise to create datasets based on the MNIST, CIFAR-10, CIFAR-100, NEWS, and T-ImageNet datasets.
A. EXPERIMENTAL SETUP
Datasets: The details of the five datasets used in our experiments, MNIST, CIFAR-10, CIFAR-100, NEWS, and T-ImageNet, are summarized in Table 1. From those datasets, we created synthetic datasets by corrupting their labels using two noise transition matrices, symmetry flipping [26] and pair flipping [14], following [23], [24]. Note that on the NEWS dataset, [24] conducted experiments on seven classes that are groups of the original 20 classes, whereas we conducted experiments on the original 20 classes. An example of a noise transition matrix for symmetry flipping with four classes and a noise rate of R_noise is as follows:

    T = | 1−R_noise   R_noise/3   R_noise/3   R_noise/3 |
        | R_noise/3   1−R_noise   R_noise/3   R_noise/3 |
        | R_noise/3   R_noise/3   1−R_noise   R_noise/3 |
        | R_noise/3   R_noise/3   R_noise/3   1−R_noise |   (4)

We used symmetry flipping with R_noise = {0.2, 0.5}, denoted as Symmetry 20% and Symmetry 50%, respectively.
Next, we used two types of pair flipping. The first type swaps the labels between adjacent classes. An example of pair flipping applied between adjacent classes, with four classes and a noise rate of R_noise, is as follows:

    T = | 1−R_noise   R_noise     0           0         |
        | 0           1−R_noise   R_noise     0         |
        | 0           0           1−R_noise   R_noise   |
        | R_noise     0           0           1−R_noise |   (5)

We used "adjacent" pair flipping for datasets with R_noise = 0.45, denoted as Pair(adjacent) 45%.
Unlike Pair(adjacent), the second type of pair flipping swaps labels between two classes that are visually similar. The reason for simulating noise in this way is that real-world annotators are highly likely to mislabel classes that are similar in visual appearance. We followed [14] to define visually similar classes. For MNIST, the transitions are 2 → 7, 3 → 8, 5 ↔ 6, and 7 → 1. For CIFAR-10, the transitions are TRUCK → AUTOMOBILE, BIRD → AIRPLANE, DEER → HORSE, and CAT ↔ DOG. For CIFAR-100, because there are 20 superclasses such as aquatic mammals, fish, and flowers, the transitions are made within the same superclass. For NEWS, because there are seven news groups (comp., rec., sci., misc., talk., alt., and soc.), the transitions are made within the same group. We used "visually similar" pair flipping with R_noise = 0.45, denoted as Pair(similar) 45%. Note that for T-ImageNet, while it is possible to form class groups in the tree hierarchical structure defined in WordNet [46], we did not conduct experiments on Pair(similar) because the distribution of the number of classes per group is imbalanced.
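Both corruption schemes reduce to sampling each noisy label from a row of a transition matrix. A minimal sketch, assuming numpy, that generalizes Eqs. (4) and (5) to an arbitrary number of classes:

```python
import numpy as np

def symmetry_T(n_classes, r):
    # Each label flips to every other class with equal probability r/(n-1).
    T = np.full((n_classes, n_classes), r / (n_classes - 1))
    np.fill_diagonal(T, 1.0 - r)
    return T

def pair_adjacent_T(n_classes, r):
    # Each label flips only to the next class, cyclically.
    T = np.eye(n_classes) * (1.0 - r)
    for i in range(n_classes):
        T[i, (i + 1) % n_classes] = r
    return T

def corrupt(labels, T, seed=0):
    # Resample every label from the transition-matrix row of its true class.
    rng = np.random.default_rng(seed)
    return np.array([rng.choice(len(T), p=T[y]) for y in labels])
```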
In our experiments, we assume that the noise rate R_noise is known. However, R_noise is not known in practice, although an estimate can be obtained by counting the number of perfect samples [14] of each class.

Baselines: We compared the proposed method, denoted as Proposed, with the following state-of-the-art methods:
1) Standard: The networks shown in Table 2 are trained directly using noisy data. Standard is included in the comparison to verify how much accuracy is lost when no robust deep learning method is used on noisy data.
2) Co-teaching: This method trains two networks simultaneously in a symmetric way. Reference [23] demonstrated that Co-teaching outperforms loss correction methods [13], [14], [47] and previous sample selection methods [22], [19].
3) Co-teaching+: An improved version of Co-teaching, which has the disagreement step in addition to the cross-update step, and is a state-of-the-art method based on sample selection. Our training scheme is designed based on Co-teaching+.
4) Huberized: This method introduces the partially Huberized loss function [18] to Co-teaching+. A comparison of the performance of Huberized and Proposed confirms the effectiveness of the proposed adaptive sample selection.
We re-implemented all methods using public source code under the same conditions. As described above, in this study, the proposed method was compared with methods that do not use pre-training on a subset consisting of clean validation data.

Network structure and optimizer: The network architectures and optimization methods were changed for each dataset. For experiments using the MNIST, CIFAR-10, CIFAR-100, NEWS, and T-ImageNet datasets, we used the experimental conditions given in [24]. The architectures used in our experiments consist of a two-layer MLP for MNIST, a five-layer CNN for CIFAR-10, a seven-layer CNN for CIFAR-100, a three-layer MLP for NEWS, and an 18-layer PreAct ResNet [48] for T-ImageNet. The details of the architectures are summarized in Table 2. As the optimization method, we used Adam [49] with an initial learning rate of 0.001, linearly decreasing to zero from 80 epochs to 200 epochs, a momentum of 0.9, and a batch size of 128.
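The learning-rate schedule described above can be reproduced with a standard scheduler. A minimal sketch, assuming PyTorch; the model is a placeholder:

```python
import torch

model = torch.nn.Linear(784, 10)  # placeholder network
opt = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))

def lr_factor(epoch, start=80, end=200):
    # Constant lr for the first 80 epochs, then linear decay to zero at epoch 200.
    return 1.0 if epoch < start else max(0.0, (end - epoch) / (end - start))

sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda=lr_factor)
for epoch in range(200):
    ...  # one training epoch
    sched.step()
```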
Evaluation metric: For the evaluation metric, we used the test accuracy, i.e., Test Accuracy = (# of correct predictions) / (# of test samples). All experiments were repeated five times, and we report the averaged results. In each figure, the 95% confidence interval is indicated by shading.

1) Results on the MNIST dataset
As shown in Figure 3, for the Symmetry 20% and 50% conditions, the accuracy of Proposed is almost the same as that of Co-teaching+ and Huberized, but better than those of the other methods. For Pair(similar) 45%, Proposed outperforms Co-teaching in the middle epochs. In contrast, the accuracy of Proposed is lower than that of Co-teaching at the last epoch. For Pair(adjacent) 45%, Proposed shows a significant improvement in accuracy. Table 3 shows the average accuracy of the different methods over the last 10 epochs. Proposed has the highest accuracy, 89.22% and 97.88%, for Pair(adjacent) 45% and Symmetry 20%, which are 4.91 and 0.07 pps higher than the second-best methods, respectively. For the Pair(similar) 45% and Symmetry 50% conditions, the differences between the best method and Proposed are only 0.15 pp. In other words, Proposed is almost equal to the second-best method for Symmetry 20%, Pair(similar) 45%, and Symmetry 50%, but it is much more effective under the Pair(adjacent) 45% condition, with an increase of 4.91 pp.
When Proposed is compared with Co-teaching+, the differences are 0.07 pp for Symmetry 20% and 0.15 pp for Symmetry 50%, which are almost equal. However, under the pair flipping conditions, Proposed is superior by 6.89 pp for Pair(adjacent) 45% and 2.8 pp for Pair(similar) 45%, which are substantial differences.
2) Results on the CIFAR-10 dataset
As shown in Figure 4, for Symmetry 20%, the accuracy of Proposed is almost equal to that of Co-teaching+ and Huberized, but for Symmetry 50%, Proposed is better than Co-teaching+ and Huberized. For the Pair(adjacent) 45% condition, there is an improvement in the latter epochs when compared with Co-teaching+. In contrast, for Pair(similar) 45%, Proposed has the lowest accuracy. Table 4 shows the average accuracy of the different methods over the last 10 epochs. For Pair(adjacent) 45%, Symmetry 20%, and Symmetry 50%, Proposed has the highest accuracy, i.e., 39.58%, 57.81%, and 51.67%, which are 1.76, 0.75, and 2.22 pps higher than the second-best results. However, for Pair(similar) 45%, Proposed has the worst accuracy of 49.25%. This is 1.89 pp lower than the best result of Co-teaching. Proposed outperforms Co-teaching+ under the Pair(adjacent) 45%, Symmetry 20%, and Symmetry 50% conditions. Among them, the differences in accuracy for Pair(adjacent) 45% and Symmetry 50% are 1.76 and 2.22 pps, respectively, indicating a substantial improvement. In contrast, under the Pair(similar) 45% condition, there is no substantial difference.
3) Results on the CIFAR-100 dataset
Figure 5 compares the accuracy for each epoch up to 200 epochs on the CIFAR-100 dataset. The accuracy of Proposed is almost the same as that of the baselines for Symmetry 20%, but for the other cases, the accuracy is substantially better than those of the baselines. In particular, in the latter epochs, the proposed method avoids overfitting to noisy data. Table 5 shows the average accuracy over the last 10 epochs. Proposed has the highest accuracy of 32.98%, 33.07%, and 39.95% under the Pair(similar) 45%, Pair(adjacent) 45%, and Symmetry 50% conditions. When compared with Co-teaching+, the difference is 0.03 pp for Symmetry 20%, which is almost the same, but Proposed is better by 2.8, 4.37, and 1.93 pps for Pair(similar) 45%, Pair(adjacent) 45%, and Symmetry 50%.

4) Results on the NEWS dataset
Figure 6 compares the accuracy for each epoch up to 200 epochs on the NEWS dataset. Even for this dataset, which is text-based and not visual data, the accuracy of the proposed method is better than that of the baselines, especially in the latter epochs. This result shows that the small-loss criterion based on the memorization effect is practical not only for visual data but also for other types of data. Table 6 shows the average accuracy over the last 10 epochs. Proposed has the highest accuracy values of 18.11%, 16.25%, 19.20%, and 15.11% for each of the four noise transition patterns, outperforming Co-teaching+ with an average improvement of about 1.5 pp.
5) Results on the T-ImageNet dataset
To evaluate our method in a complex situation, Figure 7 shows the test accuracy on T-ImageNet. On this dataset, although the test accuracy temporarily decreases at the 80th epoch, when the learning rate starts to decrease, the methods using the co-training framework with the small-loss criterion suppress the decline in test accuracy that the Standard method exhibits because of noisy labels. Among these methods, Proposed performs better as the number of epochs increases. Table 7 shows the average test accuracy over the last 10 epochs. Proposed consistently achieves higher accuracy regardless of the noise transition pattern. The differences between Proposed and Co-teaching+ are 4.23 pp for Pair(adjacent) 45%, 2.5 pp for Symmetry 20%, and 2.66 pp for Symmetry 50%. The results of the experiments on the five datasets show that Huberized has almost the same accuracy as Co-teaching+, with a difference of no more than 1 pp. In contrast, Proposed improves the accuracy by up to 6.89 pp when compared with Co-teaching+, i.e., on the MNIST dataset with Pair(adjacent) 45%, while in the worst case the difference is 0.28 pp, i.e., on the CIFAR-10 dataset with Pair(similar) 45%. Note that improvements in accuracy are observed for all noise transition patterns on the NEWS dataset, which is not a visual dataset, and on the CIFAR-100 and T-ImageNet datasets, which are close to a real environment and have a large number of classes.
C. TRUE-LABEL RATE DISCUSSION
In this section, we compare the true-label rates of Co-teaching+ and Proposed. Our objective is to reduce the decrease in the true-label rate during training by combining the cross-update and disagreement strategies. Therefore, we verify the effectiveness of Proposed by confirming whether the introduction of our sample selection method improves the true-label rate. Note that to calculate the true-label rate, we used the ground-truth labels before the label transitions, but they were used only for reference.
First, Figure 8(a) compares the true-label rate on the MNIST dataset; it can be seen that the true-label rate of Proposed gradually becomes lower than that of Co-teaching+ for Pair(similar) and Pair(adjacent). However, Figure 3 and Table 3 show that Proposed outperforms all baselines. This result can be explained as follows: (i) on MNIST, the predictions of the two networks agreed on many of the samples, (ii) Proposed did not use the small-loss trick after the middle epochs, and (iii) Proposed suppressed the decrease in the number of training samples in the initial epochs. Whereas the true-label rate of Co-teaching+ exceeds that of Proposed, Co-teaching+ suffers from insufficient learning due to the small number of small-loss samples in the backpropagation. The degradation of generalization performance due to insufficient learning can be confirmed by the performance difference between Co-teaching+ and Co-teaching shown in Figure 3. In the initial epochs, there is no significant difference in the true-label rate between Proposed and Co-teaching+. However, Proposed, which determines the small-loss criterion by considering the loss distribution of the whole mini batch, used a larger number of samples for backpropagation than Co-teaching+. This effect leads to the performance difference in the initial epochs. Moreover, as shown for Pair(adjacent), the true-label rate of Proposed starts to decrease from the middle epochs. Both Proposed and Co-teaching+ use all the disagreement data, including noisy labels, when the number of small-loss samples becomes zero.
The decrease in the true-label rate of Proposed occurs because this process switching takes place. Hence, whereas the true-label rate of Co-teaching+ is higher than that of Proposed, its generalization performance is not improved due to the small number of samples. Therefore, the proposed sample selection method is able to suppress the decrease in the number of training samples in the initial epochs, which occurs when the predictions of the two networks agree frequently. Second, Figure 8(b) compares the true-label rate on the CIFAR-10 dataset, where a substantial improvement is observed for Symmetry 20% and 50%. In contrast, the two true-label rates for Pair(similar) are almost the same, whereas for Pair(adjacent), there is only a slight improvement in the true-label rate but a clear increase in accuracy. Under the Pair(adjacent) condition, even a small increase in the true-label rate contributes to an improvement in accuracy.
Finally, Figure 8(c) shows the true-label rate on the CIFAR-100 dataset, where improvements in the true-label rate are confirmed in all cases. As shown in Table 5, this translates into an improvement in accuracy. This result can be explained as follows. (i) Proposed suppressed the decrease in the true-label rate and trained the network with fewer noisy samples.
(ii) Proposed accelerated the fit to the hard samples. The comparison of the true-label rate between Proposed and Co-teaching+ in Figure 8(c) shows that the true-label rate of Proposed exceeds that of Co-teaching+ throughout almost all epochs. In this paper, the true-label rate is defined as the proportion of samples with the true label among the small-loss samples extracted from the mini batch at each iteration. Thus, by maintaining a high true-label rate, Proposed can train the network with more true-labeled samples than Co-teaching+. Second, DNNs tend to learn simple patterns first and then gradually memorize all the samples [12]. Therefore, Proposed can be considered to fit the hard samples, which improves the generalization performance of the classifier, especially by suppressing the decrease in the true-label rate in the latter half of training.
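For reference, the metric just defined is straightforward to compute when the pre-corruption labels are kept aside. A minimal sketch, assuming numpy arrays:

```python
import numpy as np

def true_label_rate(selected_idx, noisy_labels, true_labels):
    # Fraction of selected small-loss samples whose training (noisy) label
    # coincides with the ground-truth label kept only for reference.
    sel = np.asarray(selected_idx)
    return float(np.mean(noisy_labels[sel] == true_labels[sel]))
```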
In summary, we can confirm the improvement of the true-label rate on the CIFAR-10 and CIFAR-100 datasets, and this effect improves accuracy. The results of the four noise simulations, especially on CIFAR-100, show that the proposed method reduces the decrease in the true-label rate from the middle to the latter half of training and improves the test accuracy. For the MNIST dataset, the proposed method successfully avoids the problem of a decreasing number of training samples, which occurs when the number of agreement samples is large. However, in such a case, the proposed sample selection method has a weak impact on the latter half of training. Therefore, the effectiveness of the proposed method, which sets an adaptive loss threshold for small-loss samples at each epoch, is confirmed.
D. COMPUTATIONAL COST
In this section, we compare the computation time of the proposed method with that of other methods. This experiment was conducted by using the CIFAR-100 dataset with Symmetry 50%. We used the PyTorch framework [50] to implement each model, and training was performed on two RTX A6000 GPUs with NVLINK and AMD EPYC 7402P @ 2.8 GHz. Table 8 shows the results of the average computation time per iteration for all 200 epochs. Standard, which learns a single network, has the shortest computation time of the five methods. The Proposed method and the comparison methods, which train two networks simultaneously, have longer computation times. From Table 8, we can confirm that the computation times of the Co-teaching and Proposed methods are very similar.
V. CONCLUSIONS
In this paper, we presented a method to robustly train DNNs under real-world conditions where noisy labels are expected to be heavily present in the training data. DNN training methods that use the sample selection approach, which relies on the small-loss trick based on the memorization effect, have recently become a promising way of scaling to a large number of classes. Among them, Co-teaching+ is a state-of-the-art method that improves robustness by training two networks simultaneously using disagreement data. However, in Co-teaching+, the data selected by the small-loss criterion become noisy as the number of epochs increases. In this paper, we proposed a practical solution to this problem. The key idea of the proposed method is to prevent noisy labels from becoming mixed into the mini batch data by determining the small-loss threshold at each epoch. Extensive experiments on five benchmarks demonstrate that the proposed method achieves state-of-the-art performance. Further, the improvement in the true-label rate was confirmed on a dataset that closely simulates a practical environment.
One of the limitations of the proposed method is that it relies on the disagreement strategy. Therefore, when the predictive agreement between the two networks is high, the proposed sample selection method is unlikely to be effective. The other limitation is that very difficult but clean samples are indistinguishable from noisy samples. Such samples are helpful for improving the robustness of classifiers. Our future work is to develop a method to incorporate them into the training samples.
"Computer Science"
] |
High contrast plasma mirror: spatial filtering and second harmonic generation at 1019 W cm−2
Recently, the use of plasma optics to improve temporal pulse contrast has had a remarkable impact on the field of high-power laser–solid density interaction physics. Opening an avenue to previously unachievable plasma density gradients in the high intensity focus, this advance has enabled researchers to investigate new regimes of harmonic generation and ion acceleration. Until now, however, plasma optics for fundamental laser reflection have been used in the sub-relativistic intensity regime (10^15–10^16 W cm^−2), showing high reflectivity (∼70%) and good focusability. Therefore, the question remains as to whether plasma optics can be used for such applications in the relativistic intensity regime (>10^18 W cm^−2). Previous studies of plasma mirrors (PMs) indicate that, for 40 fs laser pulses, the reflectivity fluctuates by an order of magnitude and the focusability of the beam is lost as the intensity is increased above 5×10^16 W cm^−2. However, these experiments were performed using laser pulses with a contrast ratio of ∼10^7 to generate the reflecting surface. Here, we present results for PM operation using high contrast laser pulses resulting in a new regime of operation—the high contrast plasma mirror (HCPM). In this regime, pulses with contrast ratio >10^10 are used to form the PM surface at >10^19 W cm^−2, displaying excellent spatial filtering, a well-preserved reflected near-field beam profile of the fundamental beam, and reflectivities of 60±5%. Efficient second harmonic generation is also observed with exceptional beam quality, suggesting that this may be a route to achieving the highest focusable harmonic intensities. Plasma optics therefore offer the opportunity to manipulate ultra-intense laser beams both spatially and temporally. They also allow for ultrafast frequency up-shifting without the detrimental effects due to group velocity dispersion (GVD) or reduced focusability which frequently occur when nonlinear crystals are used for frequency conversion.
Introduction
With the advent of increasingly intense ultra-short pulse laser systems (see footnotes 6–9), tailoring and modifying laser beams becomes increasingly difficult. At intensities approaching the breakdown threshold of bulk materials and crystals, for example, spatial filtering and second harmonic generation become a problem. Consequently, for very high-intensity lasers a concept that can withstand arbitrarily high intensities is necessary.
The relatively new field of plasma optics [1] can achieve just that. Relying on the reflection of light off an overdense plasma layer, instead of an optical coating, such an optic can, in principle, withstand arbitrarily high intensities.
One prominent and well-studied example of plasma optics is the plasma mirror (PM), used at laser intensities of roughly 5×10^15 W cm^−2 to significantly increase the contrast between the peak and the picosecond pedestal of high-intensity laser pulses [2]–[6]. In this scheme, the rapid transition from low reflectivity (e.g. an anti-reflection (AR)-coated surface) to a highly reflective plasma surface results in an optical switch with a rise time of several hundred femtoseconds [2]. For high-power laser systems, the laser prepulse and amplified spontaneous emission are set to be below the threshold intensity and are therefore transmitted through the PM, while the peak of the pulse is reflected. The contrast enhancement (CE_PM) is defined as the ratio of the plasma reflectivity (R_PM) to that of the AR-coated optic (R_AR), i.e. as

    CE_PM = R_PM / R_AR.   (1)

The key constraint on the quality of the reflected radiation is the quality of the reflective plasma surface generated. Therefore, the intensity of the pulse on target must stay below a certain value to avoid significant distortion of the smooth plasma surface. Previous experiments have shown that pulses obeying the constraint c_s·t < λ_Laser, where c_s is the expansion velocity of the plasma, t is the time between plasma formation and the peak of the laser pulse and λ_Laser is the laser wavelength, permit efficient specular reflection of the incident light.
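To give a feeling for this constraint, the following back-of-the-envelope check uses assumed, typical values for the expansion velocity and switch-on time; neither number is taken from this experiment.

```python
# Surface-expansion constraint c_s * t < lambda_Laser (all values assumed).
c_s = 1e7       # plasma expansion velocity in cm/s (typical ~100 eV surface plasma)
t = 1.0e-12     # time between plasma formation and the pulse peak, in s
lam = 800e-7    # laser wavelength in cm (800 nm)

expansion = c_s * t          # 1e-5 cm = 100 nm of expansion
print(expansion < lam)       # True -> smooth surface, specular reflection expected
```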
Another example of an effect specific to plasma optics is the efficient generation of high-order harmonics of the fundamental laser frequency from the plasma generated at the surface of a solid density target [1], [7]–[12]. Harmonic generation is due to two distinct processes: (i) coherent wake emission (CWE), as described by Quéré et al [8], and (ii) the relativistically oscillating plasma surface (ROS), described in detail by the theory of relativistic spikes of Baeva et al [9]. Recent experiments studying relativistic oscillating plasma harmonics have demonstrated diffraction-limited XUV beams at a wavelength of 40 nm [10] and shown the generation of harmonics extending as high as the 3000th order of the fundamental laser frequency, corresponding to photon energies of up to 3 keV [12].
In this paper, we quantify for the first time the beam quality for the reflection of the fundamental (800 nm) and the second harmonic (400 nm) of a 50 fs laser pulse interacting with a fused silica target at intensities of up to 2×10^19 W cm^−2 with a contrast ratio >10^10, constituting a high-contrast plasma mirror (HCPM). Clear evidence of significant spatial filtering of the incident laser beam, dispersion-free generation of high-quality second harmonic of the laser radiation with an efficiency of >5%, and a near-Gaussian beam profile are observed. Thus plasma optics constitute a promising route to overcome the limitations of conventional optics for spatial filtering, frequency doubling and contrast enhancement for the new ultra-high-intensity laser systems, and also give important insights into the process of high-order harmonic generation.
Experimental setup
The experiments described in this paper were performed using the ASTRA laser at the Central Laser Facility (CLF) of the Rutherford Appleton Laboratory [13]. It delivers pulses of 800 mJ in 50 fs with a contrast of 1:10^7 at 500 fs before the main pulse. The contrast of the beam was further improved using a PM in the near-field of the laser before the beam is focused onto the target. A sketch of the experimental setup is shown in figure 1. Typical images taken with cameras N and F are shown in figures 1(b) and (c), respectively, indicating that the beam from the PM is still focusable and that the flat-top beam profile is preserved. The hole in the beam (labeled y) originates from a hole in a mirror of the ASTRA compressor used to split off a small part of the beam for diagnostic purposes and is also well reproduced after reflection off the PM.
The main beam is focused at an angle of 30° onto a fused silica target, T, using a 15 cm f/3 off-axis parabola P3, resulting in a <3 µm near-diffraction-limited spot with a spot-averaged intensity of 2×10^19 W cm^−2. The profiles of the reflected fundamental and second harmonic are observed with a third camera C on a PTFE screen placed roughly 20 cm behind the target in the direction of the specular reflection. Suitable color filters CF (i.e. 800 and 400 nm interference) were used in front of camera C to distinguish the signal generated by the fundamental and second harmonic, respectively. The possibility of x-ray fluorescence was eliminated by the use of a glass pellicle placed in the reflected beam in front of the PTFE screen.

Figure 1. (a) The laser beam incident from the left side of the plot is first focused and then recollimated using two identical off-axis parabolas P1 and P2. The PM is placed in the near-field of the focusing beam to enhance the laser contrast. Near- and far-fields of the beam reflected off the PM are routinely monitored with cameras N and F. The beam is then focused onto a fused silica target using a third off-axis parabola P3. The near-field of the fundamental and the second harmonic reflected off the target are observed on a polytetrafluoroethylene (PTFE) screen using a charge coupled device (CCD) camera C. (b) Typical image of the laser near-field after reflection and recollimation off the PM. The hole in the beam, y, marked with the dashed circle, originates from a pick-off mirror after the laser compressor used for temporal characterization and is well reproduced in the reflection off the PM. The hole on the left side of the beam (x) is due to clipping in the diagnostic system and can be disregarded. (c) Typical image of the laser far-field after the reflection from the PM. (d) Lineout along the dashed line in (b). The hole in the beam is clearly visible. The large intensity modulations in the center of the beam are due to interference fringes caused by contamination on the optics in the imaging system.
Reflected beam quality and spatial filtering
In this section, we show experimental evidence of high-quality reflection and significant spatial filtering of the fundamental laser beam at intensities up to (2.0±0.5)×10^19 W cm^−2. This high-quality reflection, returning a highly focusable beam, is key in determining whether a plasma optic is a useful tool for applications. In fact, it is the necessary prerequisite to make these methods applicable in ultra-high-power laser systems.
Focal scan
To study the reflection of the fundamental laser beam off the target under different conditions, we conducted a series of measurements with different positions of the laser focus relative to the target surface, thus probing both the reflection in the near-and far-fields of the laser.
When the front surface of the target is placed 200 µm in front of, or behind, the best focus, the reflecting overdense plasma is formed in the near-field of the incident focusing laser beam at an intensity of ∼10^17 W cm^−2 that is roughly constant over the whole extent of the flat-top laser beam. Under these conditions, all features of the incident flat-top beam are reproduced well in the reflected beam, as can be seen in figures 2(a) and (c) and the red lineout in figure 2(e). The edges of the beam profile remain steep and the hole in the beam originating from the laser compressor (see figure 1) is well reproduced, suggesting that the reflectivity of our HCPM is constant over the whole near-field beam diameter. Note that the lineouts shown in figures 1(d) and 2(e) were acquired using different methods. While figure 1(b) is a direct image of the beam, figure 2(a) is an image of the diffuse reflection off the PTFE screen. This in itself will contribute strongly to the blurring of the hole in the beam visible when comparing figures 1(d) and 2(e). However, the hole in the beam is still clearly visible in figure 2(e) for near-field reflection, whereas it is completely gone when the target is positioned in the far-field. This is the first observation of PM operation at relativistic intensities >10^18 W cm^−2 and suggests that extremely high pulse contrast can indeed be achieved by a simple cascade of multiple PMs between the focusing optic and the final target, as suggested by Dromey et al [2]. This implies that extremely high-contrast interactions will be possible for the next generation of laser systems using cascaded PMs to maintain an acceptable contrast for mirror formation far beyond the regimes currently exploited using double PMs.
In contrast, when the front surface of the target is positioned in the tight focus, i.e. the far-field of the laser beam, significant spatial filtering of the reflected light is observed. In figure 2(b) and the blue lineout in figure 2(e), the hole in the beam has completely disappeared and the edges of the pulse profile are less steep. This suggests that the higher spatial frequencies of the beam, which are located further out in the wings of the focal region, are reflected less efficiently than the spatial frequencies focused in the center of the beam.

The effect of spatial filtering using a PM can be understood by comparison with previous experiments performed at lower contrast and intensities [2]–[6], [14]. Typical high-power laser systems have a flat-top near-field laser beam profile for maximum energy extraction from the amplifier chain. In the far-field (focal plane) this flat-top profile corresponds to an Airy profile. For conditions of high intensity, the central focal region is strongly reflective while the intensity in the wings is not sufficient to generate such a high-quality reflective surface. As a result, the higher-frequency contributions to the beam profile are suppressed and the resulting reflected beam is significantly smoothed.
Figure 2(f) shows the result of a calculation illustrating this. The initial model beam profile (red) is Fourier transformed to determine the intensity distribution in the focus of the incident beam. We assume that only those frequency components focused within the second minimum of the focal pattern are reflected efficiently by the PM. The inverse Fourier transform of these components yields a beam profile corresponding to the blue curve in figure 2(f). Qualitatively, the blue curve from this simple model calculation shows the same features as the experimentally measured reflected beam profile shown in the blue curve of figure 2(e): the slopes of the profile become shallower and the hole in the beam disappears.
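The model calculation just described can be reproduced in a few lines. Below is a minimal one-dimensional sketch, assuming arbitrary units and an illustrative hole position; it is not the authors' code, only an illustration of the Fourier-filtering argument.

```python
import numpy as np

# Flat-top near-field beam with a hole, propagated to the far field via FFT;
# only components inside the second minimum of the focal pattern are kept
# (modelling the strongly reflective central region of the plasma mirror),
# then transformed back to the near field.
N = 4096
x = np.linspace(-8, 8, N)                  # transverse coordinate, beam radii
beam = (np.abs(x) < 1.0).astype(float)     # flat-top near-field profile
beam[np.abs(x - 0.4) < 0.1] = 0.0          # hole from the pick-off mirror (assumed position)

far = np.fft.fftshift(np.fft.fft(beam))    # far-field (focal-plane) amplitude
f = np.fft.fftshift(np.fft.fftfreq(N, d=x[1] - x[0]))

# For a slit of full width 2 the far-field zeros sit at f = 0.5, 1.0, ...;
# keep everything inside the second minimum.
far[np.abs(f) > 1.0] = 0.0

filtered = np.fft.ifft(np.fft.ifftshift(far)).real
# 'filtered' qualitatively shows shallower edges and a filled-in hole,
# as in figure 2(f).
```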
Varying laser contrast
The key to the operation of the plasma optics in the HCPM regime is that the intensity on target stays sufficiently low prior to the switch-on of the reflecting plasma, such that the plasma surface is not deformed by preplasma expansion, resulting in specular reflection. This implies that, for high intensity (>10^18 W cm^−2), the pulse must already have a significant contrast (>10^10), so that by the time of arrival of the main pulse the target surface is essentially still at solid density, and not a tenuous plasma with a density scale length, L > λ_Laser, long compared to the laser wavelength. Figure 3 shows the reflected near-field profile for reflection of our fundamental 800 nm laser beam at an intensity of 10^19 W cm^−2 from the fused silica target under HCPM and low-contrast PM conditions, respectively. The nanosecond pulse contrast at the target position was changed from high (∼10^10) to low (∼10^8) contrast by shifting the rising edge of the gate of the fast-rise-time (∼200 ps) Pockels cell in the laser chain. In the high-contrast case, it was positioned at the foot of the main pulse, while it was shifted to a position several nanoseconds earlier in the low-contrast case.
Note that in both the low- and the high-contrast cases, the laser beam was cleaned using the PM, and the near- and far-fields of the cleaned beam look identical. This proves that in both cases the PM was switched on only a few picoseconds before the arrival of the main pulse and not by the nanosecond pedestal [2]. The consequence of this is that the shape of the prepulse determined by the timing of the fast Pockels cell is preserved, but its intensity is reduced by a factor given by the ratio of the reflectivity of the plasma to the reflectivity of the un-ionized PM substrate (see equation (1)). This has been shown in detail using third-order autocorrelation in previous experiments [2, 5].
In contrast to that, a striking difference can be observed in the reflected beam profile from the relativistic interaction on the main target for the low- and the high-contrast cases. For low contrast, the reflected beam profile breaks up completely and no beaming is observed in the direction of the specular reflection (the dashed circle in figure 3 marks the area where the specular reflection would be expected). In the high-contrast case, the behavior of the high-intensity PM is very different. Figure 3 shows a high-quality beam reflected from the target surface. This is a clear indication that the beam was reflected off a well-defined surface with a very short scale-length preplasma and that, despite the high focused intensity of the laser beam, the contrast was high enough to prevent an early ionization of the target. The different behavior for high and low nanosecond contrast can be attributed to different plasma switch-on times. In the low-contrast case, the nanosecond pedestal ionizes the target, leading to a reflection of the beam off a long scale-length preplasma. As has been pointed out earlier [2], this leads to a break-up of the reflected beam and a loss of specular reflection. In the high-contrast case, the plasma is switched on by the foot of the main pulse, not giving the target enough time to expand significantly, resulting in specular reflection. The reason why the interaction on the target is more sensitive to the intensity of the prepulse than that on the conventional PM can be readily understood when considering that the interaction intensity at the target is three to four orders of magnitude higher than at the PM, while the contrast is only two orders of magnitude better. Thus, while the prepulse level is low enough to prevent early ionization of the PM even in the low-contrast case, this is not so at the main target, leading to the destruction of the reflected beam under these conditions. This has two important implications. One is that a PM can be operated properly at intensities much higher than those quoted in [2]–[6], provided the contrast of the laser beam incident on the PM is sufficient. The other is that the reflected fundamental of the driving laser from a solid target can be used to monitor the quality of the surface at the time when the main laser pulse is incident on it. Good reproduction of the near-field indicates an interaction with a well-defined, steep density gradient, while beam break-up implies that a long scale-length preplasma was formed.
As a result, monitoring the reflected fundamental light in, for example, an ion acceleration experiment from very thin foils could give important insight into the state of the target at the instant of the interaction with the main laser beam. It is also of interest in high harmonic generation experiments from solid targets, since a surface that is 'clean' enough to reflect the fundamental laser beam is the absolute prerequisite for the observation of near-diffraction-limited harmonic beams such as those that have been demonstrated [10].
Second harmonic generation
Another interesting aspect of high-intensity laser–solid interactions is the efficient generation of the second harmonic and its high beam quality. In general, second harmonic generation with nonlinear crystals (e.g. potassium dihydrogen phosphate (KDP) and beta barium borate (BBO)) works very well. However, for the shortest pulses, frequency-doubling crystals are not ideal. Firstly, they lengthen the pulse of the frequency-doubled beam. This occurs because they do not have sufficient bandwidth and due to GVD walk-off between the fundamental and the harmonic [15]. For high-power femtosecond laser beams (such as the PFS (see footnote 8) or ASTRA Gemini (see footnote 9)), which have intensities of 2–4 TW cm^−2 in the expanded beam, nonlinear effects both in time and space become intolerable even for ultra-thin crystals (B-integral >5 in a few 100 µm of crystal).
Harmonics from solid targets may provide an attractive alternative because the second harmonic is predicted to reach reflectivities R_2ω > 0.5 R_ω [16] for a_0 > 3, provided high efficiency can be demonstrated without degradation of the beam quality. Since the harmonic generation is a surface effect, there should also be no pulse stretching due to dispersive effects or bandwidth limitations.
To investigate the properties of the second-harmonic emission as an alternative means of achieving 2ω operation, we have studied the near-field beam profile of, and the conversion efficiency using, our HCPM. Figure 4(a) shows a comparison of the reflected beam profile of the fundamental and the generated second harmonic when the target is positioned in the focus of the incident laser beam. A lower bound for the conversion efficiency is estimated by comparing the signals measured for the fundamental and the second harmonic, taking into account the filters used in the measurements. This comparison shows that >5% of the incident laser light is converted into the second harmonic. This value approaches the value of ≈0.2 achievable with nonlinear crystals at femtosecond pulse durations. Note that the conversion efficiency may in fact be substantially higher, because the reflected second harmonic is expected to have a bandwidth >2× that of the interference filter, and the redshift due to hole boring [17] will shift the second harmonic with respect to the maximum transmission of the 2ω filter.
Besides looking at the conversion efficiency, comparing the beam profiles of the fundamental and second harmonic (figure 4) yields interesting insight into the interaction of the laser with the HCPM. Firstly, the divergences of the fundamental and the 2ω beams are very different and, secondly, the beam profile of the second harmonic is much smoother and more Gaussian-like than that of the fundamental. The divergence of the 2ω beam corresponds to a diffraction-limited second harmonic beam being emitted from a source the same size as the laser focal spot. Consequently, the 2ω beam must be focusable to near the diffraction limit for its wavelength. This implies that, even at the lower bound of the conversion efficiency, the peak intensity that can be achieved at 2ω is likely to exceed that of a beam frequency doubled in a crystal, where the divergence of the frequency-doubled beam is typically similar to that of the fundamental and consequently not diffraction limited. Once the shorter pulse duration that is achievable for femtosecond pulses is taken into account, using the HCPM appears favorable in terms of the achievable peak intensity.
To understand the difference between the beam profiles, it is important to consider that the fundamental and the second harmonic have very different origins. While the fundamental is the reflection of the incident laser beam off the overcritical plasma density surface formed on the target, the second harmonic is generated during the relativistic interaction of the driving laser with the target. This difference becomes obvious when considering what kind of intensity distribution in the focus results in the experimentally measured beam profiles after the beam has expanded from the target. Figure 4(b) illustrates this by comparing the normalized intensity distributions of the fundamental and the second harmonic in the focus of the incident beam, i.e. in the source of the expanding beam. The focal distributions were calculated from the measured beam profiles via Fourier transform, taking into account the different wavelengths of the fundamental and second harmonic.
For the fundamental, we still expect substantial reflectivity of the mirror at intensities of 10^17 W cm^−2, and thus it is reasonable that the side maxima of our focal distribution at least partially reflect the incident light, as can be seen in the red plot in figure 4(b). In contrast to this, the second harmonic shows no emission originating from the side maxima of the focal distribution (blue curve in figure 4(b)). Instead, the source distribution is nearly Gaussian, with some extra energy in the wings smearing out across the first minimum of the fundamental distribution. This distribution suggests that the second harmonic is generated via the relativistic oscillating surface mechanism. The intensity in the central part of the focus is sufficiently high to generate harmonics, whereas the side lobes, with intensities below the relativistic limit, show negligible conversion efficiency [7]. Note that the central peak in the source is not narrowed despite the nonlinearity of the generation process. The transverse motion of the electrons in the oscillating surface [10] results in an oscillating mirror of about the same width as the focal spot.
While the source distribution can be nicely explained in this way, the lack of emission from the side lobes with focal intensities of a few times 10^17 W cm^−2 seems surprising at first glance. CWE harmonics can be generated at intensities as low as 10^16 W cm^−2, more than one order of magnitude lower than the intensity in the side lobes, and the conversion into CWE harmonics was found in [8] to depend only weakly on intensity. To understand why we still do not observe any emission in the wings of the focus, it is important to consider that the CWE mechanism requires a density gradient in order to generate harmonics [8, 18]. A step-like density ramp does not create any CWE [19]. If we now take into account that the second harmonic is generated at a density of n = 4n_c, simulations show that the relativistic laser pulse steepens the density profile in this density range so strongly that the scale length becomes practically zero [20]. In this case, the main source of harmonic emission is the ROS [9], which is only efficient for intensities higher than the relativistic limit of Iλ² = 1.38×10^18 W cm^−2 µm², leading to practically no emission from the side maxima of the focus.
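The relativistic threshold quoted above translates directly into the normalized vector potential a_0 = (Iλ²/1.38×10^18 W cm^−2 µm²)^(1/2). A quick numeric check with the intensities discussed in this paper; the side-lobe value is an assumed, representative number:

```python
lam = 0.8                                       # laser wavelength in micrometres
a0 = lambda I: (I * lam**2 / 1.38e18) ** 0.5    # I in W cm^-2

print(a0(2e19))   # ~3.0  -> relativistic: efficient ROS harmonic generation
print(a0(3e17))   # ~0.37 -> sub-relativistic: negligible ROS emission (side lobes)
```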
Especially interesting is that this approach is obviously not limited to generating the second harmonic. It can also be used to generate, for example, the third or fourth harmonic efficiently for application in an experiment. It is important to note, though, that the beam profile of these harmonics would have to be investigated in detail, since the CWE contribution to these harmonics is generated at higher densities in the preplasma gradient, where the effect of steepening from the laser may be less pronounced.
Conclusions
In conclusion, we have demonstrated stable operation of plasma optics under high-contrast conditions at intensities of 2×10^19 W cm^−2, three orders of magnitude more intense than in previous PM experiments [1]–[6]. We have shown that in this new regime of operation, the HCPM is capable of reproducing the beam profile of the incident laser beam very well when operated in the near-field, and functions as an efficient spatial filter in the far-field. This demonstrates that ultra-high-contrast operation is scalable to lasers with the highest peak intensities by cascading multiple PMs, and provides a means of spatial filtering that, in contrast to the conventional method with pinholes, is scalable to arbitrarily high intensities.
The observation of near-diffraction-limited and efficient reflection of the fundamental laser light off a target at intensities above 10^19 W cm^−2 also provides a precise diagnostic for truly high-contrast interactions, e.g. for thin-foil ion acceleration experiments.
The observation of dispersion-free, diffraction-limited second harmonic generation offers an advantageous method for frequency doubling ultra-short or ultra-high-power laser pulses (see footnotes 6–9). Also, owing to the harmonic nature of the generation process, this method is obviously not limited to the generation of the second harmonic, even though the beam profile of higher harmonics would need to be investigated in detail.
"Physics"
] |
How to use ambiguity in problem framing for enabling divergent thinking: integrating Problem Structuring Methods and Concept-Knowledge theory
Collective behaviours and participatory models can be hampered by the presence of ambiguity, which reflects the multiplicity of interpretations that different actors bring to a modelling exercise. Although commonly overlooked in modelling, how ambiguity in subjective problem frames is embraced determines the quality of the participatory modelling process. This work describes an innovative approach based on the integration of Problem Structuring Methods, specifically Fuzzy Cognitive Mapping (FCM), and Concept-Knowledge (C-K) design theory, as a means to transform ambiguity from a barrier into an enabling factor of divergent thinking in participatory modelling. The integration of methods allows the identification and analysis of ambiguity in problem framing, avoiding the polarization of viewpoints that hampers the development of collective behaviours. However, individualistic problem frames can still yield organized collective actions when these frames are sufficiently aligned. Often environmental policies fail because decision-makers are not aware of the misalignment and their decisions are based on wrong assumptions about the others' problem frames. This work discusses the results of two case studies aimed at designing environmental policies for groundwater protection in the Kokkinochoria area (Republic of Cyprus) and the Apulia Region (South-East Italy), demonstrating the potential of FCM and C-K theory integration in supporting divergent thinking in participatory modelling.
Introduction
Understanding the relationship between actors' knowledge, behaviour and action is a key challenge for modelling approaches (White, 2017). Participatory activities are expanding modelling beyond prediction in order to include processes co-designed with stakeholders and inclusive of multiple knowledge forms (Brugnach and Ingram, 2012). As White (2016) discussed, originally OR focused on the objectivity of the scientific method, and the adopted models assumed a singular version of rationality (Jackson 2006, Keys 1997, Mingers 2000) independent from different perceptions (Ackoff 1962 and 1978, Lesourne 1990, Mingers et al. 2004, Raitt 1979). However, soft modelling approaches investigated the possibility of using qualitative methods, including subjective values, to support decision-making (Checkland et al. 2004, Davis et al. 2010, Eden et al. 2006, Mingers 2011, White et al. 2007, Yearworth et al. 2013). Capturing differences in problem frames, through models of viewpoints, enhances the understanding of a problematic situation and helps support its resolution (Eden 1992, Giordano et al. 2017a, White 2017). In doing so, the presence of ambiguity in the perception of the problem to be addressed, between model developers and model users, and among different users, challenges the effectiveness of participatory modelling approaches (e.g. Brugnach et al. 2007, Janssen et al. 2009, Wood et al. 2012). Ambiguity is a type of uncertainty that indicates the confusion that exists among actors in a group regarding what the concerning issues, problems or solutions are (Weick 1995). It reflects the multiplicity of interpretations and meanings different actors bring to a modelling exercise.
Ambiguity can be both a source of creativity and a source of conflict (Giordano et al. 2017a). While it is commonly overlooked during modelling, how ambiguity is resolved and embraced determines the quality of the participatory process supported by the modelling exercise, influencing what is being modelled and the outcomes generated (e.g. Ingram 2012, Leskens et al. 2014). This is particularly true in participatory modelling activities for the design of environmental policies, where a plethora of different decision-actors, with different, and potentially conflicting, goals and values need to be involved. Furthermore, considering behaviour in participatory modelling activities should strengthen the relationship between "representing" and "intervening", focusing on the mediating role of the model and its social practice (White, 2017).
Within this context, what is the most suitable approach for representing different values, goals and knowledge when engaging stakeholders in a participatory modelling process? Providing an answer to this research question is the main scope of this work.
On the one hand, representing the different contributions could produce several benefits in the modelling exercise. Firstly, integrating different pieces of knowledge makes it possible to develop a model capable of supporting policy- and decision-makers in accounting for the different issues related to the problem at stake. Secondly, it could have a positive effect on the stakeholders' long-term engagement in the participatory activity. Evidence shows that if the participants are able to recognize their contributions in the developed model, then they will develop a sense of ownership toward the model itself, which could guarantee long-term engagement (Giordano and Liersch, 2012).
On the other hand, integrating different perspectives in the modelling process raises several issues.
Firstly, dealing with conflicting problem understandings requires efforts from the modellers to achieve a consensus among the participants. Secondly, power issues need to be accounted for. That is, are the collected pieces of knowledge equally important, or do different weights have to be assigned according to the expertise of the stakeholders (Krueger et al., 2012; Giordano and Liersch, 2012)?
Addressing the above-mentioned issues is of utmost importance in order to facilitate the participatory modelling process and to make the obtained model suitable for supporting the decision-making process.
This work describes an innovative approach based on the integration between Problem Structuring Methods (e.g. Checkland 2000, Rosenhead 2006), and specifically Fuzzy Cognitive Mapping (FCM) (Kosko 1986), and Concept-Knowledge (C-K) theory (Hatchuel et al. 2003, Agogué et al. 2014b, Le Masson et al. 2017), as a means to transform ambiguity from a barrier into an enabling factor of divergent thinking in participatory modelling. The activities described in this work demonstrate the suitability of the integrated approach to avoid the polarization of viewpoints, a condition that can greatly interfere with the development of participatory models for collective actions. To this aim, as suggested by some authors (e.g. Brugnach et al. 2011, Giordano et al. 2017a, Pluchinotta et al. 2019a), we assumed that divergent frames can still yield organized collective actions when the different problem frames are sufficiently aligned and a "shared concern" among the stakeholders is built, avoiding the formation of wrong assumptions about the others' problem frames.
The proposed approach was experimentally implemented in two case studies aiming to design environmental policies for water management and groundwater protection in the Kokkinochoria area (Republic of Cyprus) and the Apulia Region (South-East of Italy). The obtained results demonstrate the potential of FCM and C-K theory integration in supporting divergent thinking.
This chapter is structured as follows: after the present introduction, section 2 describes the integrated approach and discusses the case studies, while concluding remarks and the lessons learned are reported in section 3.
Integrating Problem Structuring Methods and Concept-Knowledge theory
In order to provide an answer to the research question, an innovative approach based on the integration of PSM and C-K theory was designed and implemented in the two case studies described further in the text.
The developed multi-methodology is meant to facilitate the alignment of different problem frames and of the available knowledge, and to enable the creative process for innovative policy design and consensual participatory modelling exercises.
On the one side, C-K theory supports innovation management within a generative design process. It is based on the distinction between two expandable spaces: a space of Concepts (C-space) and a space of Knowledge (K-space). The co-evolution of the C- and K-spaces represents the generative process (Hatchuel et al. 2003). In this work, the K-space expansion phase is supported by making the decision-makers aware of the main sources of ambiguity, while the C-space expansion is realized by accounting for the policy alternatives that could be implemented to overcome the main differences in problem framing.
On the other side, in this work FCM is used to elicit and structure individual problem frames, contributing to identifying and analysing the main elements of ambiguity and those that can alter the modelling outcomes. Thereafter, the results of the ambiguity analysis are used as elements of the K-space, supporting the creative process within the C-K theory framework.
The following phases were identified in the proposed methodology: 1) PSM, and specifically Fuzzy Cognitive Mapping activities, are used to elicit and structure the stakeholders' individual problem understandings and to detect the most important elements in their mental models; 2) ambiguity analysis is implemented to detect and analyse similarities and differences in problem frames. To this aim, two elements were accounted for, i.e. the most central elements in the FCM and the expected dynamic evolution according to the FCM simulation.
Starting from the results of the previous phases, a C-K theory-based tool, namely P-KCP, designed and implemented in the domain of policy design, was applied in order to facilitate the alignment of the problem frames and the creation of the shared concern as a starting point for the generation of policy alternatives (see Pluchinotta et al., 2019a for details). Therefore: 3) phase K aims to gather missing information and to build a comprehensive summary of current knowledge about the issue under consideration. It combines the outputs of the ambiguity analysis with scientific literature, available data, emerging technologies, best practices, etc. This phase supports the building of the overall K-space by combining and aligning the individual stakeholders' K-spaces, in order to reach a shared concern and a common knowledge base across viewpoints. 4) Phase C develops and expands the C-space, supported by the creation of a shared base of knowledge. Phase C consists of a one-day generative workshop in which stakeholders collectively evaluate and discuss the elements representing the dominant design (i.e. traditional policy alternatives) and suggest expansions of the C-tree. The tree-like structure of the C-space makes it possible to illustrate the various policy alternatives as concepts connected to the initial design task under consideration.
5) An integrated model is developed with reference to the aligned problem frame defined during phase K. The model can simulate the policy scenarios designed during phase C and support further expansion of the K-space by introducing elements concerning the potential impacts of the selected policy alternatives.
The proposed multi-methodology was implemented in two case studies aimed at designing environmental policies for groundwater protection in the Kokkinochoria area (Republic of Cyprus) and the Apulia Region (South-East of Italy). For the sake of brevity, the case-study activities are used in this work to describe the different steps of the adopted approach.
Case studies description
The purpose of this section is to briefly present the insights from the application of the integrated methodology combining FCM and the C-K framework to support the co-design of environmental policies for groundwater (GW) protection in two case studies, namely the Kokkinochoria area (Republic of Cyprus) and the Apulia Region (South-East of Italy).
Generally, Mediterranean regions are heavily dependent on GW for socio-economic development (e.g. Zikos et al., 2015). Both areas under analysis are characterized by seawater intrusion caused by intensive agricultural activities in coastal areas, which rely on both surface water and GW (e.g. Pluchinotta et al., 2018; Zikos and Roggero, 2012). This situation results in an increasing imbalance between water withdrawals and GW recharge, degrading both GW quantity and quality (Pereira et al., 2009). Furthermore, both challenging contexts are characterized by the presence of several decision-makers with conflicting objectives and different problem formulations (e.g. Ferretti et al., 2018).
Indeed, most of the policies implemented in the Mediterranean basin aim to improve the efficiency of GW use through innovative irrigation techniques or to restrict GW use through tight control of farmers' activities (Giordano et al., 2015). Nevertheless, evidence suggests that those policies have often failed to achieve a sustainable use of GW, due to an oversimplification of the ambiguity in the associated problem frames (Giordano et al., 2017a).
Fuzzy Cognitive Maps
FCM was used to elicit and structure the different stakeholders' problem frames. The basic assumption is that, to make ambiguity a source of creativity in the co-development of policies, decision-makers need to be aware of the existence of different, and equally valid, problem understandings.
The first issue to be addressed concerned the selection of the experts to be involved. In order to minimise selection bias and stakeholder marginalization (Reed et al., 2009), a top-down selection procedure was combined with preliminary interviews (Harrison and Qureshi, 2000; Prell et al., 2008), which allowed the set of stakeholders to be involved to be widened (Giordano et al., 2017b).
The individual FCMs were developed through semi-structured interviews, collecting the stakeholders' perceptions about the cause-effect chains affecting GW management and protection in the two study areas. The interviewees described the causes and the direct and indirect impacts of GW mismanagement. The interviews were analysed to detect the keywords in the stakeholders' argumentation (the variables in the FCM) and the causal connections among them (the links in the FCM). Figure 1 shows how the stakeholders' narratives, collected during the interviews, were translated into FCM variables and relationships. The relationships between variables can be represented through an adjacency matrix (e.g. Pluchinotta et al., 2019b). In the FCM, this matrix allows the overall effects of an action on the elements in the map to be inferred qualitatively, as illustrated below.
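As a minimal illustration of this inference step (our sketch: the variables and weights are hypothetical, and the tanh update with self-memory is one common FCM variant, not necessarily the exact rule used in the case studies):

    import numpy as np

    # Hypothetical FCM with three variables; the weights are illustrative only.
    # W[i, j] is the signed strength of the causal link from variable i to j.
    variables = ["GW abstraction", "Seawater intrusion", "Agricultural production"]
    W = np.array([
        [0.0, 0.8,  0.4],   # abstraction -> intrusion (+) and production (+)
        [0.0, 0.0, -0.7],   # intrusion  -> production (-)
        [0.5, 0.0,  0.0],   # production -> abstraction (+, feedback loop)
    ])

    def fcm_step(state, W):
        """One qualitative inference step: propagate activations along the
        links and squash the result back into [-1, 1] with tanh."""
        return np.tanh(state @ W + state)  # '+ state' keeps self-memory

    state = np.array([1.0, 0.0, 0.0])  # perturb "GW abstraction"
    for _ in range(20):                # iterate towards a quasi-steady state
        state = fcm_step(state, W)
    print(dict(zip(variables, state.round(2))))

Reading off the signs of the final activations gives the qualitative overall effect of the initial action on every element of the map.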
Ambiguity analysis
This phase aimed to detect and analyse the main differences and similarities among the different stakeholders' problem understandings through two sequential analyses. Firstly, the FCMs were examined to detect the most central elements in the stakeholders' problem understanding, the so-called "nub of the issue" (Eden, 2004). Secondly, the FCM capability to simulate qualitative scenarios (e.g. Borri et al., 2015) was used to describe the expected evolution of the variables' states according to the stakeholders' problem understandings.
Concerning the first analysis, the FCM centrality degree was assessed: the higher a variable's centrality degree, the more central the variable and the more important the concept in the stakeholder's perception. Santoro et al. (2019) describe the methodology for assessing the centrality degree. The second analysis aimed at comparing the ways the involved stakeholders perceived the evolution of the system through the change of state of the FCM variables. To this aim, the FCM capability to simulate qualitative scenarios was adopted (Kok, 2009). Two different scenarios were simulated and compared, i.e. the Business-As-Usual (BAU) scenario and the GW overexploitation scenario. The comparison allowed us to identify the variables that, according to the stakeholders' mental models, would be affected in case of a reduction of GW quality due to overuse for irrigation purposes. Figure 4 shows the comparison between the two scenarios for the Water Development District (WDD) in Cyprus.
Figure 4 - Comparison between the BAU and GW overexploitation scenarios according to the Cyprus WDD's mental model
The graph shows that, according to the WDD's mental model, the overuse of GW for irrigation will lead to a decrease in water quality and an increase in seawater intrusion, with a consequent reduction in agricultural production. These are the most affected variables in the WDD's mental model. Thus, the greater the impact of GW overuse on a variable in the stakeholder's mental model, the more central that issue is in the stakeholder's problem understanding.
The most important elements were, hence, detected by aggregating the FCM centrality degree and the impact degree, as shown in Table 2. These elements represent the most important goals to be achieved through the implementation of a GW protection policy, according to the stakeholders' problem frames. A similar analysis was carried out for the Capitanata case study. The ambiguity analysis allowed us to analyse why and where the stakeholders' problem understandings differ from each other. The results of this analysis were used to support the creation of a shared concern and the gathering of knowledge on the issue under consideration, i.e. phase K. A sketch of this scenario-and-centrality step is given below.
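Purely as an illustration of the mechanics (our sketch: the scoring rule is one plausible reading of the aggregation of centrality and impact degree, not the authors' published formula, and the variables and weights are again hypothetical):

    import numpy as np

    variables = ["GW abstraction", "Seawater intrusion", "Agricultural production"]
    W = np.array([[0.0, 0.8, 0.4], [0.0, 0.0, -0.7], [0.5, 0.0, 0.0]])

    # Centrality degree: total absolute weight of a variable's incoming and
    # outgoing links (one common definition; see Santoro et al., 2019).
    centrality = np.abs(W).sum(axis=0) + np.abs(W).sum(axis=1)

    def simulate(W, clamp=None, steps=20):
        """Iterate the FCM; optionally clamp one driver variable to a value."""
        state = np.zeros(len(W))
        for _ in range(steps):
            state = np.tanh(state @ W + state)
            if clamp is not None:
                idx, value = clamp
                state[idx] = value
        return state

    bau = simulate(W)                               # Business-As-Usual
    overexploitation = simulate(W, clamp=(0, 1.0))  # force high GW abstraction
    impact = np.abs(overexploitation - bau)         # per-variable impact degree

    # Aggregate the two indicators (here a simple normalized sum) to rank goals.
    score = centrality / centrality.max() + impact / max(impact.max(), 1e-9)
    for name, s in sorted(zip(variables, score), key=lambda p: -p[1]):
        print(f"{name}: {s:.2f}")

The variables with the highest combined score then play the role of the most important goals discussed above.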
C-K theory and the shared concern
A C-K theory-based tool has been designed and tested in the domain of policy design (see Pluchinotta et al., 2019a for details). This participatory policy design tool (P-KCP) was applied in both case studies as methodological support for the K- and C-space expansions.
Specifically, within the policy design process decision-makers operate under conditions of uncertainty, due to limited information about policy outcomes, which can undermine policy effectiveness and complicate policy development (e.g. De Marchi et al., 2016; Nair and Howlett, 2016; Tsoukias et al., 2013). It has been recognised that novelty in the alternatives' design phase of a decision-aiding process can come through the expansion of the solution space (Colorni and Tsoukiàs, 2018). The expansion of the solution space can be obtained through the evolution of problem formulations, due to revision or update (Ferretti et al., 2018) and to the alignment of ambiguous problem frames (Giordano et al., 2017a). Within this context, design theory describes design processes through a formal methodology, supporting the capacity to be innovative in the generation of policy alternatives (Pluchinotta et al., 2019a).
Briefly, modern design theories focus on generating objects that are partially unknown and are progressively discovered during the design process itself (Hatchuel et al., 2007; Agogué et al., 2014a). Thus, C-K theory is based on the distinction between two expandable spaces (Hatchuel et al., 2002). The K-space represents all the knowledge available to a designer at a given time; its elements are propositions whose logical values are known (i.e. the designer can define them as true or false). The C-space, by contrast, is a set of propositions whose logical status is unknown, i.e. it cannot be determined with respect to a given K-space (Hatchuel et al., 2002; Agogué et al., 2014b).
The design process is thus defined as the co-evolution of the C- and K-spaces: concepts are elaborated by using knowledge, and new knowledge is gained through the elaboration of concepts (Figure 5) (Le Masson et al., 2017).
Figure 5 -The C-K approach
Phase K aims to build a shared base of knowledge supporting, through its expansions, the subsequent generative phase C. It uses the FCM and ambiguity-analysis outcomes to support a participatory group activity in which the different stakeholders' problem frames are presented and discussed. It detects and analyses potential conflicts among stakeholders, leading to the definition of common knowledge and of a shared concern on the GW protection problem. The shared concern, namely a common problem formulation among the involved stakeholders, represents the starting point for the generation of policy alternatives.
Afterwards, a generative stakeholder workshop for building and expanding the C-space was carried out in both case studies to design policy alternatives.
During the one-day generative workshop, the process of designing policy alternatives was supported and managed according to the C-K principles of innovation management. In phase C, stakeholders evaluate the dominant design (traditional policies) and propose innovative policy alternatives through the expansion of the C-space. Thanks to its tree-like structure, the C-space makes it possible to illustrate the various alternatives as concepts connected to the "initial design task" (Agogué et al., 2014b). It represents the map of all possibilities, highlighting the dominant design and supporting the search for new alternatives. Figure 6 shows the C-tree produced for the Apulia case study, where the initial design task was the design of a GW protection policy for the agricultural sector. In both case studies, the discussions of phase C led to a portfolio of preferred policy alternatives shared by all the stakeholders and to the introduction of a few innovative policy alternatives. For instance, in the Apulia case study, the alternative "shared management of GW aquifers" was recognised as a promising long-term strategy, enhancing the innovative management of GW through a collective decision-making process. A shared GW governance could empower the farming community through reward regulations for virtuous GW use, overcoming the traditional "command and control" policy. The starting points for this C-space expansion were: i) a specific piece of knowledge in the shared K-space, brought by one stakeholder, on common-pool resources management following Ostrom's (1990) work, which introduced awareness of the attributes defining the GW resource (i.e. the K-space expansion); ii) the outcomes of the ambiguity analysis, which identified the pivotal role of the variable "illegal pumping" in different stakeholders' mental models (Pluchinotta et al., 2019a). Figure 6 uses a colour code: i) the branches describing known policy alternatives are coloured black; ii) those in blue indicate policy alternatives attainable using existing knowledge or a combination of K-space subsets; and iii) the paths in green represent innovative policy alternatives, requiring the expansion of the K-space in order to enlarge the C-space. Similarly, in the Kokkinochoria case study the results of the discussion during the K-space development were used to align the stakeholders' mental models and to enable the development of an integrated model. It is worth noting that in this case the misalignment that was hampering the development of the integrated model was not caused by a lack of common elements among the mental models.
The misalignment was mainly due to differences in the perceived polarity of the causal connections and, thus, in the expected evolution of the states of the variables. To overcome ambiguity as a barrier, participants were asked to discuss the expected evolution of the system variables.
A consensus was achieved on the variables concerned. Figure 8 shows the aggregated FCM used for further discussion of the effectiveness of the proposed alternatives with the stakeholders.
Discussion and conclusion
The results collected in the two above-mentioned case studies allow us to draw some conclusions concerning the suitability of the integrated PSM and C-K approach to support analysts and modellers in dealing with ambiguity in problem framings during participatory modelling exercises for designing innovative policy alternatives. The PSMs, and specifically the FCMs, demonstrated their capability to structure the complex cause-effect chains affecting the stakeholders' problem understanding. The ambiguity analysis, based on the FCM modelling approach, allowed us to detect divergences and, in some cases, potential sources of conflict in GW management. These elements were at the basis of the convergent thinking phase. Making the different stakeholders aware of the differences and similarities forced them to critically analyse their own problem framing, to identify the assumptions they usually made concerning the behaviour of the other actors, and to challenge those assumptions. In many cases, the discussion based on the results of the ambiguity analysis allowed the individual problem frames to change and a satisfactory alignment to be achieved, allowing the co-definition of the shared K-space capable of generating the policy alternatives for GW protection in the two case studies. Thus, the evidence collected during the case studies demonstrates that making the decision actors aware of the existence of ambiguous problem framings is key to enabling creative and collaborative decision-making processes.
The analysis of the results obtained in the two case studies allowed us to detect potential limitations of the adopted approach. Firstly, it requires time and resources in the analysis phase, i.e. FCM development and ambiguity analysis. Nevertheless, the results showed that making the participants aware of the existing differences greatly facilitates the discussion. It is therefore possible to state that the time-consuming first part of the process enabled a fast and effective convergent thinking phase.
Secondly, the adopted method requires the long-term engagement of the stakeholders. Since the divergent thinking phase is based on the elicitation and analysis of individual perceptions of the problem frame, having the same stakeholders participate in all the different phases is key to reaching collective behaviour and to the success of the whole process. Participants are sources of information, and their opinions may also be compared against available data, contributing to further refinement of the model (Rouwette, 2017). To this aim, efforts were made from the early phases of the method's implementation to meet the actual needs and concerns of the different stakeholders. The results of the individual FCM analysis concerning the main goals to be achieved were used to enhance communication between the analysts and the participants and, thus, to guarantee the stakeholders' involvement in the different phases of the process.
Lastly, the stakeholders expressed the need for a quantitative assessment of the effectiveness of the selected measures in protecting GW. To this aim, the models developed during the interaction with the stakeholders in the two case studies are being used to provide further information to the involved stakeholders.
From a behavioural research perspective, as argued by several scholars (e.g. Hämäläinen et al., 2013), there is a growing need to incorporate different perceptions into modelling interventions (White, 2016). In this sense, the proposed study offers interesting insights into the understanding of collective behaviour, proposing an integrated method to address behavioural concerns and to avoid the use of "objectivistic" behavioural assumptions in participatory models.
"Computer Science"
] |
Interactional Dynamics Perspective on Academic E-mail Correspondence
The article examines academic e-mail correspondence as a special type of writing in the electronic environment that has a dynamic, interactive, dialogical, and distributed character. The focus is on the dynamics of interaction between the correspondents, such as contact setting, orientation, and co-functions; the text of an e-letter is regarded as indices of the writer's state, or affordances, in terms of ecological linguistics. The establishment of a consensual domain of interaction brings about a new stage of cognition emergence, which may lead to distributed learning. Co-writing is like a dance that unfolds in the world and with others. Recognition that writing via the Internet is a distributed process across time, space, and minds may help in understanding the theory of cognition and learning.
1. Introduction
Ontologically, writing and speaking are different phenomena. In general, writing has its constraints (there is neither direct contact, nor non-verbal means of communication, nor paralinguistic components, and thus not much interactivity on the whole), and it also has extended capacities (it allows planning and control of the message, which can be revised or edited, etc.; it offers geographical reach; and a permanent record can be observed on any time scale). Writing in the context of the electronic environment is different and affords greater opportunities for interaction. Moreover, one can witness a shift from writing as something stable, abstract, and timeless to writing on the Internet as an interactive and dialogical activity of sense-making together with others (co-writing).
The objectives of this paper are to single out the main features of e-mail correspondence in general, to interpret text from the ecological perspective, and to analyze a given e-mail correspondence in terms of interaction and distributed language theory.
The cognitive dynamics of an e-mail correspondence resemble those of talk, at least in their "other-orientation", though it lacks the direct interaction and the non-verbal or paralinguistic means of "real" face-to-face communication. E-mail correspondence, although one of the most widespread types of communication in the modern world, has not yet become a well-studied object of linguistic analysis.
From a bio-socio-cognitive, or distributed, approach, e-mail correspondence can be viewed as a dynamic, interactive, dialogical, and distributed activity aimed at creating a consensual domain of linguistic interactions. A consensual domain of interactions is defined by Maturana as "… a domain of interlocked (intercalated and mutually triggering) sequences of states, established and determined through ontogenic interactions between structurally plastic state-determined systems" (Maturana, 1975). During the correspondence, people recursively coordinate, orient, and complement each other, creating a consensual domain of interactions; if this succeeds, it may lead to distributed learning, or some new cognitive state may be triggered.
The methods used in the paper are heuristic and holistic. The heuristic method is the collection of linguistic act data: in this case, an e-mail correspondence between two scholars consisting of 18 e-mails, whose e-texts are observed and analyzed taking into account pragmatics, semiotics, and the modes of interaction during the e-correspondence. The holistic, or ecological, method is applied as a general theoretical method of investigation because a text is viewed not as language symbols but as indices of the writer's state observed by the reader, where the environment plays an important role in the interpretation process. The writer presents affordances, or opportunities for the observer's interpretation, with the help of the generated text. This method is popular in modern cognitive linguistics, being multidisciplinary; it allows text to be viewed differently from orthodox linguistics (Cartesian dualism) and various types of interaction in the process of e-correspondence to be described. E-mail correspondence is dynamic in character since it runs in a series of e-letters, where one complements another and together they create a single whole picture of communication. It proceeds as if in the flow of a dialogue, where the outcome of the communication is unknown to the correspondents and depends on the dynamics of their interactions.
The second feature of e-mail correspondence is its interactivity. It presupposes close interaction despite distance, which is made possible by fast replies. This interactivity shapes what the correspondents do, feel, think, and write together.
The third thing to stress is its dialogicality, or even "double dialogicality" (Linell, 2009). The notion of double dialogicality was introduced by Linell, who was inspired by Bakhtin's dialogical approach; it implies that communication and dialogue are not just between the two people in the conversation, but involve a third party to which they might refer, such as traditions, ideas, conversations, and practices they previously experienced and which might be applied in the current conversation. Writing an e-mail, we may refer to "remote audiences", to our experience and interactions with them. These "third parties" may be used as aids or partners in the correspondence.
E-mail correspondence is obviously distributed writing, in the sense that it is distributed through space (people in different parts of the world communicate without hindrance), distributed through time (products of earlier events may transform later events), and distributed across the members of a social group. We often write together with others, either by borrowing resources from the Internet and/or by co-constructing texts. "Language can be traced to how living bodies co-ordinate with the world. On this perspective, far from being a synchronic 'system', language is a mode of organization that functions by linking people with each other, external resources and cultural traditions" (Cowley, 2011).
It is also distributed in minds, taking into account the conception of ourselves as expanded, extended, and dynamic. While communicating, our aim is not to be separated from people, with our thoughts unknown and closed to others; on the contrary, we want to be open, to express ourselves, to overcome boundaries and reach mutual understanding. At the same time, our thoughts are shaped in the process of communication: we are whom we are socializing with. "We ourselves are dynamically distributed, boundary crossing, offloaded, and environmentally situated, by our very nature <…> A person is not a self-contained module or autonomous whole. We are not like the berry that can be easily plucked, but rather like the plant itself, rooted in the earth and enmeshed in the brambles" (Noë, 2009). A person is not his or her brain, and we are not locked up in a prison of our own ideas and sensations. Social and cognitive distribution presupposes that our atmosphere is not separate from others'. Our feelings, habits of thinking, patterns of words, and so on are parts of a complex web that links us all together; it is our "ecology of thought". Isaacs argues that this ecology is the living network of memory and awareness, one that is not limited to any single person but is in fact held collectively. Out of this ecology comes the collective atmosphere in which we all live and work (Isaacs, 2009).
Bakhtin introduced the term "existence from without": "A word (or in general any sign) is interindividual. Everything that is said, expressed, is located outside the 'soul' of the speaker and does not belong only to him (or her)" (Bakhtin, 1986). "I" exists and shows up only in interaction with, or relation to, "You". Outside there are two phenomena, "I" and "Non-I"; inside they appear as one inseparable whole. "I" as such does not exist without "You"; there is no pre-given "I", it is born, formed, and shows up only in relation with "You", where "You" is a part of my environment. So we are plugged-in organisms. Our selves, our attitudes, and our motivations are discursively constructed; that is, they are created, developed, and maintained in interaction with others (Linell, 2009) and with ourselves. Co-writing is interaction in the writing environment where we orient one another.
This opens up a new set of questions and problems. Among them is how we should theorize the text of an e-mail correspondence and how co-constructed writing is used for learning or, in Maturanian terms, how the correspondence shapes the rise of a consensual domain.
3. Text of E-mail
The method is based on investigating the inscriptions from the standpoint of pragmatics and on considering the outcome: the construction of a consensual domain, the arising of a new state of mind, and a possible distributed learning process. The key lies in how to theorize the resulting inscriptions of the e-mail correspondence.
To investigate this, we need a method, one that does not treat text as "language" fixed by a code, a set of definite structures to be analyzed, but rather an investigation that takes into account what acts of writing and their consequences mean for those involved: basically, a view where we focus not on texts but on how these interact with cognitive dynamics. On the activity theory view, they are mediational means. As Peter Jones claims in his critical paper, there is no code when external speech is turned into inner speech; in fact, there is no internalization or appropriation. The text of an e-letter is not a mediating means, because if it were, we would perpetuate the language myth and a telementation theory, the idea that a language sign mediates between persons A and B, and that B internalizes the content and learns. On that view, language constitutes a self-contained realm of meaningful forms, an idea we are totally against (Jones, 2009). Following Gibson's ecological approach to perception, the body of the e-mail correspondence is not sense-making symbols in themselves, but affordances, or opportunities for behavior, which are meaningful to an observer. "The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill" (Gibson, 1979). The question arises whether affordances are properties of the environment or of the organism. Gibson himself answers this question in the following way: "An affordance is neither an objective property nor a subjective property; or it is both if you like. An affordance cuts across the dichotomy of subjective-objective and helps us to understand its inadequacy. It is equally a fact of the environment and a fact of behavior. It is both physical and psychical, yet neither. An affordance points both ways, to the environment and to the observer" (Gibson, 1979). A. Chemero comes to the conclusion that affordances are not properties of the environment: they are relations between particular aspects of animals and particular aspects of situations. Properties of the environment are not affordances in the absence of complementary properties of animals; affordances belong to the animal-environment system (Chemero, 2009). Thus the text of an e-mail does not contain affordances by itself; affordances are not properties of the e-mail but occur in the relation between the e-mail and the observer (the person who perceives it).
What we perceive is not the text itself but the relation between us and the text. According to radical empiricists, perception is direct because it is an act that includes the thing perceived. "Perceivables are relations between perceivers and aspects of situations" (Chemero, 2009). Thus, reading an e-mail, we perceive the relation between us and the e-mail, and another person perceives the relation between him or her and the same text, so that perceptions can remain private. The main issue here is that what we perceive is not in the environment alone; it is in the relation between an observer and the environment. These relations are open and dynamic, which is why text interpretation can be different every time it is perceived.
On the Maturanian alternative, the text of the e-letters is indices of an observer's states. In contrast to solo writing, these inscriptions are more than the first correspondent's indices. They are also 'external resources' that influence the dynamics of the three as an extended (or distributed) text-creating system (the first correspondent revises his wordings, which influence later versions of himself as he comes back to and revises the text; they also influence the second correspondent, who reads them, and the third party, an "objective observer" who reads and analyzes the correspondence between the two). Therefore, we orient not just to the same inscriptions but to inscriptions that, to an extent, have similar effects (both on ourselves and on each other). And while being indices of the observer's states, they are also inscriptional structures that function as external resources.
4. Two Modes of Coordination
Two modes of coordination can be observed in e-mail correspondence, as in any other activity, according to Imoto, who calls them horizontal and vertical (just for convenience of understanding). Horizontal coordinations are related to the interactions of the organism with its medium (the environment and the other writer as a part of the environment); vertical coordinations are explained as the interactions with its own internal states, which are originally derived from the horizontal coordinations and then recursively coordinated through the nervous system as if they were another independent domain of interactions (Imoto, 2010). Reflection, planning, revision, and editing can be referred to vertical coordinations. The first correspondent, writing the first e-mail in the sequence, coordinates mostly vertically, that is, with him- or herself, but at the same time dialogically, based on former interactions with others and priming the possible interactions with the second correspondent; a letter is then produced as indices of the writer's inner state and as resources, or affordances, for an observer, directed to the second correspondent. The second correspondent's behavior is coordinated horizontally and vertically, and as a consequence his or her behavior is changed. The notion of "zeitnot" (time pressure) is crucial in understanding that the written message is perceived "here and now" and explains why the writer's state of ideas is never totally equal to the one he or she had before.
The first correspondent and the text that he or she is creating can be seen as a 'system'. Thus the text becomes, broadly, indices of the first correspondent's states (or of ideal states, which result from both indexical acts and subsequent revision). The relation between a person who perceives the e-mail and a written text that influences the dynamics of the three (a reader, a writer, and an onlooker) can be viewed as an extended text-creating system (Karamalak, 2012). This new approach to distributed writing can be put into the applied frame of e-mail correspondence investigation or analysis. The e-mail correspondence under analysis here consists of 18 e-mails between two scientists, a professor and a PhD student. As a result, the following steps in the rise of a consensual domain in an e-mail correspondence can be pointed out: 1) a system coupling with another system (two correspondents).
2) A mutual frame of reference (shared inscriptional co-indexing, knowledge background, mutual issues of concern, the limits of each other). For Maturana (1978), when encountered (in revising or in reading the other), it sets off perturbations that echo the limits of the other and, at the same time, show our attempts to extend boundaries. 3) Mutual triggering of inscriptional sequences. There are different types of interactions that lead to the emergence of a new cognitive state, such as contact setting, orientation, and co-functions. a) Contact setting and interaction (asking and answering questions, sharing knowledge in terms of ideas, bringing examples, dialogical co-action): an inscription produced by the first correspondent gives rise to an observable effect in an inscription by the second correspondent. The first stage of interaction includes the following: contact setting, an introduction of oneself ("I am a PhD Student <…> I am currently collaborating with <…> What fascinates me is that <…> I switched my interest from <…> I took a summer course …"), and bridging the gap between "I" and "You" by mentioning the other correspondent ("it was very fortunate to have you as a guest …").
The second e-letter (the response) shows a positive reaction to the contact setting ("I would be pleased to add you…") and some orientation ("If you have any thoughts on how we should conceptualize…"). As observers, we can judge the limits and the desire to expand boundaries, and thus the motivation for further interaction: the first author is a beginning researcher and the second a prominent one, but at the same time the mature scientist also has some limitations ("But I have no knowledge of anyone who has written about any such concept …").
The first stage of interaction can be characterized as simple and very cautious, involving probing, or feeling the ground, so to speak. The most important thing here is acceptance or non-acceptance of the partner. One person makes a step towards the other with the desire to interact and waits for the other's reaction. In gestures, it could be seen in stretching out a hand and handshaking. In writing, we cannot judge these dynamics by body movements; the only thing we can judge by is the dynamics of the text. At first, the two correspondents seem to be "far away" from each other; there is not much mutuality, only motivation, the desire to learn something new, and interest in the common issue. However, this is more than enough for a good beginning. The first period of interaction can be awkward because people are adjusting themselves to each other. It might resemble playing ball: every time a person starts playing with a new partner, it takes time before progress is visible. First you need to adjust your force to your partner's, and this adjustment happens recursively; then the play can progress and improve. b) Orientation (prompts that may influence the participants, giving food for thought, a change in the flow of ideas): an inscription by the first correspondent evokes a change in how the second correspondent subsequently inscribes. The fourth letter is mostly of an orientational character and gives guidelines on what direction to take in the field of research under discussion ("if I were doing this, I would start out with…"); it refers to some papers to read and gives ideas and food for thought.
The cognitive domain of interaction is established from the very first stage of interaction between the two people, but during the orientation stage it is established in full. The two writers involved in the e-mail correspondence are coupling, trying to fit each other, to compensate for gaps in knowledge, and to cooperate successfully, because both are interested in doing so. c) Co-functions: an inscription by the first correspondent prompts explicit agreement or disagreement by the second one, and vice versa. Co-functions occur when an oriented organism becomes active to such a degree that it can orient the orienting one, so that we witness recursive orientation. The stage of co-functions arises when the activity can be called collaborative ("… let's try to work at it a bit collectively <…> we should gain from thinking through some of these things together").
As the famous proverbs say, "two heads are better than one" and "four eyes see better than two". It is commonly known that working as a group can enhance the effectiveness of decision making, just as it can enhance the effectiveness of problem solving. William Isaacs, in his book "Dialogue, the Art of Thinking Together", speaks about "collective intelligence" (CQ): together we are more aware and smarter than we are on our own (Isaacs, 1999).
Bohm also speaks of undivided or unbroken wholeness in flowing movement, of non-locality or entanglement. According to his ideas, thought and knowledge are limited, but the boundary can always be extended. Bohr understood it in terms of yin/yang, or what he called complementarity. Two organisms change so as to complement each other, like two pieces of a puzzle. If this complementarity is achieved, then from this unity something new is achieved; new ideas are generated which could not have been worked out without the interaction and the establishment of a consensual domain (Bohm, 1985). "We cannot study creativity by isolating individuals and their works from the social and historical milieu in which their actions are carried out…what we call creative is never the result of individual action alone" (Csikszentmihalyi, 1988).
This distributed learning is a spontaneous system that arises as order from chaos, since both writers cannot predict the emergence of this new system, as Maturana notes in discussing spontaneity versus purposefulness (Maturana, 2011). When the correspondents start co-writing, they cannot predict the outcome of their e-mail correspondence or what kind of learning will emerge, in other words, what will be learned as a result of this co-writing. The learning domain is chaotic because an observer is not able to predict its arising. Later on, the observer realizes this new learning domain of interactions, and the chaos is chaos no more: it turns into a system.
Starting the e-mail correspondence, people feel fragmented, but in conversing they come to feel "wholeness" and rise to a new level of understanding of the problem they discuss. This type of learning is not one person orienting and informing the other; it is generative and creative learning, in which they arrive at an idea that could not have been reached without their interaction.
Further, we can say that without this co-ontogeny between the correspondents, certain behaviors would not have arisen. Within the co-ontogeny, the behaviors of the correspondents become consensual; that is, they have created a consensus about the coordination of their behaviors.
The consensual domain, the juncture between two observers in coupling, is itself a higher-order domain of description. The notion 'consensual' means something like mutual and co-operational. Languaging is like a dance: mutual adaptation to one another's actions, orientation in conversation, cooperation. "Dialogue is about common participation in which we are not playing a game against each other but with each other" (Bohm, 1989).
For this kind of distributed learning, in terms of "thinking together", to make a breakthrough, communication should be conducted with certain ethics and human values: 1. an atmosphere of friendly relations, trust, and respect; 2. common ground and compatible skills of the people involved; 3. motivation, commitment, and dedication to the process, which cannot be underestimated; 4. readiness or desire for plenty of give and take, making a true dialogue a recursive process of interactions.
As observers, we can witness learning, or new ideas emerging concerning the issue discussed in the correspondence, from the sixth letter on, when the first writer returns to the example brought by the second writer and comes up with the idea of non-local learning as taking up habits together.
It is interesting to note that the writers begin to feel and observe the process of learning, mentioning it from the ninth letter on ("What I have learned about depends on my communication with you. For me, I also constantly draw upon my own feelings, previous experiences and especially current happenings to make meanings of what you have said. That is why I go back to our previous E-mails whenever a new thought emerges"). What happens is that the correspondents are reading, writing, and thinking "out of selves" together with each other.
6. Distributed Learning
Distributed learning in the process of co-writing can be considered the climax of consensual domain establishment. It occurs when two writers adapt to and complement each other. Knowledge arising, or distributed learning, is defined in terms of how the context of the first correspondent's behavior influences the second correspondent's behavior such that they come up with something that neither was likely to reach alone: finding a way to go on in one direction and creating "the third". The value lies in coming to "cooperate and understand the third"; rather than trying to understand the other, it is better to cooperate and understand the third. The climax of the consensual domain comes when similar understanding and cooperation are triggered and new knowledge emerges. Consensus is achieved thanks to cooperational integration, when the isolation of the I, or ego, is overcome and the correspondents are ready to move to a higher level of understanding of the issue and to create or generate new knowledge. Dialogue, or in our case shared inscriptional co-indexing, is a path to greater wisdom and learning. Bohm (1989) refined dialogue into a creative art: "Dialogue makes possible a flow of meaning in the whole group, out of which will emerge some new understanding" (Bohm, 1989). This understanding of dialogue is close to Bakhtin's idea that the participants of a dialogue create a new whole within a common meaning space. Learning is not the digestion of something. As Noë states, consciousness is not something that happens inside us; it is something we do or make or achieve. It can be compared with dancing. "The locus of consciousness is the dynamic life of the whole, environmentally plugged-in person or animal" (Noë, 2009). He proposes that human experience is a dance that unfolds in the world and with others.
Mutual coordination is extended to action in the form of improved performance. In the e-mail correspondence, the e-mail itself is not something one correspondent does to another person; it is something they do together. Together they achieve new ideas that neither party could have imagined before starting.
Conversing, the energy of our differences and similarities is channeled toward something that has never been created before. It brings us to a greater common sense and is thereby a means of accessing the intelligence and coordinated power of people. The intention of dialogue is to reach new understanding and, in doing so, to form a totally new basis from which to think and act (Isaacs, 1999). Dialogue not only raises the level of shared thinking; it impacts how people act and, in particular, how they act together. In dialogue, a transformation happens not only in the relationship between people but also in the very nature of consciousness. Isaacs speaks of a "generative dialogue" that "emerged as people let go of their positions and views <…> They found themselves attending simply to the flow of conversation, a flow that enveloped us and lifted us to a new level of shared understanding about dialogue" (Isaacs, 1999).
7. Conclusion
From a bio-socio-cognitive, or distributed, approach, e-mail correspondence can be viewed as a dynamic, interactive, dialogical, and distributed activity aimed at creating a consensual domain of linguistic interactions. The text of academic e-mails is not "language" fixed by a code, but affordances that allow us to generate meaning only in the process of interaction. What we perceive is not a text containing information but the relation between us and the text. Having analyzed a series of e-mails, three main stages were outlined: a system coupling with another system (two correspondents); a mutual frame of reference (shared inscriptional co-indexing, knowledge background, mutual issues of concern, the limits of each other); and mutual triggering of inscriptional sequences.
There are different types of interactions that lead to the emergence of a new cognitive state, such as contact setting, orientation, and co-functions. Contact setting and interaction (asking and answering questions, sharing knowledge in terms of ideas, bringing examples, dialogical co-action): an inscription produced by the first correspondent gives rise to an observable effect in an inscription by the second correspondent. Orientation (prompts that may influence the participants, giving food for thought, a change in the flow of ideas): an inscription by the first correspondent evokes a change in how the second correspondent subsequently inscribes. Co-functions occur when an oriented organism becomes active to such a degree that it can orient the orienting one, so that we witness recursive orientation. This distributed learning is a spontaneous system that arises as order from chaos, since both writers cannot predict the emergence of this new system.
Recognition that writing via the Internet is a distributed process across time, space, and minds may help in understanding the theory of cognition and learning in general. Writing via the Internet environment differs from traditional writing and needs a new methodology for its analysis. Writing is transforming with the expansion of the media environment, acquiring some of the dynamics of talk. Electronic writing, being less emotional than speaking, is acquiring more emotionality through the use of emoticons. Although people communicate more and more in written form, the dynamics of their interactions have not yet been studied, challenging us to tackle this problem. The research can be expanded to examine other forms of co-writing and conversing in the Internet writing environment, such as Facebook, Twitter, various blogs, etc.
"Education",
"Linguistics"
] |
SOME SUBMERSIONS OF CR-HYPERSURFACES OF KAEHLER-EINSTEIN MANIFOLD
The Riemannian submersions of a CR-hypersurface M of a Kaehler-Einstein manifold M̃ are studied. If M is an extrinsic CR-hypersurface of M̃, then it is shown that the base space of the submersion is also a Kaehler-Einstein manifold.
1. Introduction.
The study of Riemannian submersions π : M → B was initiated by O'Neill [14] and Gray [9]. This theory has been much developed over the last thirty-five years; Besse's book [3, Chapter 9] is a reference work. Bejancu introduced a remarkable class of submanifolds of a Kaehler manifold known as CR-submanifolds (see [1, 2]). On a CR-submanifold there are two complementary distributions D and D⊥, such that D is J-invariant and D⊥ is J-anti-invariant with respect to the complex structure J of the Kaehler manifold. The integrability of the anti-invariant distribution D⊥ was proved by Blair and Chen [4].
Recently, Kobayashi [10] considered the similarity between the total space of a Riemannian submersion and a CR-submanifold of a Kaehler manifold in terms of the distributions. He studied the case of generic CR-submanifolds in a Kaehler manifold and proved that the base space is a Kaehler manifold.
In Section 3, we extend the result of Kobayashi to the general case of a CR-submanifold.
In Section 4, we study a Riemannian submersion from an extrinsic hypersphere M of a Kaehler-Einstein manifold M̃ onto an almost-Hermitian manifold B. In this case, we prove that the base manifold is a Kaehler-Einstein manifold. If M̃ is C^{n+1}, a standard example is the Hopf fibration S^{2n+1} → CP^n equipped with the canonical metrics.
For the basic formulas of Riemannian geometry, we use [11,12].
2. Preliminaries.
Let M̃ be a complex m-dimensional Kaehler manifold with complex structure J and Hermitian metric ⟨•, •⟩. Bejancu [2] introduced the concept of a CR-submanifold of M̃ as follows: a real Riemannian manifold M, isometrically immersed in the Kaehler manifold M̃, is called a CR-submanifold of M̃ if there exists on M a differentiable holomorphic distribution D (i.e., JD_x = D_x) whose orthogonal complementary distribution D⊥ is anti-invariant (i.e., JD⊥_x ⊆ T⊥_x M), where T⊥_x M is the normal space to M at x, for any x ∈ M. It is easily seen that each real orientable hypersurface of M̃ is a CR-submanifold. The Riemannian metric induced on M will be denoted by the same symbol ⟨•, •⟩.
Let ∇̃ (resp., ∇) be the operator of covariant differentiation with respect to the Levi-Civita connection on M̃ (resp., M). The second fundamental form B is given by the Gauss formula

∇̃_E F = ∇_E F + B(E, F), (2.1)

for all E, F ∈ Γ(TM), where Γ(TM) is the space of differentiable vector fields on M. We denote everywhere by Γ(τ) the space of differentiable sections of a vector bundle τ.
For a normal vector field N, that is, N ∈ Γ(T⊥M), we write the Weingarten formula

∇̃_E N = −L_N E + ∇⊥_E N,

where −L_N E (resp., ∇⊥_E N) denotes the tangential (resp., normal) component of ∇̃_E N.
Let µ be the orthogonal complementary vector bundle of JD⊥ in T⊥M.

Definition 2.1 (Kobayashi [10]). Let M be a CR-submanifold of a Kaehler manifold M̃. A submersion from the CR-submanifold M onto an almost-Hermitian manifold M′ is a Riemannian submersion π : M → M′ with the following conditions: (i) D⊥ is the kernel of π∗; (ii) π∗ : D_x → T_{π(x)}M′ is a complex isometry for every x ∈ M.

This definition was given by Kobayashi for the case where µ is a null subbundle of T⊥M (see [10]). If JD⊥_x = T⊥_x M for any x ∈ M, we say that M is a generic CR-submanifold of M̃ (Yano and Kon [15]). For example, any real orientable hypersurface of M̃ is a generic CR-submanifold of M̃.
The vertical distribution of a Riemannian submersion is an integrable distribution. In our case, the vertical distribution is D⊥, which is integrable according to a theorem of Blair and Chen [4].
The sections of D⊥ (resp., D) are called the vertical vector fields (resp., the horizontal vector fields) of the Riemannian submersion π : M → M′. The letters U, V, W, and W′ will always denote vertical vector fields, and the letters X, Y, Z, and Z′ denote horizontal vector fields. For any E ∈ Γ(TM), vE and hE denote the vertical and horizontal components of E, respectively. A horizontal vector field X on M is said to be basic if X is π-related to a vector field X′ on M′.
It is easy to see that every vector field X′ on M′ has a unique horizontal lift X to M, and X is basic.
Conversely, let X be a horizontal vector field and suppose that ⟨X, Y⟩_x = ⟨X, Y⟩_y for all basic vector fields Y on M, for all x, y ∈ π⁻¹(x′), and for all x′ ∈ M′. Then the vector field X is basic. We have the following lemma of O'Neill (see [8, 14]).

Lemma 2.2. Let X and Y be basic vector fields on M, π-related to X′ and Y′ on M′. Then: (i) ⟨X, Y⟩ = ⟨X′, Y′⟩ ∘ π; (ii) h[X, Y] is the basic vector field π-related to [X′, Y′]; (iii) h∇_X Y is the basic vector field π-related to ∇′_{X′} Y′, where ∇′ is the Levi-Civita connection on M′.

We recall that a Riemannian submersion π : (M, g) → (M′, g′) determines the fundamental tensor fields T and A by the formulas

T_E F = h∇_{vE}(vF) + v∇_{vE}(hF), A_E F = v∇_{hE}(hF) + h∇_{hE}(vF),

for all E, F ∈ Γ(TM) (cf. O'Neill [14] and Besse [3]).
It is easy to prove that T and A satisfy

T_U V = T_V U, (2.4)
A_X Y = −A_Y X = (1/2) v[X, Y], (2.5)

for any U, V ∈ Γ(D⊥) and X, Y ∈ Γ(D). Formula (2.4) means that the restriction of T to the integrable distribution D⊥ is the second fundamental form of the fibre submanifolds in M, and (2.5) measures the integrability of the distribution D.
3. Kaehler structure on the base space M′.
From (2.1), we have

∇̃_X Y = ∇_X Y + B(X, Y)

for any X, Y ∈ Γ(D).
Here, we denote by h and v (resp., h′ and v′) the canonical projections on D and D⊥ (resp., µ and JD⊥). Define a tensor field C on M as the vertical component v(∇_X Y) of ∇_X Y (cf. Kobayashi [10]). The tensor field C is known to be skew-symmetric, so that

C(X, Y) = (1/2) v[X, Y]

for all X, Y ∈ Γ(D).
Note that the tensor field C is the restriction of A to Γ(D) × Γ(D).
From Definition 2.1 and Lemma 2.2, we obtain that Jh∇_X Y (resp., h∇_X JY) is a basic vector field corresponding to J′∇′_{X′} Y′ (resp., ∇′_{X′} J′Y′) for any basic vector fields X and Y on M.
On the Kaehler manifold M̃, we have ∇̃_X (JY) = J ∇̃_X Y for all vector fields X and Y.

Proposition 3.1. Let M be a CR-submanifold of a Kaehler manifold M̃, and let π : M → M′ be a submersion in the sense of Definition 2.1. Then M′ is a Kaehler manifold.

Proof. From Lemma 2.2 and (3.4), we obtain that ∇′_{X′} J′Y′ = J′ ∇′_{X′} Y′, so that M′ is a Kaehler manifold.

Remark 3.3. Proposition 3.1 is proved for generic CR-submanifolds of M̃ (i.e., µ = 0) in [10].
4. Riemannian submersions from extrinsic hyperspheres of Kaehler-Einstein manifolds.
We recall that a totally umbilical submanifold M of a Riemannian manifold M̃ is a submanifold whose first and second fundamental forms are proportional.
Extrinsic hyperspheres are defined to be totally umbilical hypersurfaces having a nonzero parallel mean-curvature vector field (cf. Nomizu and Yano [13]). Many of the basic results concerning extrinsic spheres in Riemannian and Kaehlerian geometry were obtained by Chen [5, 6, 7].
Let M be an orientable hypersurface in a Kaehler manifold M̃. Then M is an extrinsic hypersphere of M̃ if it satisfies

B(E, F) = ⟨E, F⟩H, ∇⊥_E H = 0,

for any vector fields E and F on M. Here, H denotes the mean-curvature vector field of M. If we put k = ‖H‖ (where the norm ‖•‖ is taken with respect to the scalar product induced on every tangent space to M̃), then k is a nonzero constant function on the extrinsic hypersphere M. We denote by N the global unit normal vector field to M. Then ξ = −JN is a global unit vector field on M such that N = Jξ. Let D_p be the maximal J-invariant subspace of the tangent space T_p M for every p ∈ M. We see that M is a CR-hypersurface of M̃ such that TM = D ⊕ D⊥, where D⊥ is the one-dimensional anti-invariant distribution generated by the vector field ξ on M.
The anti-invariant distribution D⊥ is integrable, and its leaves are totally geodesic in M (but not in M̃).
This is an easy consequence of the Gauss and Weingarten formulas applied to the leaves of D⊥ in M. It means that O'Neill's tensor T vanishes on the fibres of the Riemannian submersion π : M → B, as the following sketch indicates.
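The short verification below is ours, not the paper's; it uses only the fact that the fibres are the one-dimensional leaves of D⊥ spanned by ξ:

    % Sketch: T vanishes on the fibres of \pi : M \to B.
    % On vertical vectors, T is determined by T_\xi \xi = h(\nabla_\xi \xi),
    % since the fibres are the leaves of D^\perp = \operatorname{span}\{\xi\}.
    % Totally geodesic leaves mean \nabla_\xi \xi stays tangent to the leaf,
    % i.e. vertical, hence
    \[
      T_{\xi}\xi \;=\; h\!\left(\nabla_{\xi}\xi\right) \;=\; 0 ,
    \]
    % and by tensoriality T_U V = 0 for all vertical U, V: the submersion
    % has totally geodesic fibres.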
The main result of this section is the following theorem.

Theorem 4.1. Let M be an orientable extrinsic hypersphere of a Kaehler-Einstein manifold M̃. If π : M → B is a CR-submersion of M onto an almost-Hermitian manifold B, then B is a Kaehler-Einstein manifold.

To prove Theorem 4.1, we need several lemmas.
Lemma 4.2. Under the assumptions of Theorem 4.1,

∇_X ξ = kJX

for any horizontal vector field X on M.
Proof. From Gauss's formula (2.1) and the umbilicality of M, we get ∇̃_X ξ = ∇_X ξ for any horizontal vector field X on M, since B(X, ξ) = ⟨X, ξ⟩H = 0. On the other hand, M̃ is a Kaehler manifold, so that ∇̃ commutes with J; hence ∇̃_X ξ = −∇̃_X (JN) = −J ∇̃_X N = kJX. Consequently, ∇_X ξ = kJX.

Lemma 4.3. Under the assumptions of Theorem 4.1, A_X Y = k⟨X, JY⟩ξ for any horizontal vector fields X and Y on M.
Proof. The vector field A_X Y is vertical, hence A_X Y = ⟨A_X Y, ξ⟩ξ = ⟨∇_X Y, ξ⟩ξ = −⟨Y, ∇_X ξ⟩ξ. Then, by Lemma 4.2, A_X Y = −k⟨Y, JX⟩ξ = k⟨X, JY⟩ξ.

Lemma 4.4. Under the assumptions of Theorem 4.1,

R̃(X, Y, Z, W) = R(X, Y, Z, W) − k²(⟨Y, Z⟩⟨X, W⟩ − ⟨X, Z⟩⟨Y, W⟩), (4.9)

where R̃ and R are the curvature tensors of M̃ and M, respectively.
Proof. We have the Gauss equation

R̃(X, Y, Z, W) = R(X, Y, Z, W) − ⟨B(Y, Z), B(X, W)⟩ + ⟨B(X, Z), B(Y, W)⟩.

Using the umbilicality condition B(E, F) = ⟨E, F⟩H, we get (4.9).
Lemma 4.5. For any horizontal vector fields X and Y on M, (4.11) holds.

Proof. For a Riemannian submersion with totally geodesic fibres, formula (4.12) is known. On the other hand, the first term on the right-hand side is skew-symmetric with respect to the vertical vector fields V and U. From (4.12) and (4.9), we obtain (4.11).
Proof of Theorem 4.1. For horizontal vector fields X, Y, Z, and W on M, we have the equation of O'Neill (see [3,14]). By (4.9) and (4.11), we get the formula (4.14), which connects the curvature of the Kaehler manifold M̄ to the curvature of B. Let (e₁, ..., e_p; Je₁, ..., Je_p) be a local J-frame of basic vector fields for the horizontal distribution D. Then, (e′₁, ..., e′_p; J′e′₁, ..., J′e′_p), with π∗e_i = e′_i, is a local J′-frame on the Kaehler manifold B.
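For reference, here is a sketch of the horizontal curvature equation of O'Neill being invoked, in one common sign convention (conventions differ between [3] and [14], so signs may need adjusting); X, Y, Z, W are horizontal and primes denote the corresponding objects on the base:

\[
g\bigl(R(X,Y)Z,\,W\bigr)
= g'\bigl(R'(X',Y')Z',\,W'\bigr)\circ\pi
+ 2\,g(A_XY,\,A_ZW) - g(A_YZ,\,A_XW) - g(A_XZ,\,A_YW).
\]

With sectional curvature K(X, Y) = g(R(X, Y)Y, X), this yields the familiar K = K′ ∘ π − 3‖A_X Y‖² for orthonormal horizontal X, Y, so horizontal sectional curvatures never exceed those of the base.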
Using the above lemmas, from (4.14) and by a straightforward calculation, we conclude that B is a Kaehler-Einstein manifold if M̄ is a Kaehler-Einstein manifold.
Corollary 4.6. Let M̄ be a complex space form and M an orientable CR-hypersurface of M̄. Then, the base space of the submersion π : M → B is also a complex space form.
Proof. The corollary follows by a straightforward calculation making use of (4.14).
Example 4.7. Let S^{2n+1} be the standard hypersphere in C^{n+1}. Then, S^{2n+1} is an extrinsic hypersphere in C^{n+1}, and we have the Hopf fibration π : S^{2n+1} → CP^n equipped with the canonical metrics.
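To make Example 4.7 concrete, here is a minimal worked sketch, assuming the round unit-sphere metric and the Fubini-Study metric normalized so that π is a Riemannian submersion:

\[
\pi : S^{2n+1}\subset\mathbb{C}^{n+1}\longrightarrow\mathbb{CP}^{\,n},
\qquad \pi(z_0,\dots,z_n)=[z_0:\cdots:z_n].
\]

With outward unit normal N(z) = z, the second fundamental form of S^{2n+1} in C^{n+1} is σ(E, F) = −g(E, F)N, so the mean-curvature vector H = −N is parallel with k = ‖H‖ = 1; that is, S^{2n+1} is an extrinsic hypersphere. The vector field ξ = −JN = −iz spans the vertical distribution D⊥ (the Hopf fibre direction), and since the flat C^{n+1} is trivially Kaehler-Einstein, Theorem 4.1 recovers the fact that CP^n with the Fubini-Study metric is Kaehler-Einstein.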
Voltammetric and amperometric sensors for determination of epinephrine: A short review (2013-2017)
The present review focuses on voltammetric and amperometric methods applied for the determination of epinephrine (EP) in the last five years (2013-2017). The occurrence, role and biological importance of EP, as well as non-electrochemical methods for its assessment, are firstly reviewed. The electrochemical behavior of EP is then illustrated, followed by a description of the voltammetric and amperometric methods for EP content estimation in various media. Different methods for the development of electrochemical sensors are reviewed, starting from unmodified electrodes to different composites incorporating carbon nanotubes, ionic liquids or various mediators. From this perspective, the interaction between the functional groups of the sensor material and the analyte molecule is discussed, as it is essential for the analytical characteristics obtained. The analytical performances of the voltammetric and amperometric chemical and biochemical sensors (linear range of analytical response, sensitivity, precision, stability, response time, etc.) are highlighted. Numerous applications of EP electrochemical sensors in fields like pharmaceutical or clinical analysis, where EP represents a key analyte, are also presented.
Introduction
Epinephrine (EP), also called adrenaline, is an important catecholamine neurotransmitter in the mammalian central nervous system [1]. Many life phenomena are related to the concentration of EP in blood. It also serves as a chemical mediator conveying nerve impulses to effector organs. Medically, EP has been used as a common emergency healthcare medicine [2,3]. EP is used to stimulate the heartbeat and to treat emphysema, bronchitis, bronchial asthma and other allergic conditions, as well as the eye disease glaucoma. Research on EP therefore has important significance for medicine and the life sciences [4]. EP is synthesized naturally in the body from L-tyrosine by the action of different enzymes. Almost 50 % of the secreted hormone appears in urine in free and conjugated form, and 3 % as vanillylmandelic acid (VMA), the most abundant metabolite in urine [5]. Only small amounts of free EP are excreted. EP is an electroactive compound and can be determined by electrochemical methods [6][7][8][9][10][11]. However, the electrochemical detection of EP faces two challenges. One is its low concentration level, while the other, often encountered, is the strong interference arising from electroactive compounds like norepinephrine (NE), dopamine (DA), ascorbic acid (AA) and uric acid (UA) [6]. To resolve these problems, one of the most common routes is the use of a modified electrode to improve the measuring sensitivity for EP and minimize the interference of AA and UA in EP determination [7][8][9][10][11][12]. Although many modified electrodes have been demonstrated to be effective for detecting EP, there is still a need to develop new methods with high efficiency and convenience for the detection of EP [13,14].
Injectable EP solutions used by emergency medical personnel and hospitals are principally degraded via oxidation.This degradation can be accelerated by heavy metals, ultraviolet light, exposure to oxygen, and increased pH.Typical preventive measures for hindering oxidative degradation use light-resistant containers, buffered solutions, and/or antioxidants [15][16][17][18][19][20].Due to the crucial role of EP in biochemistry and industrial applications, the determination of EP still presents research interest.Quick monitoring of EP levels during production and quality control stages is important [21][22][23][24].In this review, we investigate the latest progress in modification of electrodes and its improvement in detection of EP.
Voltammetric and amperometric sensors
Voltammetry is a potentiodynamic technique, based on measuring the current arising from oxidation or reduction reactions at the working electrode surface, when a controlled potential variation is imposed [39,40].Amperometry is based on the application of a constant potential to a working electrode, and the subsequent measurement of the current generated by the oxidation/reduction of an electroactive analyte [41][42][43].
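As an illustrative sketch of the difference between the two excitation modes (the parameter values below are arbitrary and not tied to any study reviewed here):

```python
import numpy as np

# Potential programs for the two techniques described above.
t = np.linspace(0.0, 20.0, 2001)       # time, s

# Cyclic voltammetry: triangular sweep between E_min and E_max at scan rate v.
E_min, E_max, v = -0.2, 0.8, 0.1       # V, V, V/s
period = 2.0 * (E_max - E_min) / v     # one full forward + reverse cycle, s
phase = (t % period) / period          # 0..1 within each cycle
E_cv = E_min + (E_max - E_min) * (1.0 - np.abs(2.0 * phase - 1.0))

# Amperometry: the potential is held constant and current is sampled vs. time.
E_amp = np.full_like(t, 0.5)           # V, fixed detection potential

print(f"CV sweeps {E_min} V -> {E_max} V -> {E_min} V every {period:.0f} s; "
      f"amperometry holds {E_amp[0]:.1f} V.")
```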
1. Voltammetry/amperometry at bare/unmodified electrodes
Bare electrodes without functionalization represent an interesting alternative, in particular when high sensitivity is not required. This approach relies on a simpler system, resulting in reduced costs for both production and use, and in long-term stability. An electrochemical biosensor for the sensitive detection of EP was introduced by Li et al. [44]. Their results showed that the magnitude of the oxidation peak current of EP is related to many factors, including the pH value of the supporting electrolyte in the working-electrode electrolytic cell, the acidity of the supporting electrolyte in the auxiliary-electrode electrolytic cell, the distribution coefficients of the different EP species, the surface-charge properties of the electrode and the molecular configuration of the electroactive component. In the experiments, the pH of the PBS buffer solution was kept at 7.0 in the working-electrode electrolytic cell and the HCl solution was maintained at 1.0 mol L⁻¹ in the auxiliary-electrode electrolytic cell. Standard solutions with different amounts of EP were added to the working-electrode electrolytic cell and the oxidation peak current of EP was recorded by cyclic voltammetry (CV). A linear range of 2.0×10⁻⁷-1.0×10⁻⁴ mol L⁻¹, with a detection limit of 6.2×10⁻⁸ mol L⁻¹, was obtained. Satisfactory results were achieved for the determination of EP in injections. The recovery of the standard addition was in the range of 95.0-102.0 %.
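As a sketch of how detection limits like the one above are commonly estimated from a calibration line (LOD = 3·s_blank/slope; the numbers below are synthetic, not the data of Li et al.):

```python
import numpy as np

# Hypothetical calibration: EP concentration (mol/L) vs. peak current (µA).
conc = np.array([2.0e-7, 1.0e-6, 5.0e-6, 1.0e-5, 5.0e-5, 1.0e-4])
i_peak = np.array([0.031, 0.150, 0.740, 1.490, 7.400, 14.80])

slope, intercept = np.polyfit(conc, i_peak, 1)  # sensitivity, µA·L/mol
s_blank = 0.003                                 # assumed std. dev. of blank, µA
lod = 3.0 * s_blank / slope                     # mol/L

print(f"sensitivity = {slope:.3g} µA L/mol, LOD = {lod:.1g} mol/L")
```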
2. Voltammetry/amperometry at modified electrodes
The need to diminish overpotentials and minimize fouling has motivated electrode modification, with a view to increasing sensitivity and obtaining more prominent peak separation. These properties are required mainly in complex media such as biological samples, which are particularly prone to interferences and where EP coexists with other electroactive species.
2. 1. Chemically modified electrodes
Numerous electrochemical methods have been developed to determine EP on the basis of its electroactive nature. Most of these methods, however, face two major problems in EP determination, which reduce the accuracy and sensitivity of the results. The first is that in a natural environment EP often coexists with high concentrations of electroactive biomolecules like UA, DA, NE and AA, which interfere with each other. The second problem is that the product of EP oxidation (epinephrine chrome) can easily transform into polymers, which block its further oxidation on the electrode surface. Hence, despite considerable investigation, the preparation of a sensitive sensor with satisfactory selectivity and a low detection limit is still of great interest.
Development and application of an L-glutamic acid functionalized graphene nanocomposite modified GCE for the determination of EP were reported by Kang et al. [46]. A linear relationship between EP concentration and the current response measured by the DPV method was obtained in the range of 1×10⁻⁷ to 1×10⁻³ mol L⁻¹, with a limit of detection of 3×10⁻⁸ mol L⁻¹. The modified electrode was employed to determine EP in urine with satisfactory results.
Zhang and Wang [47] have described a β-mercaptoethanol self-assembled monolayer modified electrode, fabricated on bare gold (ME/Au SAMs). The films accelerated electron transfer as mediators and showed excellent electrocatalytic activity for the oxidation of EP. The electrochemical behavior of EP at ME/Au SAMs was studied by CV and the electrocatalytic mechanism explored. At a potential of −0.044 V (vs. SCE) in aqueous buffer (pH 4.0), the first oxidation wave was observed for EP at the modified electrode (electrochemical oxidation of leucoepinephrine to epinephrine chrome). In contrast, the first oxidation wave was not observed for NE or DA under the same conditions.
Fabrication of a modified GCE for the determination of EP in aqueous solutions was reported by Ahmadian Yazdely et al. [48]. Their DPV results exhibited a linear dynamic range from 5.0×10⁻⁸ to 1.1×10⁻⁵ mol L⁻¹ and a detection limit of 2.3×10⁻⁸ mol L⁻¹ for EP. In addition, the analytical performance of the modified electrode for the quantification of EP in real samples was evaluated.
Sharath Shankar and Kumara Swamy [49] investigated a tetradecyltrimethylammonium bromide (TTAB) surfactant immobilized on a CPE, proposed for the simultaneous investigation and determination of EP and serotonin (5-HT) in the presence of AA. Voltammetric techniques in phosphate buffer solution (PBS, pH 7.4) were applied. The anodic peak of EP was observed at 198 mV (vs. Ag/AgCl/KCl) at a scan rate of 50 mV s⁻¹. The interference studies showed that the modified electrode exhibits excellent selectivity for the determination of EP in the presence of a large excess of AA and 5-HT. The differences between the oxidation peak potentials for EP-AA and EP-5-HT were about 215 and 165 mV, respectively. The detection limit of the modified electrode, obtained by the DPV technique, was found to be 0.12 µmol L⁻¹. The developed method was applied to the determination of EP in synthetic samples with satisfactory results.
Jahanbakhshi [50] reported a synthesis of mesoporous carbon foam (MCF) with particular properties, obtained by a simple, template-free procedure. The synthesized MCF was characterized by transmission electron microscopy, field-emission scanning electron microscopy, X-ray diffraction and BET surface-area techniques. The porous MCF, with pore diameters of 5 to 10 nm, provided the extensive specific surface area used to modify the electrode surface. The obtained MCF was dispersed in Salep solution to prepare a stable suspension (S-MCF). The resultant composite was cast on the surface of a GCE to assemble the S-MCF modified GCE (S-MCF/GCE). The CV method was used to study the electrochemical behavior, and the determination of EP was conducted by the DPV method in the presence of UA. Under optimized conditions, the presented sensor was able to detect EP in the concentration range 0.1-12 µmol L⁻¹, with a limit of detection of 40 nmol L⁻¹. The presented methodology possesses reliable reproducibility, repeatability and stability in biological samples.
A sensitive and selective determination method for EP was developed by Chandrashekar et al. [51], based on immobilization of TX-100 surfactant on a bare CPE. The catalytic activity of the modified electrode for the oxidation of EP was determined using CVs recorded at different scan rates. The effect of solution pH on the voltammetric response of EP was examined using phosphate buffer solution. The TX-100/CPE demonstrated good performance for the determination of EP in the concentration range from 10 to 50 µmol L⁻¹, with a detection limit of 1×10⁻⁶ mol L⁻¹. The method was applied to the determination of EP in a human serum sample, and the sensor was proven to be rapid, with excellent selectivity and repeatability.
In the research of Dehghan Tezerjani et al. [52], an electrochemical sensor was constructed for the determination of EP. The sensor was based on a CPE modified with graphene oxide (GO) and 2-(5-ethyl-2,4-dihydroxyphenyl)-5,7-dimethyl-4H-pyrido(2,3-d)(1,3)thiazine-4-one (EDDPT) as modifiers. The modified electrode was applied as an electrochemical sensor for the oxidation of EP. Under optimum conditions, the overpotential for EP oxidation decreased by about 279 mV at the modified CPE compared with the non-modified CPE. The designed electrochemical sensor was also applied to determine EP in a drug sample and for the simultaneous determination of EP, acetaminophen (ACT) and DA in human serum solutions.
2. 2. Modified electrodes with polymer
In recent years, electrodes modified with conductive or redox polymers have been widely used owing to their excellent and unique physical and chemical properties. This kind of modification is established as one of the best approaches for the selective determination of some biomolecules, because the surface characteristics of the electrode can be modulated by introducing various chemicals with reactive groups. Polymer-modified electrodes show broad potential windows and can catalyze electrochemical reactions that otherwise suffer from high overpotentials and poor selectivity.
Electropolymerization of fuchsine acid (FA) was studied by Taei et al. [53] on the surface of a GCE in different electrolyte media. A novel Au-nanoparticle poly(FA) film modified GCE (poly(FA)/AuNP/GCE) was then constructed for the simultaneous determination of AA, EP and UA. For the poly(FA)/AuNP/GCE, the oxidation peak potentials of AA-EP and EP-UA were separated by 150 mV and 180 mV, respectively, while no separation was observed at the bare GCE. DPV results exhibited a linear dynamic range of 0.5-792.7 µmol L⁻¹ for EP, with a detection limit of 0.01 µmol L⁻¹. The diffusion coefficient for the oxidation reaction of EP on the AuNP/poly(FA) film coated GC electrode was calculated as 2.6 (±0.10)×10⁻⁵ cm² s⁻¹.
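Diffusion coefficients such as the value above are typically extracted from the scan-rate dependence of the CV peak current via the Randles-Sevcik equation. Here is a minimal sketch assuming a reversible, diffusion-controlled oxidation at 25 °C; the values of n, A, C, v and i_p below are hypothetical, and the authors' exact procedure is not detailed here:

```python
# Randles-Sevcik: i_p = 2.69e5 * n**1.5 * A * C * (D * v) ** 0.5
# (i_p in A, A in cm^2, C in mol/cm^3, D in cm^2/s, v in V/s, at 25 °C)
n = 2          # electrons in the EP oxidation (assumed)
A = 0.07       # electrode area, cm^2 (assumed)
C = 1.0e-7     # bulk EP concentration, mol/cm^3 (= 1.0e-4 mol/L)
v = 0.05       # scan rate, V/s
i_p = 6.0e-6   # measured anodic peak current, A (hypothetical)

D = (i_p / (2.69e5 * n ** 1.5 * A * C)) ** 2 / v
print(f"D ≈ {D:.2g} cm²/s")   # ≈ 2.5e-5 with these inputs
```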
Li and Wang [54] have investigated an electrochemical sensor based on the poly(guanine) (PGA) modified GCE that was fabricated by electropolymerization of guanine on the bare GCE surface.This modified electrode exhibited good electrocatalytic property towards the oxidation of EP and UA in 0.1 mol L -1 PBS (pH 4.0), seen as enhanced peak currents and well defined peak separations.Under optimum reaction conditions, oxidation peak currents of EP and UA were proportional to their concentrations in the range of 1.0×10 -5 to 1.0×10 -3 mol L -1 and detection limit of 1.8×10 -6 mol L -1 was determined for both compounds.Finally, this method was efficiently used for the determination of EP in EP injections.
Kocak and Dursun [55] used a modified electrode that was fabricated by overoxidation of polymer film after electropolymerization of p-aminophenol on a bare GCE.Higher catalytic activity was observed for electrocatalytic oxidation of AA, EP, and UA in PBS (pH 7.4) at the overoxidized poly(p-aminophenol) film modified GCE (Ox-PAP/GCE), due to enhanced peak current and well defined peak separations compared to both bare GCE and poly (p-aminophenol) film modified GCE (PAP/GCE).
Taei and Jamshidi [57] introduced a polymerized film of Adizol Black B (ABB) on the surface of a GCE for the simultaneous determination of AA, EP and UA. This new modified electrode presented excellent electrocatalytic activity towards the oxidation of AA, EP and UA by the DPV method. The separations of the oxidation peak potentials for AA-EP and EP-UA were about 180 and 130 mV, respectively. The diffusion coefficient for the oxidation reaction of EP at the poly(ABB) film coated GCE was calculated as 1.54 (±0.10)×10⁻⁴ cm² s⁻¹.
Ma et al. [58] demonstrated an electrochemical sensor based on a silver-doped poly-L-cysteine film (Ag-PLC) fabricated for the simultaneous determination of DA, EP and UA in the presence of AA. Although the voltammetric signals of DA and EP were resolved at the bare GC electrode, the signals of DA and UA were not resolved in a mixture. The Ag-PLC modified electrode, however, not only separates the voltammetric signals of DA, EP and UA, with potential differences of 390 mV between DA and EP in the cathodic peak potential and 135 mV between UA and (DA+EP) in the anodic peak potential, but also shows higher electrocatalytic activity towards DA, UA and EP in the presence of a high concentration of AA. For EP, the linear range was determined as 5.00×10⁻⁶ to 1.10×10⁻⁴ mol L⁻¹. The practical application of this modified electrode was demonstrated by determining the concentrations of DA, UA and EP in human urine samples.
Li and Sun [59] introduced a novel palladium-doped poly(L-arginine) modified electrode (Pd-PLA/GCE), fabricated by electrochemical immobilization of palladium-doped poly(L-arginine) on a GCE. This modified electrode was used for the determination of EP by the CV method. The method was successfully applied to the determination of EP in injections with satisfactory results.
A simple and sensitive poly(L-aspartic acid)/electrochemically reduced graphene oxide modified GCE, poly(L-Asp)/ERGO/GCE, has been constructed by electrochemical reduction of GO drop-coated on the GCE within 2 mmol L⁻¹ L-aspartic acid in PBS (pH 6). As suggested by Mekassa et al. [60], this procedure gives rise to in situ polymerization of L-aspartic acid on the ERGO. A significant enhancement of the peak current response of EP was observed, accompanied by a negative shift of the peak potential at the composite modified electrode compared to the bare electrode. Real sample analysis was carried out on the pharmaceutical formulation of EP hydrochloride injection, which revealed good recovery results of 94-109 %.
According to Vieira da Silva [61], the polymerization of ferulic acid, forming poly(ferulic acid) on a MWCNT-modified GCE, was performed and the modified platform was applied for the simultaneous determination of NADH, EP and DA. CV and chronoamperometric (CA) methods were employed to investigate the electrocatalytic oxidation of NADH, EP and DA on the modified electrode in aqueous solutions. The obtained analytical curve for EP showed a linear range of 73-1406 µmol L⁻¹. The detection limit was 22.2 µmol L⁻¹ for EP.
Poly(ionic liquids) (PILs) have been applied as linkers between Au nanoparticles and polypyrrole nanotubes (PPyNTs) for the synthesis of Au/PILs/PPyNTs hybrids. As reported by Mao et al. [62], due to the presence of PILs, a high density of well-dispersed AuNPs was deposited on the surface of PILs/PPyNTs by anion exchange with the Au precursor and in situ reduction of the metal ions. The catalytic oxidation peak current obtained by the DPV method increased linearly with increasing EP concentration in the range of 35-960 µmol L⁻¹, with a detection limit of 298.9 nmol L⁻¹ according to the signal-to-noise criterion (S/N = 3). These results suggested that this modified electrode shows excellent electrocatalytic activity towards this physiologically important hormone.
2. 3. Modified electrodes with carbon nanotubes
Carbon nanotubes (CNTs) have attracted much attention in the physical, chemical and materials science fields due to their unique electrical conductivity, chemical stability, and high mechanical strength and modulus. The subtle electronic properties of carbon nanotubes suggest that they are able to promote electron transfer when used as the electrode material in electrochemical reactions. These properties provide a new route for electrode surface modification in designing new electrochemical sensors and novel electrocatalytic materials.
In the research performed by Apetrei [63], a biosensor comprising tyrosinase immobilized on a SWCNT-modified GCE was developed for the determination of EP. Tyrosinase maintained high bioactivity on this nanomaterial, catalyzing the oxidation of EP to EP quinone, which was electrochemically reduced (−0.07 V vs. Ag/AgCl) on the biosensor surface. Under optimum conditions, the biosensor showed a linear response in the range of 10-110 µmol L⁻¹ and a limit of detection calculated as 2.54 µmol L⁻¹, with a correlation coefficient of 0.977 for EP. The repeatability, expressed as the relative standard deviation for five consecutive determinations of a 10⁻⁵ mol L⁻¹ EP solution, was 3.4 %.
Valentini et al. [64] used oxidized single-wall carbon nanohorns (o-SWCNHs) for the first time, in order to assemble a chemically modified screen-printed electrode (SPE) that is selective towards the electrochemical detection of EP in the presence of serotonin (5-HT), DA, NE, AA, ACT and UA. The EP neurotransmitter was detected by DPV in a wide linear range of concentrations (2-2500 µmol L⁻¹) with high sensitivity, very good reproducibility (RSD ranging from 2 to 10 % for different SPEs), a short response time for each measurement (only 2 s) and a low detection limit (LOD = 0.1 µmol L⁻¹).
A simple electrochemical sensor for EP has been developed by Ghica and Brett [65]. They modified a carbon film electrode (CFE) with MWCNTs in a chitosan matrix. Under optimum conditions (pH 7.0), the MWCNT/CFE electrode showed significant electrocatalytic oxidation of EP, with a decrease of the overpotential by about 200 mV and an 11-fold increase of the peak current compared to the unmodified CFE. The sensor exhibited excellent stability over a period of 6 months and was successfully applied to the analysis of injectable adrenaline solutions.
The electrochemical behavior of a multi-walled carbon nanotube paste electrode modified with 2-((7-(2,5-dihydroxybenzylideneamino)heptylimino)methyl)benzene-1,4-diol (DBHB) was studied by Mazloum Ardakani et al. [66]. The CV method was used to study the electrocatalytic mechanism of EP electrooxidation at the modified electrode. The catalytic rate constant and diffusion coefficient were obtained for the oxidation of EP. Using the DPV method, a highly selective and simultaneous determination of EP, acetaminophen and folic acid was obtained at the modified electrode used as an electrochemical sensor.
Wu et al. [67] reported a sensor for EP based on an ITO electrode modified with MWCNTs pre-coated with a polymerized ionic liquid (PIL-MWNTs). A chitosan film was electrodeposited on the ITO electrode in the presence of EP and the PIL-MWNTs. This film acts as an excellent recognition matrix due to its excellent film-forming ability and its many functional groups that favor hydrogen-bond formation with the target EP. The electrochemical response to EP was linear in the 0.2 µmol L⁻¹ to 0.67 mmol L⁻¹ concentration range, and the detection limit was as low as 60 nmol L⁻¹ (at S/N = 3).
Wang et al. [68] demonstrated a modified GCE covered with a layer of MWCNTs coated with hexadecyltrimethylammonium bromide (CTAB). The modified electrode showed excellent electrocatalytic properties for the redox reactions of EP and AA. In the presence of CTAB, the peak separation between EP and AA was broadened to 256 mV.
A graphite paste electrode (GPE) modified with 1-butyl-3-methylimidazolium hexafluorophosphate (BMIMPF6) and MWCNTs was prepared for the simultaneous voltammetric determination of EP and xanthine (XN) by Rajabi et al. [69]. The prepared electrode (BMIMPF6-MWCNT/GPE) showed excellent catalytic activity in the electrochemical oxidation of EP and XN, leading to remarkable enhancement of the corresponding peak currents and lowering of the peak potentials. The peak currents of linear sweep voltammograms increased linearly with EP concentration in the range of 0.30-60 µmol L⁻¹ in 0.1 mol L⁻¹ PBS (pH 7.0). The applicability of this modified electrode as a voltammetric biosensor was demonstrated by the simultaneous determination of EP and XN in human urine, human blood serum and ampoule samples.
In the study of Babaei et al. [70], the electrooxidation of EP, ACT and mefenamic acid (MEF) was investigated by application of a nickel hydroxide nanoparticle/MWCNT modified GCE (MWCNT-NHNPs/GCE) using CV and DPV methods.
In another study, Pradhan et al. [71] employed a composite electrode for the amperometric detection of EP. The composite electrode was developed by electropolymerizing bromothymol blue (BTB) on a CPE bulk-modified with MWCNTs. Electropolymerization of BTB on this surface required much less energy compared to a plain CPE surface. The modification enhanced the current sensitivity towards EP by 5.5 times compared to the bare CPE. The sensor showed the optimum current response at physiological pH, and the response was linear for EP concentrations in the ranges 0.8-9.0 µmol L⁻¹ and 10.0-100 µmol L⁻¹. The detection limit was 8×10⁻⁷ mol L⁻¹. The amperometric response of EP remained unaltered even in the presence of a 50-fold excess of UA and AA and a 100-fold excess of L-tryptophan, L-tyrosine, L-cysteine and nicotinamide adenine dinucleotide. This sensor showed stability, reproducibility and antifouling properties, and was successfully applied for the determination of EP in blood serum and adrenaline injection.
Thomas et al. [72] developed an amperometric sensor for the determination of EP, fabricated by modifying a CPE with pristine multi-walled carbon nanotubes (pMWCNTs). Bulk modification was followed by drop-casting of sodium dodecyl sulfate (SDS) onto the surface for optimal performance. The analytical applicability of the modified electrode was demonstrated by determining EP in spiked blood serum and adrenaline tartrate injection.
Filho et al. [73] developed an electrochemical method for the single and simultaneous determination of DA and EP in human body fluids, using a GCE modified with nickel oxide nanoparticles and carbon nanotubes within a dihexadecyl phosphate film. SWV and DPV methods were applied. By using DPV with the proposed electrode, a separation of ca. 360 mV between the peak reduction potentials of DA and EP was obtained in binary mixtures. The detection limit of EP was determined as 8.2×10⁻⁸ mol L⁻¹.
Koteshwara Reddy et al. [74] reported an efficient electrochemical sensor for the selective detection of EP. It was fabricated with the aid of a functionalized MWCNT-chitosan biopolymer nanocomposite (Chit-f CNT) electrode. The MWCNTs were functionalized with the aid of nitric acid, as confirmed by Raman spectral data. The functionalized carbon nanotubes (f CNT) were dispersed in chitosan solution and the resulting bio-nanocomposite was used for the fabrication of the sensor surface by the drop-and-cast method. The electrochemical characteristics of the fabricated sensor were studied using CV and DPV analysis for the detection of EP in PBS (pH 7.4).
2. 4. Modified electrodes with nanoparticles and nanocomposites
Nanotechnology and nanoscience represent new and enabling platforms that promise to provide a broad range of novel uses and improved technologies for environmental, biological and other scientific applications. One of the reasons behind the intense interest is that nanotechnology permits the controlled synthesis of materials where at least one dimension of the structure is less than 100 nm. Nanostructured materials have also been incorporated into electrochemical sensors for biological and pharmaceutical analyses. They offer unique advantages, including enhanced electron transfer, large edge-plane/basal-plane ratios and rapid kinetics of the electrode processes.
In a study by Sadeghi et al. [75], a CPE was modified with zinc oxide (ZnO) nanoparticles, with 1,3-dipropylimidazolium bromide used as a binder. It was found that the oxidation of EP at the surface of this electrode occurs at about 80 mV less positive potential than at the unmodified CPE. DPV peak currents showed a linear relationship with the concentration of EP in the range of 0.09-800 µmol L⁻¹, with a detection limit of 0.06 µmol L⁻¹. The proposed sensor was successfully applied for the determination of EP in real samples.
As suggested by Babaei et al. [76], simultaneous determination of EP and ACT can be performed using a GCE modified with a MWCNT, nickel hydroxide nanoparticle (NHNP) and Mg-Al layered double hydroxide (LDH) composite (MWCNTs-NHNPs-LDH/GCE). Based on the DPV method, the oxidation of EP exhibited a dynamic range of 0.04-60 µmol L⁻¹ and a detection limit (3σ) of 11 nmol L⁻¹. This method was used for the determination of EP in real samples, using the standard addition method.
A gold nanoparticle/polyaniline nanocomposite thin film was deposited onto the surface of a GCE by Langmuir-Blodgett (LB) technology to fabricate a new voltammetric sensor (GNPs/PAn-LB-GCE) for EP and UA detection, as reported by Zou et al. [77]. The electrochemical behavior of EP and UA at the modified electrode was investigated in PBS (pH 6.6).
Silai et al. [78] have reported a modified electrode prepared by immobilizing Pt nanoparticles in a chitosan film. The influence of the experimental conditions (scan rate, frequency, pH) on the electrochemical behavior of EP was investigated by the CV method.
A novel MCM/ZrO2 nanoparticle modified CPE was fabricated and used by Mazloum-Ardakani et al. [79] to study the electrooxidation of EP and ACT and their mixtures. The modified electrode showed electrocatalytic activity toward EP and ACT oxidation, with a decrease of the overpotential by 173 mV for EP at the surface of the MZ-CPE and an increase in peak current at pH 7.0.
Jin and Zhang [80] used a nanogold-modified GCE obtained by electrodeposition, which can catalytically oxidize and accumulate EP. In this research, the effects of changes in pH and concentration of PBS on the electrochemical behavior of EP were studied. This modified electrode could be applied for the determination of EP in the presence of AA. DPV data showed that, under optimal conditions, the obtained anodic peak currents were linearly dependent on the EP concentration in the range of 1.0×10⁻⁶-1.0×10⁻⁴ mol L⁻¹.
Razavian et al. [81] developed and tested an electrochemical sensor for the detection of L-tyrosine in the presence of EP. The electrode was prepared by surface modification of a GCE with Nafion and cerium dioxide nanoparticles. The modified electrode exhibited a significant electrocatalytic effect on the oxidation of EP in a 0.2 mol L⁻¹ Britton-Robinson (BR) buffer solution (pH 2). The electro-oxidation peak current increased linearly with the EP concentration in the range of 5 to 220 µmol L⁻¹. By employing the DPV method for simultaneous measurements, two reproducible peaks for L-tyrosine and EP in the same solution, with a peak separation of about 443 mV, were detected.
A nitrogen-doped three-dimensional porous graphene (NG) modified electrode was fabricated by Yang et al. [82]. The obtained data showed that electrooxidation of EP at the modified electrode is greatly facilitated, which was ascribed to the excellent properties of NG. The modified electrode was used for the simultaneous determination of EP and metanephrine (MEP). DPV peak currents of EP increased linearly with concentration within the range of 1.0 µmol L⁻¹ to 1.0 mmol L⁻¹, with a sensitivity of 0.021 µA/(µmol L⁻¹) for EP. The detection limit for EP was ascertained to be 0.67 µmol L⁻¹. Additionally, the detection of EP and MEP was found possible in the presence of AA and UA. The modified electrode was applied to the detection of EP and MEP in human plasma samples, with recoveries from 98.9 % to 100.9 %, and in EP hydrochloride injections, with recoveries from 100.3 % to 104.6 %.
Chen and Ma [83] used a graphene-modified GCE obtained via a drop-casting method and applied it to the simultaneous detection of EP, UA and AA by the CV method in a PBS solution (pH 3.0). The oxidation potentials of EP, UA and AA at the graphene-modified GCE were 0.484, 0.650 and 0.184 V (vs. Ag/AgCl), respectively. The peak separations between EP and UA, EP and AA, and UA and AA were about 166, 300 and 466 mV, respectively.
A hybrid membrane consisting of aminated graphene (GR-NH2) and Ag nanoparticles (AgNPs) was prepared on the surface of a GCE by the CV method, where the aminated graphene acted as a matrix for immobilizing the AgNPs. The morphology and electrochemical properties of this hybrid membrane were characterized, together with the voltammetric behavior of EP, in a study by Huanhuanin et al. [84]. The membrane exhibited excellent electrocatalytic activity for the redox reaction of EP and resolved the electrochemical response of EP and UA into two oxidation peaks.
According to Mak et al. [85], organic electrochemical transistors (OECTs) were found to be excellent transducers for various types of biosensors. A highly sensitive EP sensor based on OECTs was prepared on glass substrates by a solution process. The device performance was optimized by immobilizing Nafion and carbon-based nanomaterials on the gate electrodes of the OECTs. The detection limit of the sensors was as low as 0.1 nmol L⁻¹, which covers the concentration levels of EP relevant to medical detection.
In a study performed by Beitollahi et al. [86], a CPE modified with vinylferrocene (VF) and CNTs was used for the sensitive and selective voltammetric determination of EP, which could be related to the strong electrocatalytic effect of the VF and CNTs towards this compound. The mediated oxidation of EP at the modified electrode was investigated by CV. SWV of EP at the modified electrode exhibited a linear dynamic range with a detection limit of 3.0×10⁻⁸ mol L⁻¹. SWV was also used for the simultaneous determination of EP and tryptophan at the modified electrode. Quantification of EP and tryptophan in some real samples was performed by the standard addition method.
Zhang et al. [87] described a facile preparation of a polydopamine (PDA)-nanogold composite modified GCE used for the sensitive simultaneous determination of EP, DA, AA and UA. Under mild spontaneous reaction conditions, DA, acting as both reducing agent and monomer, was mixed with HAuCl4, which served as the oxidant triggering DA polymerization and as the source of gold nanoparticles, to yield a composite of DA polymer and gold nanoparticles. These composite particles were then anchored on the GCE by electropolymerization of the remaining DA monomer. The resultant electrode exhibited excellent electrocatalytic redox activity toward EP, DA, AA and UA. Furthermore, although the oxidation peaks of EP and DA at the modified electrode appeared at the same potential of 230 mV (vs. Ag/AgCl), three well-defined oxidation peaks were obtained for AA, EP/DA and UA (50, 230 and 380 mV vs. Ag/AgCl).
In a study by Redin et al. [88], a green approach for the preparation of a carbon black (CB) and electrochemically reduced graphene oxide (ERGO) composite was described. The electrochemical sensors were based on screen-printed carbon electrodes (SPCEs) fabricated on poly(ethylene terephthalate) (PET). The SPCE/CB-ERGO sensor was tested with DA, EP and paracetamol (PCM), exhibiting an enhanced electrocatalytic performance compared to the bare SPCE.
In another study, Gupta et al. [89] synthesized a NiO/CNTs nanocomposite and applied it for the fabrication of a NiO/CNTs nanocomposite modified CPE (CPE/NiO/CNTs) as an SWV sensor for the determination of EP. The electrooxidation signal of EP showed an irreversible response at 0.3 V (vs. Ag/AgCl). The oxidation current of EP was doubled compared to a plain CPE. Under the best electrochemical conditions, the voltammetric oxidation signal of EP showed a linear dynamic range (0.08-900.0 µmol L⁻¹), with a detection limit of 0.01 µmol L⁻¹.
The electrochemical sensor developed by Anithaa et al. [90] for the simultaneous determination of EP and xanthine is based on gamma-irradiated SDS-WO3 nanoparticles. The fabricated sensor exhibited a wide linear range (0.009-1000 µmol L⁻¹) with a low detection limit (1.8 nmol L⁻¹) for EP.
Interferences from compounds present in biological media and pharmaceuticals
Interference studies were carried out with several chemical substances prior to the application of the proposed methods for the assay of EP in urine samples and injection solutions. The potential interfering substances were chosen from the group of substances commonly found with EP in pharmaceuticals and biological fluids. In biological environments, AA is commonly present with EP and may be oxidized at a similar potential to EP.
In the research performed by Kang et al. [46], CVs of EP and AA were respectively recorded at the L-glutamic acid-graphene/GCE. The results showed that the oxidation peak of EP is not affected by the presence of AA. This means that the modified electrode is able to distinguish EP from AA.
The influence of various foreign species on the determination of 50.0 µmol L⁻¹ EP, 100.0 µmol L⁻¹ AA and 50.0 µmol L⁻¹ UA was investigated by Taei et al. [53]. The tolerance limit was taken as the maximum concentration of the foreign substance(s) which caused an approximately ±5 % relative error in the determination. It was found that Mg²⁺, Ca²⁺, SO₄²⁻, Br⁻, K⁺, NO₃⁻, ClO₄⁻, glycine, glucose, sucrose, lactose, fructose, valine, aspartic acid, urea, and saturated starch solution did not interfere with the determination of these compounds. However, greater amounts of cysteine (40-fold), oxalate ion (100-fold), and citric acid (30-fold) did cause interference in the simultaneous determination of EP, AA and UA at the poly(fuchsine acid) modified GCE.
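A minimal sketch of the ±5 % tolerance-limit criterion used above (the signals are hypothetical peak currents, not data from [53]):

```python
# Tolerance screening: an interferent is tolerated if it changes the EP
# signal by no more than ±5 % at the tested excess level.
i_ep_alone = 4.00  # µA, hypothetical EP peak current without interferent
trials = {          # interferent (excess level): signal with it present, µA
    "glucose (1000-fold)":   4.08,
    "cysteine (40-fold)":    4.52,
    "citric acid (30-fold)": 3.71,
}
for name, i_mix in trials.items():
    rel_err = 100.0 * (i_mix - i_ep_alone) / i_ep_alone
    verdict = "tolerated" if abs(rel_err) <= 5.0 else "interferes"
    print(f"{name}: {rel_err:+.1f} % -> {verdict}")
```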
In their study, Li and Wang [54] illustrated that K⁺, Na⁺, Ca²⁺, Mg²⁺, sucrose and glucose do not interfere significantly, while L-glutamic acid, Cu²⁺ and Fe²⁺ ions showed some effect on the determination of EP and UA.
Sadeghi et al. [75] studied the influence of various substances as potential interfering compounds on the determination of EP by the SWV method under optimum conditions. The study was performed with a novel biosensor based on a ZnO nanoparticle/1,3-dipropylimidazolium bromide ionic liquid modified CPE. The tolerance limit was defined as the maximum concentration of an interfering substance, such as glucose, fructose, lactose, sucrose, tryptophan, histidine, glycine, valine, methionine, leucine, alanine, phenylalanine, Ca²⁺, Li⁺, ClO₄⁻, SO₄²⁻, SCN⁻, Na⁺, Mg²⁺, K⁺, AA, urea, cysteine or UA, that caused an error of less than 5 % in the determination of EP. The results showed that the peak current of EP was not affected by any of the conventional cations, anions or organic substances tested.
As stated by Babaei et al. [70], interferences of AA, L-glutamic acid, L-alanine, aspartic acid and aspirin in the determination of EP were significant only at relatively high concentrations, confirming that the proposed nickel hydroxide nanoparticle/MWCNT modified GCE (MWCNT-NHNPs/GCE) is likely to be free from interferences from common components of biological samples.
Wang et al. [68] have illustrated the influence of some metal ions and anions that usually exist in biological fluids on the determination of 5.0×10⁻⁵ mol/L EP. If a ±5 % error was allowed, 5.0×10⁻³ mol/L of K⁺, Na⁺, Fe²⁺, Mg²⁺, Cl⁻ and SO₄²⁻ did not show obvious interference at the modified GCE, fabricated by covering it with a layer of MWCNTs coated with hexadecyltrimethylammonium bromide (CTAB).
In another work, Apetrei et al. [63] investigated the influence of various interfering agents on the determination of EP. The interfering substances Na⁺, S₂O₅²⁻, Cl⁻, urea, tartaric acid, hydrochloric acid, glucose and glycine did not show any influence on the biosensor response when detecting EP. An absence of significant modification of the peak current recorded in the presence of interfering species was demonstrated. Therefore, tyrosinase immobilized on a single-walled carbon nanotube modified GCE (tyrosinase/SWCNT-GCE) can be considered a good biosensor for recognition of EP.
The influence of various foreign species on the determination of 50 μM EP was investigated by Mekassa et al. [60] under optimum experimental conditions.Potentially interfering substances were chosen from the group of substances commonly found with EP in pharmaceutical formulations and biological fluids.The tolerance limit was defined as the maximum concentration of the foreign substance(s) that caused an approximately ±5 % relative error in the determination of EP.According to the obtained results, AA, citric acid, D-glucose, lactose, glycine, Mg 2+ , Ca 2+ , Na + , and K + did not show any interference effect in the determination of EP.
The study of Vieira da Silva [61] showed that knowledge of the influence of interferences on the electrode response can be used to set up the sample preparation so as to minimize their effects. Interference from electroactive compounds typically present in a physiological sample (e.g., serotonin (SER), AA and UA) commonly hinders the accurate determination of EP. The selectivity of the sensor was therefore examined in the presence of SER, AA and UA.
Analytical performances of electrochemical epinephrine sensors
The analytical performances of electrochemical methods depend on the sensor's construction and some of the most illustrative examples are extensively reviewed in Table 1.
Some applications of electrochemical epinephrine sensors in pharmaceutical and biological fluid analysis
Electrochemical EP sensors have widespread application in pharmaceutical and biomedical analysis, as shown in Table 2.
Conclusions
In the past five years, the utilization of electroanalytical methods for pharmaceutical analysis has significantly increased, especially for EP assessment. However, there is a limited number of publications concerning the combination of pre-concentration and electrochemical detection of EP. Electrochemical techniques are often preferred to laborious instrumental methods for EP determination, owing to the simplicity of procedure and instrumentation, minimal requirements with respect to sample pretreatment, as well as fast response, sensitivity and low cost. Also, accurate results can be obtained in real time and in complex media. Different modalities of sensor development already described in the literature are presented, starting from bare to chemically modified sensors. Recent advances imply the use of carbon nanotubes and various composites, whose large surface area and electrocatalytic activity greatly enhance the analytical signal, diminish the peak potential corresponding to EP oxidation and solve peak-overlapping problems in complex samples. Provided that adequate pretreatment and cleaning steps are included, several examples of viable EP determination in various media performed with bare electrodes, even in the presence of interfering compounds, are also presented. Method performances and application areas depend on the chosen electrochemical technique. It can be generally concluded that the different ways of constructing sensor electrodes, and their expected performances, are tuned to the nature of the analysed compound and the respective matrix. The nature of the electrode material and the surface groups formed, as well as their interaction with analyte molecules, greatly influence the electrooxidation rate, as do the pH value of the analysed matrix and the electrolyte type, which also affect the peak potential and height. The mechanism and rate of electrooxidation are strongly dependent on the following factors: electrode nature and modifiers, electrode pre-treatment, surface groups, pH, electrolyte and the presence of other compounds. The interaction between the form of the analyte molecule present at a given pH value (range) and the functional groups of the electrode/modifier layer is found to be essential for determining the electrooxidation rate and the electrode response.
Table 1. Some analytical performances attained in electrochemical determination of EP.
Table 2. Numerical data on EP content determined in various analysed systems.
"Materials Science"
] |
Chronic HIV Infection Increases Monocyte NLRP3 Inflammasome-Dependent IL-1α and IL-1β Release
Antiretroviral treatment (ART) has converted HIV from a lethal disease to a chronic condition, yet co-morbidities persist. Incomplete immune recovery and chronic immune activation, especially in the gut mucosa, contribute to these complications. Inflammasomes, multi-protein complexes activated by innate immune receptors, appear to play a role in these inflammatory responses. In particular, preliminary data indicate the involvement of IFI16 and NLRP3 inflammasomes in chronic HIV infection. This study explores inflammasome function in monocytes from people with HIV (PWH); 22 ART-treated with suppressed viremia and 17 untreated PWH were compared to 33 HIV-negative donors. Monocytes were primed with LPS and inflammasomes activated with ATP in vitro. IFI16 and NLRP3 mRNA expression were examined in a subset of donors. IFI16 and NLRP3 expression in unstimulated monocytes correlated negatively with CD4 T cell counts in untreated PWH. For IFI16, there was also a positive correlation with viral load. Monocytes from untreated PWH exhibit increased release of IL-1α, IL-1β, and TNF compared to treated PWH and HIV-negative donors. However, circulating monocytes in PWH are not pre-primed for inflammasome activation in vivo. The findings suggest a link between IFI16, NLRP3, and HIV progression, emphasizing their potential role in comorbidities such as cardiovascular disease. The study provides insights into inflammasome regulation in HIV pathogenesis and its implications for therapeutic interventions.
Introduction
Efficient antiretroviral treatment (ART) has transformed HIV from a lethal disease to a chronic condition. HIV is, however, associated with significant co-morbidities such as non-AIDS defining cancer and cardiovascular disease (CVD), partly related to incomplete immune recovery and persistent inflammation [1]. The initial destruction of the immune system and in particular CD4 T cells, which is a hallmark of untreated progressive HIV infection, occurs to a large degree in the gut-associated lymphoid tissues [2][3][4]. This causes an impaired mucosal barrier, exposing the systemic circulation and subsequently other organs to pathogen-associated molecular patterns (PAMPs) (e.g., lipopolysaccharide [LPS], flagellin and lipoteichoic acid [LTA]) and metabolites (e.g., trimethylamine-N-oxide [TMAO]) from the gut microbiota, thereby contributing to a state of chronic immune activation and inflammation [5][6][7][8][9]. Furthermore, co-infection with cytomegalovirus (CMV) or Epstein-Barr virus (EBV) frequently occurs and contributes to the chronic low-grade inflammation in PWH [10,11].
Inflammasomes are multiprotein complexes that can be activated by one of several cytosolic innate immune receptors. The activated receptor binds an adaptor protein termed ASC, which in turn binds to caspase-1, which subsequently cleaves and activates the potent inflammatory cytokine pro-interleukin (IL)-1β and the pore-forming protein gasdermin D, resulting in a massive release of IL-1β and other cytosolic content into the extracellular space [12,13]. In most cell types, inflammasome activation requires two steps. In the first "priming" step, the pro-inflammatory transcription factor NF-κB is activated by innate immune receptors sensing microbial components like LPS or cell damage, or by pro-inflammatory cytokines from nearby cells. This leads to the transcription of both inflammasome receptors, such as NLRP3, and inactive pro-IL-1β. In the second step, the inflammasome receptors sense one form or another of danger molecules indicating cellular damage, such as extracellular adenosine triphosphate (ATP) or monosodium urate crystals. Some receptors, such as IFI16, recognize cytosolic microbial DNA [14]. IFI16 may also be crucial for suppressing reactivation of latent EBV [15]. NLRP3 is a special inflammasome-forming receptor because it senses nearby non-apoptotic cell death indirectly, through drastic changes in the cytosol induced by extracellular damage-associated molecular patterns (DAMPs) acting on surface receptors [16]. Recently, CMV was shown to activate NLRP3 inflammasomes in the THP-1 monocyte cell line [17].
Preliminary data suggest the involvement of IFI16 and NLRP3 inflammasomes in chronic HIV infection. IFI16 has been reported to be required for the death of lymphoid CD4 T cells infected with HIV [18]. Inflammasome components seem to be upregulated in monocytes in PWH, as immunological responders (IR) had lower NLRP3 and caspase-1 levels, but not IFI16 levels, compared to immunological non-responders (INR) [19]. However, whether chronic HIV infection alters the release of the prototypical NLRP3 product, IL-1β, from monocytes remains to be investigated. Moreover, whereas most literature has focused on IL-1β, IL-1α in the context of HIV has gained less attention. IL-1α is synthesized in an active form, is not a substrate for caspase-1, and its regulation and release are more complex [20]. However, the inflammasome-induced gasdermin D pores in the plasma membrane promote maturation and release of IL-1α, making this cytokine both more potent and abundant in the extracellular space [21]. Hence, both IL-1α and IL-1β depend on inflammasome activation to effectively mediate their pro-inflammatory signaling.
The ability to release the two IL-1 cytokines is of potential relevance for HIV-related comorbidities.We have previously shown that soluble markers of IL-1 activation predict first-time myocardial infarction, independent of HIV-related and traditional risk factors [22], as well as faster lung function decline independent of smoking [23].In the present study, we investigated whether IFI16 and NLRP3 expression in human monocytes were associated with HIV viremia and CD4 T cell depletion.We also explored NLRP3 inflammasome function in monocytes from ART-treated and untreated PWH, quantifying both IL-1α and IL-1β secretion.Finally, we investigated inflammasome priming, hypothesizing that monocytes from PWH are pre-primed for inflammasome activation upon ATP stimulation.
PWH and HIV-Negative Donors
PWH were on average 47 years of age, predominantly male. Treated PWH had received ART for a median of 12 years, all were virally suppressed (<50 copies/mL plasma) and had a median CD4+ T cell count of 628 cells/µL, whereas untreated PWH had a median CD4+ T cell count of 410 cells/µL. The patient characteristics are shown in Table 1.

IFI16 and NLRP3 mRNA expression in unstimulated monocytes was examined first (Figure 1A,B). Both IFI16 and NLRP3 correlated negatively with CD4 T cell counts, indicating increased levels in relation to HIV-related immunodeficiency (Figure 1C,D). When stratified into three groups with high, medium, or low viral load (Figure 1E), IFI16 mRNA expression was significantly higher in the group with high viral load compared to medium or low viral load, as well as HIV-negative donors (Figure 1F). A more complex pattern was seen for NLRP3 mRNA expression. Thus, although NLRP3 mRNA expression was seemingly higher in PWH with high compared to low viral load, NLRP3 expression was significantly reduced in monocytes from PWH compared to HIV-negative donors (Figure 1G). Altogether, whereas the associations with viral load were more complex and not significant for NLRP3, our data support that HIV infection induces IFI16 in human monocytes, and both IFI16 and NLRP3 expression is higher in PWH with more pronounced immunodeficiency (i.e., with lower CD4 T cell counts).

We next investigated the release of IL-1α and IL-1β, as well as TNF as a cytokine that is released by "signal 1" alone, from monocytes first primed with medium only or LPS as signal 1 for 1-6 h and subsequently activated with extracellular ATP as signal 2 for 30 min (Figure 2A-C). Cytokine secretion from HIV-negative donor cells and cells from ART-treated PWH was of similar levels, although a slightly reduced IL-1α secretion was observed in monocytes from treated PWH. However, monocytes from untreated PWH secreted more IL-1α and IL-1β than both treated PWH and HIV-negative donors. These data further support that chronic HIV infection may increase inflammasome activity in monocytes and that this release could at least partly be attenuated by ART. Furthermore, monocytes from untreated PWH also secreted more TNF, an NLRP3-independent cytokine, in response to LPS. This finding suggests that the priming step may be more efficient in monocytes from untreated PWH compared to monocytes from treated PWH and HIV-negative donors.

We further investigated whether these differences were due to cytokine priming or alternatively caused by an increase in inflammasome components. RT-PCR quantification revealed significantly increased IL-1β mRNA in LPS-stimulated monocytes from untreated PWH compared to that of monocytes from treated PWH (Figure 3B). IL-1β mRNA levels in untreated PWH were also higher than those of healthy control cells. Monocytes from both treated and untreated PWH had an earlier and higher increase of TNF mRNA after LPS stimulation than monocytes from HIV-negative donors (Figure 3C), in line with the later increase of TNF protein in untreated PWH (Figure 2C). There were no statistically significant differences in IL-1α, NLRP3, or IFI16 mRNA expression between the groups (Figure 3A,D,E). These data support that the increased cytokine secretion is mediated through more efficient cytokine synthesis rather than increased levels of inflammasome components, and that both signal 1 and in particular signal 2 are more efficient in untreated PWH.
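As a sketch of how relative mRNA levels like those above are typically derived from RT-PCR data (the 2^-ΔΔCt method; the Ct values and reference gene below are hypothetical, not the study's data):

```python
# Relative IL-1beta mRNA quantification by the 2^-ddCt method.
# 'ref' is an assumed housekeeping gene (e.g., GAPDH); Ct values are made up.
ct = {
    "IL1B_untreated_PWH": 22.1, "ref_untreated_PWH": 18.0,  # LPS-stimulated
    "IL1B_control":       24.6, "ref_control":       18.1,  # LPS-stimulated
}
d_ct_pwh = ct["IL1B_untreated_PWH"] - ct["ref_untreated_PWH"]
d_ct_ctl = ct["IL1B_control"] - ct["ref_control"]
fold_change = 2.0 ** -(d_ct_pwh - d_ct_ctl)
print(f"IL-1beta fold change, untreated PWH vs HIV-negative: {fold_change:.1f}x")
```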
Monocytes from PWH Are Not Primed for Inflammasome Activation In Vivo
To investigate whether monocytes in PWH with chronic HIV infection are primed for inflammasome activation in vivo, we activated monocytes from the same cohort with ATP without any LPS priming in vitro. There was no increase in IL-1α, IL-1β, or TNF release in monocytes from treated or untreated PWH compared to HIV-negative donor cells (Figure 4A-C). Furthermore, no differences in IL-1α and IL-1β levels were seen in plasma from PWH, treated or untreated, compared to HIV-negative donors (Figure 5A,B). These data could indicate that circulating monocytes in PWH are not significantly more primed for NLRP3 inflammasome activation than circulating monocytes of HIV-negative donors, at least when ATP is used as signal 2.
Discussion
In this study, we investigated the expression of IFI16 and NLRP3 in circulating monocytes from ART-treated and untreated PWH. Our main findings can be summarized as follows: (i) IFI16 and NLRP3 expression correlated negatively with CD4 T cell counts, indicating that more progressive disease is related to upregulation of inflammasome proteins in monocytes; (ii) IFI16, but not NLRP3, was higher among PWH with high viral load; (iii) monocytes from untreated PWH secreted more IL-1α and IL-1β, as well as TNF, than monocytes from treated PWH upon stimulation; (iv) monocytes from treated or untreated PWH were not pre-primed in vivo for inflammasome activation by ATP.
IFI16 and NLRP3 are important sensors in the innate immune response to HIV. Both IFI16 and NLRP3 are implicated in CD4 T cell depletion during HIV disease progression [24,25]. While IFI16 seems to induce pyroptosis in cells undergoing abortive infection, NLRP3 may induce pyroptosis in infected cells. Our findings herein suggest that both IFI16 and NLRP3 are regulated, with higher expression in untreated PWH as compared with treated PWH. NLRP3 and IFI16 mRNA levels in monocytes were inversely correlated with CD4 T cell counts, and for IFI16 there was a gradual rise in relation to increasing viral load, further suggesting a link with HIV progression.
Our findings may also be relevant in relation to HIV-related comorbidities, which are an increasing challenge even in the ART era. Thus, NLRP3 seems to link HIV infection to comorbidities such as atherosclerotic cardiovascular disease [26] and neuroinflammation [27], both by inducing CD4 T cell depletion, a predictor of comorbidities [28], and by fueling chronic low-grade inflammation, in particular IL-1 activity, which has been linked to atherosclerotic events in PWH [22] and in the general population [29].
IFI16 recognizes lentiviral DNA in macrophages and CD4 T cells [23]. IFI16 is complex because it can both assemble inflammasomes and activate the IRF3 transcription factor, inducing interferon-β transcription [30,31]. This makes the implication of IFI16 in HIV pathogenesis logical and predictable. The role of NLRP3, on the other hand, is less obvious. However, NLRP3 is a remarkable cytosolic innate immune receptor, indirectly sensing the death of other cells without sensing a particular ligand [16]. Similarly, NLRP3 inflammasomes may be activated by pore-forming toxins from microbes [11]. Phagocytes, such as macrophages and neutrophils, may also activate NLRP3 through phagocytosis of crystals made from urate or cholesterol, leading to similar intracellular distress due to "frustrated phagocytosis" [32]. Thus, at some stage during an HIV infection, IFI16-mediated pyroptosis may induce and activate NLRP3 in bystander cells, which in turn may contribute to chronic inflammation through additional release of IL-1α and IL-1β, with or without pyroptosis. Recently, however, multiple inflammasome receptors, including NLRP3, were reported to synergize in pro-inflammatory cell death in bone marrow-derived mouse macrophages, a process termed PANoptosis [33]. Although IFI16 was not investigated, this new concept of inflammasome activation leads to the hypothesis that IFI16 and NLRP3 may have simultaneous and converging actions during HIV-induced CD4 T cell depletion.
In the present study, IL-1β and TNF release from monocytes of untreated PWH was increased compared to monocytes from both treated PWH and HIV-negative donors (Figure 2B), which also correlated, as expected, with the pro-IL-1β mRNA expression (Figure 3B). TNF mRNA also featured the expected kinetic profile relative to its protein counterpart [34,35]. IL-1α release from monocytes of untreated PWH was also increased compared to that of treated PWH (Figure 2A). However, we found no significant differences in NLRP3, IFI16, or IL-1α mRNA expression in monocytes between these groups after LPS priming. Thus, the increased IL-1α release from monocytes of untreated PWH does not seem to be explained by the levels of inflammasome receptor components or IL-1α synthesis. The regulation of IL-1α secretion is complex and may also be influenced by other factors not investigated in this study.
ATP did not induce IL-1α/IL-1β release from monocytes without LPS priming in vitro in any of the patient groups, including untreated PWH. Hence, our data do not suggest that circulating monocytes in PWH, whether ART-treated or not, are pre-primed for inflammasome activation. This contrasts with our hypothesis that PWH would have already-primed inflammasomes due to microbial translocation and low-grade immune activation [5]. However, our findings correspond to a recent report showing that although the NLRP3 inflammasome was upregulated in PWH with defective immune recovery, markers of microbial translocation were not elevated compared to immunological responders [19]. Notwithstanding, our findings also suggest that when exposed to both signal 1 and signal 2, monocytes from untreated PWH released increased levels of IL-1α and IL-1β, and this could clearly be relevant within microenvironments such as the gastrointestinal tract, where the cells are exposed to higher levels of LPS (signal 1), and within atherosclerotic lesions, where the cells are exposed to cholesterol crystals as signal 2 in NLRP3 activation [36]. Moreover, for PWH, in particular ART-naïve individuals [37], NLRP3 activation may be facilitated in vivo within the microenvironment due to a disturbed redox status [38].
A stronger response to danger signals could be relevant for vulnerability to certain risk factors for comorbidities, including cholesterol crystals, which are known to activate the NLRP3 inflammasome in the atherosclerotic process [31]. In our previous work, we showed that soluble markers of IL-1 activation measured at multiple time points over several years predicted first-time MI in PWH, but that the IL-1 activity remained mainly unchanged after ART initiation [39]. Of note, this increased risk was independent of HIV-related and traditional risk factors, including lipid profiles [22]. Moreover, in the present study, we also show increased release of the alpha isoform of IL-1 from monocytes of untreated PWH, suggesting the involvement of gasdermin D. Notably, a preclinical study has suggested that IL-1α blockade affected early atherosclerosis, whereas anti-IL-1β treatment, but not IL-1α neutralization, limited progression and inflammation in established lesions [40]. These findings could also have implications for the management of atherosclerotic diseases in humans, including PWH, as canakinumab, a monoclonal antibody against IL-1β, in contrast to anakinra, an IL-1 receptor blocker, does not inhibit IL-1α.
The present study has some limitations, such as a relatively low number of individuals and a lack of longitudinal data. The monocytes were isolated by plastic adherence. Although this method yields a purity of >90% monocytes [41], the cell culture is not completely devoid of lymphocytes, which may affect the experiments. Moreover, the lack of protein data for IFI16 and NLRP3 weakens our data on these molecules. Furthermore, because of a lack of cell material, we were not able to examine other relevant functions in monocytes, such as gasdermin D-dependent cell death. Moreover, correlations and associations do not necessarily imply any causal relationship, and we lack mechanistic data that could support our association data. Another major limitation is that IL-1α and IL-1β protein were detected with ELISA only and not by other methods such as Western blot. However, our findings further support the involvement of IFI16 and NLRP3 inflammasomes in HIV pathogenesis, involving increased release of both IL-1 isotypes, which may also be related to the increased occurrence of certain comorbidities, such as cardiovascular disorders, in PWH.
Study Cohort
We included 20 untreated PWH, 22 ART-treated PWH, and 35 sex- and age-matched HIV-negative donors. Untreated PWH were recruited immediately prior to starting ART. Five participants were excluded due to experimental errors (2 HIV-negative donors, 3 untreated PWH) (Table 1). Of the remaining participants, RNA was collected from 54 participants (28 HIV-negative donors, 9 treated PWH, and 17 untreated PWH).
Cell Counts, Monocyte Isolation, Culture, and Stimulation
CD4 T cell counts and HIV viral load in peripheral blood were measured as part of clinical practice. For monocyte isolation, whole blood was collected into an 8 mL sodium-heparinized CPT vacutainer and inverted 10 times to ensure homogenization of the sodium heparin anticoagulant and blood. The vacutainer was centrifuged at 1740× g for 20 min at room temperature, resulting in an upper layer of plasma over a cloudy band of PBMC. The PBMC layer and most of the plasma were pipetted into a 50 mL Falcon tube and the cells were spun down at 300× g for 10 min. Plasma was pipetted off and stored at −80 °C until analysis. The cell pellet was carefully resuspended and washed twice in RPMI 1640 (300× g for 10 min). The cells were then counted and seeded into 24-well Nunclon Delta surface culture dishes at 300,000 cells/mL in RPMI 1640 without serum for 1 h to allow monocytes to adhere. The cells were then washed twice with RPMI 1640 to remove lymphocytes (purity of >90% monocytes [41]) and further cultured overnight in RPMI 1640 medium containing stable glutamine, 25 mM HEPES, 10% heat-inactivated FBS, 5 U/mL penicillin, and 50 µg/mL streptomycin. For NLRP3 inflammasome activation, monocytes were primed with LPS (0.1 ng/mL) for 1, 2, 4, or 6 h prior to harvesting, and then activated with 3 mM ATP for the last 30 min.
ELISA
Conditioned media were collected and centrifuged at 300× g for 10 min to remove any detached cells, then stored at −80 °C until analysis. IL-1β and tumor necrosis factor (TNF) were quantified with DuoSet ELISA Kits (R&D Systems, Minneapolis, MN, USA). IL-1α was quantified with the ELISA MAX Deluxe Set Human IL-1α (sensitivity 0.6 pg/mL) (BioLegend, San Diego, CA, USA).
mRNA Quantification
RNA was extracted using the RNeasy Mini Kit (QIAGEN). cDNA was synthesized using qScript cDNA SuperMix (Quantabio, Beverly, MA, USA). Real-time RT-PCR was performed with Brilliant III Ultra-Fast SYBR Green QPCR Master Mix (Agilent Technologies, Santa Clara, CA, USA) on a 7900HT Fast Real-Time PCR System (Thermo Fisher Scientific). Primer sequences are listed in Table 2. Relative gene expression was calculated using the ∆∆CT method.
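As an illustration of this calculation, a minimal Python sketch of the ∆∆CT fold-change is given below; the CT values are hypothetical, and roughly 100% amplification efficiency is assumed.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change by the delta-delta-CT method (assumes ~100% efficiency).

    ct_target, ct_ref: CT of the target and reference gene in the sample.
    ct_target_cal, ct_ref_cal: CT values in the calibrator (control) sample.
    """
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-ddct)

# Hypothetical CT values: target 24.1 vs. reference 18.0 after stimulation,
# against 26.5 / 18.2 in the unstimulated calibrator -> ~4.6-fold induction.
print(relative_expression(24.1, 18.0, 26.5, 18.2))
```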
Statistical Methods
Data were analyzed in GraphPad Prism ver. 6.0. Correlations were investigated with linear regression analysis. For comparison of several means to a control mean, Dunnett's test was used. For comparing selected independent means, Sidak's multiple comparisons test was used. For comparing every mean with every other mean, Tukey's multiple comparisons test was used.
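For illustration only, the sketch below mirrors this comparison scheme with SciPy's implementations (assuming tukey_hsd from SciPy >= 1.8 and dunnett from SciPy >= 1.11); the group values are synthetic placeholders, and GraphPad Prism remains the software actually used here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
controls = rng.normal(100, 20, 33)    # HIV-negative donors (placeholder data)
treated = rng.normal(110, 20, 21)     # ART-treated PWH (placeholder data)
untreated = rng.normal(140, 20, 17)   # untreated PWH (placeholder data)

# Every mean against every other mean (cf. Tukey's multiple comparisons test)
print(stats.tukey_hsd(controls, treated, untreated))

# Several means against one control mean (cf. Dunnett's test)
res = stats.dunnett(treated, untreated, control=controls)
print(res.pvalue)   # p-values for treated-vs-control and untreated-vs-control
```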
Ethics
The human samples (plasma/cDNA) were stored in biobanks approved by the Regional Committee for Medical Research Ethics South-Eastern Norway (permit numbers 2012/521, 2015/629, and 33256), and the study was conducted according to the ethical guidelines outlined in the World Medical Association's Declaration of Helsinki for the use of human tissue and subjects. All participants gave written informed consent.
Figure 1. The correlation of inflammasome expression in monocytes with viral load and CD4 numbers in PWH (n = 21). (A) Linear regression analysis reveals that IFI16 mRNA expression in monocytes increases with higher viral load. (B) A similar correlation pattern with NLRP3 mRNA expression was not significant. (C,D) IFI16 and NLRP3 mRNA expression in monocytes decrease with increasing CD4 counts (linear regression analysis). (E) PWH were stratified according to viral load: high viral load (HVL, >10,000 copies/mL, n = 9), medium viral load (MVL, 1000-9999 copies/mL, n = 5), and low viral load (<1000 copies/mL, n = 7). Lines are mean viral load in blood (copies/mL) with standard error of the mean (SEM). (F,G) IFI16 and NLRP3 mRNA expression were quantified in the stratified patient groups presented in (E) and compared to gender- and age-matched HIV-negative donors (HC, n = 9). Mean with SEM are indicated in the column dot plots. *** p < 0.001, ** p < 0.01, * p < 0.05 compared to HIV-negative donors (HC) (F,G) (Dunnett's multiple comparisons test) or as indicated (Sidak's multiple comparisons test (G)). ns: not significant.
Figure 4. No sign of in vivo priming of monocytes from PWH. (A,B) Monocytes were incubated with medium only for 6.5 h or exposed to 3 mM ATP for the last 30 min as indicated. IL-1α and IL-1β were quantified in the conditioned media and the levels of ATP-treated cells were compared to the corresponding untreated control cells (Sidak's multiple comparisons test). There are no statistically significant differences (ns). (C) TNF was quantified in the same conditioned media as indicated and all mean levels were compared to each other (Tukey's multiple comparisons test). There are no statistically significant differences. HIV-negative donors (controls) (n = 33), treated PWH (n = 21), and untreated PWH (n = 17).
Figure 5. Inflammasome-dependent cytokines in plasma from PWH and HIV-negative donors (controls). (A,B) Plasma was obtained from HIV-negative donors (n = 33), treated PWH (n = 21), and untreated PWH (n = 17). IL-1α and IL-1β were quantified with ELISA. Columns are mean with SEM. Means were compared with Tukey's multiple comparison test; there are no statistically significant differences.
Table 1. PWH and HIV-negative control demographics.

We first measured IFI16 and NLRP3 mRNA expression in monocytes from 17 untreated PWH. IFI16 expression, but not NLRP3, correlated positively with viremia, mostly driven by individuals with high viral load (Figure | 6,175.4 | 2024-06-28T00:00:00.000 | [ "Medicine", "Biology" ] |
A rapid flow strategy for the oxidative cyanation of secondary and tertiary amines via C-H activation
An efficient continuous flow protocol has been developed for C-H bond activation, which promotes the α-cyanation of secondary and tertiary amines using magnetic nano-ferrites.
Synthesis and characterization of catalyst
Magnetic nano-ferrites (Fe3O4) were synthesized according to reported methods 26 and further characterized by X-ray powder diffraction (XRD), scanning electron microscopy (SEM), energy-dispersive X-ray analysis (EDX), and X-ray photoelectron spectroscopy (XPS). The XRD (Fig. 1) and SEM (Fig. 2) results confirmed the formation of single-phase Fe3O4 nanoparticles. The presence of iron was further supported by XPS (Figure S2) and EDX (Figure S3).
Results and Discussion
The magnetic nano-ferrites (Fe3O4) were then employed for C-H bond activation and the resulting cyanation of amines in a 1/16″ (0.8 mm i.d., 10 m length) stainless steel coiled-tube flow reactor (5.03 mL total internal dead volume). The coil reactor is totally immersed in a Paratherm® NF mineral oil bath. The oil bath was set upon a magnetic hot plate and continuously stirred to maintain a uniform temperature. The temperature attained by the oil bath facilitates efficient heating of the coiled reactor, allowing transfer of heat via the thin film of reaction mixture flowing within the coiled tube. This allows the reaction mixture (i.e., the thin film), together with the nano-ferrite catalyst, to rapidly attain the reaction's activation energy. The reaction mixture was pumped through the pre-heated coil via the inlet port using a peristaltic pump. This facilitated not only the lateral movement of the reaction mixture within the heated reaction zone, but also created a consistent and well-mixed reaction fluid within the reactor. The reaction output was then collected at the exit port. Several experimental trials for the cyanation of N,N-dimethylaniline were performed to establish optimized reaction conditions. All reactions were conducted in the presence of magnetic nano-ferrites (Fe3O4) while varying temperature and flow rate (Table 1, entries 1-15). A reaction mixture was first prepared by dissolving N,N-dimethylaniline in a water and methanol solution (1:1 ratio). Then, 25 mg of Fe3O4 catalyst, NaCN (1.1 mmol), and 30% (aq) hydrogen peroxide (1 mmol) were added (Fig. 3). This mixture was then pumped through the coil reactor at room temperature and ambient pressure. Several reactions were performed at room temperature while varying the flow rate (and consequently the residence time) (Table 1, entries 13-15). Upon optimizing the reaction conditions, the scope of the reaction was explored using a variety of tertiary and secondary amines (Table 2, entries 1-6). Importantly, the presence of either an electron-withdrawing group (e.g., a bromo group at the ortho- or para-position; Table 2, entries 2-3) or an electron-donating group (e.g., a methyl group at the ortho- or para-position; Table 2, entries 4-5) did not affect the reaction rate, and full conversion to the desired products was achieved.
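Since the mean residence time in such a coil reactor is simply its internal volume divided by the volumetric flow rate, varying the flow rate maps directly onto residence time. The short Python sketch below illustrates this relation; the 0.8 mm/10 m geometry and 5.03 mL volume are taken from the text, while the flow rates are arbitrary examples.

```python
import math

# Coil geometry from the text: 0.8 mm i.d., 10 m long -> ~5.03 mL volume.
radius_cm, length_cm = 0.04, 1000.0
volume_ml = math.pi * radius_cm ** 2 * length_cm   # ~5.03 cm^3 = mL
print(f"internal volume ~ {volume_ml:.2f} mL")

def residence_time_min(flow_rate_ml_min: float) -> float:
    """Mean residence time (min) = reactor volume / volumetric flow rate."""
    return volume_ml / flow_rate_ml_min

for q in (0.5, 1.0, 2.0):   # illustrative flow rates [mL/min]
    print(f"{q:.1f} mL/min -> {residence_time_min(q):.1f} min")
```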
A plausible mechanism has been proposed for the oxidative cyanation of amines, wherein the reaction follows an oxidative and reductive pathway. Iron(II), 1, reacts with H2O2, leading to the formation of a reactive oxo-iron(IV) species, 2, which subsequently reacts with a tertiary amine to give an iminium ion, 4. This intermediate, 4, reacts with in situ generated HCN and delivers the corresponding α-aminonitrile (Fig. 4) 19.
Conclusions
Magnetic nano-ferrites coupled with a continuous flow reactor provide a more sustainable approach for the synthesis of α-aminonitriles via C-H activation. This efficient method generates the desired products in less than 10 min of reaction time. The strategy offers major improvements over previously reported methods, which often require longer reaction times and higher temperatures, rendering it more attractive in terms of efficiency and ease of synthesis. Moreover, the approach is further simplified because the nano-ferrites can be easily separated with an external magnet upon completion of the reaction. The recovered nano-ferrites can then be reused without any demonstrated loss of catalytic activity (Supplementary Information). Therefore, in terms of cost and energy, the developed protocol is an appealing and improved alternative to conventional methods for C-H activation.
Disclaimer. The views expressed in this article are those of the authors and do not necessarily represent the views or policies of the U.S. Environmental Protection Agency. Any mention of trade names or commercial products does not constitute endorsement or recommendation for use. | 1,038.8 | 2017-11-24T00:00:00.000 | [ "Biology", "Chemistry", "Materials Science" ] |
Coronal loop kink oscillation periods derived from the information of density, magnetic field, and loop geometry
Context. Coronal loop oscillations can be triggered by solar eruptions, for example, and are observed frequently by the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO). The Helioseismic and Magnetic Imager (HMI) on board SDO offers us the opportunity to measure the photospheric vector magnetic field and carry out solar magneto-seismology (SMS). Aims. By applying SMS, we aim to verify the consistency between the observed period and the one derived from the information of coronal density, magnetic field, and loop geometry, that is, the shape of the loop axis. Methods. We analysed the data of three coronal loop oscillation events detected by SDO/AIA and SDO/HMI. First, we obtained oscillation parameters by fitting the observational data. Second, we used a differential emission measure (DEM) analysis to diagnose the temperature and density distribution along the coronal loop. Subsequently, we applied magnetic field extrapolation to reconstruct the three-dimensional magnetic field and then, finally, used the shooting method to compute the oscillation periods from the governing equation. Results. The average magnetic field determined by magnetic field extrapolation is consistent with that derived by SMS. A new analytical solution is found under the assumption of an exponential density profile and uniform magnetic field. The periods estimated by combining the coronal density and magnetic field distribution and the associated loop geometry are closest to the observed ones, and are more realistic than when the loop geometry is regarded as being semi-circular or having a linear shape. Conclusions. The period of a coronal loop is sensitive not only to the density and magnetic field distribution but also to the loop geometry.
Introduction
Coronal loop oscillations, which are frequently triggered by occasional explosions, such as coronal mass ejections (CMEs) or magnetic flux rope eruptions, can be used to diagnose the physical parameters of the local plasma environment, which are difficult to measure directly (Roberts et al. 1984). In particular, the well-characterised transversal kink oscillation is a typical mode of coronal loop oscillations, which was first detected by the Transition Region And Coronal Explorer (TRACE) in 1999 (Aschwanden et al. 1999; Schrijver et al. 1999, 2002; Nakariakov et al. 1999). Approximating a coronal loop as a magnetic flux tube with uniform magnetic field strength and density distribution, the Alfvén speed can be estimated by measuring the period of the kink oscillations (Roberts et al. 1984; Ruderman & Erdélyi 2009; Aschwanden & Schrijver 2011; Aschwanden et al. 2013). With an empirical ratio of external to internal density, namely $\varepsilon = n_e/n_i \sim 0.1$ (Nakariakov et al. 1999; Nakariakov & Ofman 2001), the magnitude of the average magnetic field strength can then be estimated (Roberts et al. 1984).
Early observations revealed the fundamental mode of kink oscillations. The first overtone of coronal loop kink oscillations was detected for the first time by analysing the high temporal and spatial resolution data from TRACE (Verwichte et al. 2004). The ratio between the period of the fundamental mode and that of the first overtone was found to deviate from 2, the canonical value for a straight loop with uniform magnetic field and density distribution, implying non-uniformity of the coronal loops. Since then, with the commissioning of the Solar Dynamics Observatory (SDO, Pesnell et al. 2012), finer coronal loop oscillation events with the first overtone have been observed (Guo et al. 2015; Pascoe et al. 2016; Li et al. 2017; Duckenfield et al. 2018). Moreover, using wavelet analysis, Duckenfield et al. (2019) found a coronal loop oscillation event with a second overtone but without an obvious first overtone. The detection of these high-order overtones has become an effective means to analyse the dynamics of coronal loops and to derive their physical parameters.
From a theoretical perspective, Andries et al. (2005), Goossens et al. (2006), and Van Doorsselaere et al. (2007) worked out the relationship between the period ratio $P_1/P_2$ and the density stratification, where $P_1$ and $P_2$ correspond to the periods of the fundamental and first overtone modes, respectively. Dymova & Ruderman (2005) derived the governing equation for the kink mode oscillation of a magnetic flux tube by linearising the magnetohydrodynamic (MHD) equations.
Their work provides a valuable basis for investigating the eigenfunction of the kink oscillations. For instance, Erdélyi & Verth (2007) derived three analytic solutions of the governing equations, with assumptions of a step function, a linear function, and a hyperbolic cosine density profile, in conjunction with constant magnetic field, respectively. These authors also obtained a numerical solution to the case with an exponentially stratified density profile. Additionally, Scott & Ruderman (2012) considered the effect of a non-planar loop, and Ruderman et al. (2017) discussed the influence of cross-section expansion. Many of the above aspects were discussed by Andries et al. (2009).
While oscillation-based solar magneto-seismology (SMS) can be applied to estimate the local magnetic field of a coronal loop, one can also use a magnetic field model to obtain the three-dimensional (3D) magnetic field in the corona, including the local magnetic field of a coronal loop. These magnetic field models include potential field, linear force-free field, and non-linear force-free field (NLFFF) models. For example, in the Cartesian coordinate system, a linear force-free field equation can be solved with the Green's function method and a Fourier transform method (Schmidt 1964; Chiu & Hilton 1977; Seehafer 1978). For a potential field in the spherical coordinate system, the governing equation is reduced to the Laplace equation, $\nabla^2 \Phi = 0$, with $\mathbf{B} = -\nabla\Phi$, where the spherical harmonic transformation technique can be used (Schatten et al. 1969; Newkirk & Altschuler 1969; Schrijver & De Rosa 2003). The results of Guo et al. (2015) showed that the magnetic field of a coronal loop obtained with a potential field model is consistent with that derived with the oscillation-based SMS. In addition, the 3D morphology can be reconstructed from the extrapolated magnetic field, or can alternatively be obtained using stereoscopic observations and the triangulation method.
Coronal loop oscillations are described by a number of coupled physical and geometric parameters. In previous investigations, the density and magnetic field, which dominate the dynamics of a coronal loop, were the research focus. In the present paper, the loop geometry is taken into account, in addition to the density and magnetic field, using a comprehensive approach. Specifically, oscillation periods are obtained from the oscillation evolution time-distance diagram; the density distribution is detected using a DEM analysis; and the geometry and magnetic field are reconstructed by magnetic field extrapolation. The obtained physical and geometric parameters are substituted into the governing equation to determine the computed periods. We show that, in the case of linearised MHD equations, a coronal loop oscillation can be treated as a single string oscillation. Also, we consider three typical configurations for the coronal loop geometry, as follows: (1) Under the assumption of a linear loop geometry, an ingenious variable substitution is used to obtain an analytical solution; (2) with approximation of a semi-circular loop geometry, the shooting method is implemented to find a numerical solution; and (3) regarding the height distribution of the extrapolated magnetic field as the loop height, a numerical solution with the shooting method can be derived as well.
Eventually, the computed periods are compared with the observed ones to investigate the impact of the different loop geometries on the nature of the oscillation. The ultimate aim is to explore whether the computed periods derived from the actual physical and geometrical parameters are consistent with the observed ones. This work indeed takes advantage of the forward modelling research method instead of the routine inversion method, which aims to obtain the average magnetic field from the oscillation period and density. We do not consider an inversion because we wish to investigate the distribution of the magnetic field and not simply its average strength, but it is difficult to invert the magnetic field distribution using only the fundamental tone.
The paper is organised as follows: The oscillation, density, and magnetic fields are diagnosed in Sect. 2. The string model, corresponding to the governing equation, an analytical solution, and numerical solutions to the governing equation, is introduced in Sect. 3. A discussion and conclusions are provided in Sect. 4.
Analysis of oscillation parameters
Explosive events in the solar atmosphere may disturb coronal loops and trigger coronal loop kink oscillations. The kink oscillation can be used to estimate the Alfvén speed and then to determine the average strength of the magnetic field (Tomczyk et al. 2007; Erdélyi & Taroyan 2008; Verwichte et al. 2013). An efficient approach to studying coronal loop oscillations is to plot a time-distance diagram of the coronal loop evolution. By fitting an oscillation profile, a series of oscillation parameters can be obtained, including the period (Guo et al. 2015; Pascoe et al. 2016; Li et al. 2017; Duckenfield et al. 2018, 2019). In the present paper, we also take advantage of oscillation profile fitting to determine the oscillation parameters, where the fitting formula is

$y(t) = A_{00} + A_{01}(t - t_0) + A_1 \cos\left[\frac{2\pi (t - t_0)}{P_1} + \phi_{01}\right] \exp\left(-\frac{t - t_0}{\tau_1}\right).$ (1)

Here, $A_{00}$, $A_{01}$, $A_1$, $t_0$, $\tau_1$, and $\phi_{01}$ represent the displacement, linear drift velocity, oscillation amplitude, reference time, damping timescale, and initial phase, respectively, and $P_1$ is the fundamental period. We can also use a combined damped cosine model to fit the profile (Guo et al. 2015; Pascoe et al. 2016; Li et al. 2017; Duckenfield et al. 2018) in order to obtain additional parameters such as the first overtone period (Andries et al. 2009; Morton & Erdélyi 2010). Although Duckenfield et al. (2019) detected the second overtone using a wavelet analysis, it is generally very difficult to detect higher order harmonic signals because an extremely low level of noise is required. For convenience, we plan to verify the consistency of the fundamental mode between the observed and calculated results. Therefore, it is sufficient to use the damped cosine model (Eq. (1)) to fit the profile (see, e.g., Morton & Erdélyi 2010). We select several slices perpendicular to the loop axis using the tools provided by Solar SoftWare (SSW) and choose the oscillation profiles along the slices whose time-distance evolution can be identified easily from the background. For each time-distance diagram, we visually determine the oscillation profile of the coronal loop. By repeating the sampling ten times, we fit Eq. (1) to the mean data, and the statistical standard deviations are used to represent the error bars. The final fitting results are shown in Figs. 1g-i and the oscillation parameters are listed in Table 1.
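As an illustration of this fitting step, the sketch below fits Eq. (1) to a synthetic profile with scipy.optimize.curve_fit; the time base, amplitudes, and noise are invented, and the reference time t0 is held fixed rather than fitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_cosine(t, A00, A01, A1, P1, tau1, phi01, t0=0.0):
    """Eq. (1): linear background plus an exponentially damped cosine."""
    return (A00 + A01 * (t - t0)
            + A1 * np.cos(2 * np.pi * (t - t0) / P1 + phi01)
            * np.exp(-(t - t0) / tau1))

t = np.linspace(0, 1800, 300)                          # time [s]
y = damped_cosine(t, 2.0, 1e-3, 1.5, 383.0, 900.0, 0.3)
y += np.random.default_rng(1).normal(0, 0.1, t.size)   # mock noise

p0 = [2.0, 0.0, 1.0, 380.0, 800.0, 0.0]   # guesses; t0 stays at its default
popt, pcov = curve_fit(damped_cosine, t, y, p0=p0)
print(f"P1 = {popt[3]:.1f} +/- {np.sqrt(pcov[3, 3]):.1f} s")
```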
Here, Loop #1 represents the loop oscillation event that occurred at 19:05-19:35 UT on 2010 October 16 and was triggered by a GOES M2.9-class flare (Aschwanden & Schrijver 2011; Kumar et al. 2013); Loop #2 represents the loop oscillation event that occurred at 22:20-22:35 UT on 2011 September 6 and was triggered by a GOES X2.1-class flare (Verwichte et al. 2013); and Loop #3 represents the loop oscillation event that occurred at 1:10-1:50 UT on 2012 March 7 and was triggered by a GOES X5.4-class flare. The 171 Å images of these loops observed by the Atmospheric Imaging Assembly (AIA) on board SDO are shown in Figs. 1a-c. We analyse the base-difference movies, in which the first frame is subtracted from the other frames, and find that all the loops present characteristic transversal oscillations, whose oscillation profiles are shown in Figs. 1d-f. Regarding the parameter errors listed in Table 1, the Monte Carlo method was used to randomly sow points within the error range of each data point, and statistical standard deviations over 100 fitting runs were used to represent the error bars. It should be noted here that although two of the three chosen cases have been studied by other colleagues, our methods for measuring the magnetic field are not exactly the same, and the geometry of the coronal loop is taken into account in our work.
On the other hand, we also have a new scientific target, which is to measure the physical parameters of the coronal loops and then use a forward modelling method to solve for the oscillation period. Figure 1d shows that Loop #1 is a decayless oscillation ($\tau_1 = \infty$), which was explained by Kumar et al. (2013) as being due to successive impacts of a fast-mode wave and a slower 'EIT wave'. Considering the uncertainties, the fitted period of 382.7 ± 2.6 s is consistent with the result of 373 ± 30 s derived by Aschwanden & Schrijver (2011). Also, for Loop #2, the fitted period of 148.9 ± 1.3 s is consistent with the 150 ± 5 s obtained by Verwichte et al. (2013) within the uncertainty range. In particular, to the best of our knowledge, the oscillation parameters of Loop #3 have not yet been analysed.
These oscillation parameters, especially the oscillation period, are sufficient to decipher average physical quantities such as the magnetic field strength of the loop (Roberts et al. 1984; Andries et al. 2009; Morton et al. 2011). Generally, the loop length can be obtained easily, and the density can be measured using the DEM analysis (Sect. 2.2), although the density is often simplified and considered to be constant along a coronal loop. The average magnetic field strength can then be derived with the assumption $\varepsilon = n_e/n_i = 0.1$. Furthermore, if the periods of the high-order overtones are measured, we can obtain further information in addition to the average magnetic field strength, such as the density scale height (Andries et al. 2005; Van Doorsselaere et al. 2007), which describes the variation of the density rather than an average quantity.
Density diagnostics using DEM analysis
DEM analysis is used for temperature and density diagnostics (Aschwanden et al. 2013). Several algorithms have been proposed and their effectiveness has been validated (Weber et al. 2004; Hannah & Kontar 2012; Aschwanden et al. 2013; Plowman et al. 2013; Cheung et al. 2015; Su et al. 2018). Here, we adopt the Oriented Coronal CUrved Loop Tracing (OCCULT) code and the single-Gaussian forward fitting method proposed by Aschwanden et al. (2013) to detect the loop segment and then perform the DEM analysis for temperature and density diagnostics. We fit the intensity profiles along the slices in all six extreme-ultraviolet (EUV) passbands from SDO/AIA using a Gaussian function plus a linear background profile to obtain the background-subtracted EUV fluxes, $F_\lambda^{Loop}$. With the single-Gaussian DEM fitting, we then derive the peak emission measure, $EM_i$, the peak temperature, $T_i$, and the Gaussian temperature width, $\sigma_T$. Accordingly, the electron density, $n_i$, is computed as follows (Aschwanden & Schrijver 2011; Aschwanden et al. 2013; Verwichte et al. 2013; Guo et al. 2015; Dai et al. 2021):

$n_i = \sqrt{\frac{EM_i}{w}}.$ (2)

Here, the index i denotes the value measured inside the coronal loop, $w = 2\sqrt{2\ln 2}\,\sigma_w$ is the loop width, and $\sigma_w$ is the Gaussian loop width fitted along the cross-sectional profiles.
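A minimal numerical sketch of Eq. (2) follows; the emission measure and Gaussian width are illustrative values chosen only so that the result lands in the observed ~10^9 cm^-3 range.

```python
import numpy as np

EM = 2.0e26          # peak emission measure [cm^-5] (illustrative)
sigma_w = 1.0e8      # Gaussian loop width [cm] (~1 Mm, illustrative)
w = 2 * np.sqrt(2 * np.log(2)) * sigma_w   # FWHM loop width [cm]
n_i = np.sqrt(EM / w)                      # Eq. (2)
print(f"n_i = {n_i:.2e} cm^-3")
```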
The results of the DEM analysis are shown in Fig. 2. Figures 2d-l depict the distributions of the temperature $T_i$, density $n_i$, and loop width w along the three oscillating loops. It can be seen that the maximum amplitudes of the kink oscillations (Figs. 1g-i) are comparable to the widths of the loops shown in Figs. 2j-l, which reflects the rationality of the small-amplitude approximation and the linearisation of the MHD equations. The goodness of the fitting is shown in Figs. 2m-o, which indicates that the fitting results are acceptable. It is worth noting that the OCCULT method (Aschwanden et al. 2013) cannot identify the loop as a whole with the complicated EUV backgrounds. Therefore, we sample the loop coordinates interactively with an Interactive Data Language (IDL) code before using the SSW program aia_loop_autodem.pro to obtain the final results.
According to the DEM analysis results shown in Fig. 2, we find that the average temperatures of Loops #1, #2, and #3 are 1.07, 0.89, and 1.66 MK, respectively. These are typical coronal temperatures (Aschwanden et al. 2013). The temperature distributions of the three studied loops are nearly isothermal, as shown in Figs. 2d-f. Besides, the average electron densities of the three loops are $n_i$ = 0.43 × 10^9, 0.75 × 10^9, and 1.12 × 10^9 cm^-3, respectively. Although the density distribution profile is noisy due to line-of-sight (LOS) interference, the trend that the footpoints have higher density and the apex has lower density can be seen, which indicates decreasing density with altitude. However, the density distribution in the middle of Fig. 2i is abnormally high, indicating the possible existence of background threads. In Sect. 2.3, the density variation with height is fitted by a function that decays exponentially with loop height:

$n_i(s) = n_f \exp\left[-\frac{h(s)}{H}\right],$ (3)

where H is the density scale height, $n_f$ is the density at the footpoint, and h(s) is the height along the loop, which represents the loop geometry. The loop length and the height variation h(s) along the loop are obtained by 3D reconstruction of the coronal loops with magnetic field extrapolation in Sect. 2.3.
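The fit of Eq. (3) can be sketched as below, assuming (height, density) samples along the loop are already in hand; the profile and noise are synthetic stand-ins for the DEM output.

```python
import numpy as np
from scipy.optimize import curve_fit

def density_profile(h, n_f, H):
    return n_f * np.exp(-h / H)    # Eq. (3)

h = np.linspace(0, 80, 40)                                # heights [Mm]
n = density_profile(h, 1.7e9, 50.0)                       # mock truth
n *= np.random.default_rng(2).lognormal(0, 0.05, h.size)  # mock LOS noise

(n_f, H), _ = curve_fit(density_profile, h, n, p0=[1e9, 40.0])
print(f"n_f = {n_f:.2e} cm^-3, H = {H:.1f} Mm")
```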
3D magnetic field reconstruction using magnetic field extrapolation
In this section, we show how we processed the HMI data, with the 180° ambiguity having been removed in the HMI pipeline. In addition to the pipeline processing, we corrected the projection effect by a rotation matrix $R(P, B, B_0, L, L_0)$ (Gary & Hagyard 1990; Guo et al. 2017), which corrects both the vector directions and the geometry. The boundary conditions for the potential magnetic field extrapolation were then prepared by a preprocessing program, which makes the boundary conditions force-free and torque-free, and we extracted the radial magnetic field from the vector magnetic field. Finally, we adopt the potential magnetic field extrapolation algorithm in the Message Passing Interface Adaptive Mesh Refinement Versatile Advection Code (MPI-AMRVAC; Keppens et al. 2003; Porth et al. 2014; Xia et al. 2018). The apex heights of the three loops, $h_a$, are found by seeking the maximum of the modified height. For the definition of the modified height, we randomly choose three points in each loop to determine the loop plane and compute its normal vector, that is, the direction cosines $\alpha_i$, $\beta_i$, $\gamma_i$ (the subscript i denotes the index of each loop). We then apply a rotation matrix to convert it to the vertical direction. The dashed lines in Figs. 3d-f represent the average magnetic field calculated by

$B = \frac{1}{L}\int_0^L B(s)\, \mathrm{d}s.$ (4)

In addition, using the results of the DEM analysis, the density scale height and the footpoint density are fitted with Eq.
(3) and shown in Figs. 3g-i. Accordingly, an average magnetic field is estimated using the solar magneto-seismological method, which is given by (Roberts et al. 1984)

$B_{kink} = \frac{2L}{P_{kink}} \sqrt{\frac{\mu_0\, \mu m_p n_i (1+\varepsilon)}{2}},$ (5)

where we adopt $n_e/n_i = 0.1$ as an empirical density ratio between the external and internal plasma (Nakariakov et al. 1999; Nakariakov & Ofman 2001), $P_{kink}$ is $P_1$ as shown in Table 1, $m_p = 1.67 \times 10^{-24}$ g is the proton mass, and µ = 1.2 is the average molecular weight, taking the coronal abundances into consideration. We list the results of $B_{kink}$ and B in Table 1 and find that the magnetic field strengths derived by SMS and by magnetic field extrapolation are consistent within the range of errors. This reflects the rationality of these two independent approaches to computing the magnetic field. However, other studies reveal a coronal magnetic field exceeding the results of traditional SMS by one or two orders of magnitude, which does not match ours. For example, Vourlidas et al. (2006) and Brosius & White (2006) found a coronal magnetic field of several kilogauss by studying the polarisation of radio emission. Other authors have detected a coronal magnetic field of a few hundred to thousands of Gauss using spectropolarimetry (Schad et al. 2016; Kuridze et al. 2019) and microwave spectral fitting (Chen et al. 2020a,b). The magnetic field extrapolation matches the coronal loops well, as displayed in Figs. 3a-c, which show that the extrapolated geometric structure and the observed results (in the 171 Å waveband) coincide approximately, except for Loop #1 in Fig. 3a. One reason for the misalignment in this case is that this loop is not in an active region and its magnetic field is much weaker than that of the other two cases, as listed in Table 1. Therefore, the precise position of its footpoint is difficult to locate in the magnetogram, which may cause primary errors in our measurement of L, h(s), and B(s). In addition, our solar magneto-seismological result is similar to that of Aschwanden & Schrijver (2011), while our magnetic field extrapolation method is more elaborate because we corrected the projection effect due to the solar spherical surface and located the footpoint with stereoscopic information, both of which were not considered in that study. As shown in Fig. 3d, the apex magnetic field $B_{apex} \approx 3$ G seems more acceptable than the 6 G of Aschwanden & Schrijver (2011). This is because (1) we obtained $B = 4.3 \pm 0.1$ G and $B_{kink} = 3.9 \pm 0.4$ G, which are close to each other, whereas in Aschwanden & Schrijver (2011), B = 11 G is much larger than $B_{kink} = 4.0 \pm 0.7$ G; and (2) it is reasonable that we had $B_{apex} = 2.8 \pm 1.03$ G < $B_{kink} = 3.9 \pm 0.4$ G, while it is contradictory that $B_{apex} = 6$ G > $B_{kink} = 4.0 \pm 0.7$ G in Aschwanden & Schrijver (2011).
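To make the two independent estimates concrete, the following sketch evaluates the loop-averaged field of Eq. (4) by trapezoidal integration of a mock B(s) profile and the seismological estimate of Eq. (5) in SI units; all input numbers are placeholders of the same order as the values in Table 1.

```python
import numpy as np

# Eq. (4): loop-averaged field from a sampled (mock) footpoint-dominated B(s).
s = np.linspace(0.0, 180.0, 400)                             # arc length [Mm]
B_s = 2.8 + 7.0 * np.exp(-np.minimum(s, s[-1] - s) / 40.0)   # mock B(s) [G]
B_av = np.trapz(B_s, s) / (s[-1] - s[0])

# Eq. (5): seismological estimate from period, length, and internal density.
mu0, m_p, mu, eps = 4e-7 * np.pi, 1.6726e-27, 1.2, 0.1
L, P_kink, n_i = 1.0e8, 383.0, 0.43e15        # [m], [s], [m^-3] (placeholders)
rho_i = mu * m_p * n_i
B_kink = (2 * L / P_kink) * np.sqrt(mu0 * rho_i * (1 + eps) / 2)   # [T]

print(f"B_av   = {B_av:.1f} G")
print(f"B_kink = {B_kink * 1e4:.1f} G")   # tesla -> gauss, ~4 G here
```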
Subsequently, we reconstructed the geometry (shape of the loop axis and height) of the coronal loop by extrapolating a potential-field model, as shown in Figs. 3d-f. As the projection correction of the magnetic field involves both vector direction correction and geometric correction, the shape of the coronal loop reconstructed here is not affected by projection effects. For the loop geometry, we use the interpolation function of the height distribution along the loop, h(s), instead of the semi-circular shape $h(s) = (L/\pi)\sin(\pi s/L)$. It is worth noting that the interpolation function, h(s), is an irregular profile but is closer to the real morphology of the coronal loop. In Fig. 3e, the profile of Loop #2 deviates from a semi-circle, and therefore the traditional model with a semi-circular approximation would not work well in computing the oscillation periods. In contrast, our model performs well, as discussed in Sect. 3. For the inclination, Verth et al. (2008) mentioned that neglecting the inclination leads to a small overestimation factor of 1-2. In our cases, the inclinations of the three loops are different. Nevertheless, we assume them to be vertical with the aforementioned operation, which is equivalent to introducing a modified density scale height to remove the effect of inclination. We also assume a planar coronal loop, which is feasible in most cases. In our research, such an approximation is reasonable, except in the case of Loop #3. As revealed in Fig. 3c, Loop #3 shows an obvious helical pitch. However, this effect is negligible (Scott & Ruderman 2012), as can be seen in our later results.
Figures 3g-i show the density profiles fitted by Eq. (3). The density scale heights H of these three coronal loops are listed in Table 1. The apex heights of the loops, $h_a$, as shown in Table 1, are derived by taking the maximum of the height profiles (Figs. 3d-f), which is approximately equal to L/π. The density stratification is characterised by $h_a/H$, which is 0.54 ± 0.09 for Loop #1 and 1.09 ± 0.04 for Loop #3. For Loop #2, we find $h_a/H = 1.57 \pm 0.06$, which differs from the result of Verwichte et al. (2013), who found $h_a/H = 0.985$ for the same case. The apex height of Loop #2 determined with the STEREO-A/EUVI 171 Å images by Verwichte et al. (2013) is almost the same as the value reconstructed by the potential field model, demonstrating the validity of the geometry information obtained in our 3D magnetic field model. Also, the magnetic field strength was derived from the potential field source surface (PFSS) model in Verwichte et al. (2013), which is similar to our results from the potential field model. The discrepancy in the density scale height between our work and this latter study is attributed to the fitting of the footpoint density: we use a footpoint density of $n_f = 17.4 \times 10^8$ cm$^{-3}$, whereas Verwichte et al. (2013) use $n_f = 7 \times 10^8$ cm$^{-3}$. Figures 3d-i show that the density distribution and the magnetic field strength distribution have a similar decreasing tendency. The density decrease is due to gravitational stratification, $n_i(h) \propto e^{-h/H}$, in hydrostatic equilibrium. The magnetic field attenuation is due to the dipole potential field, $B(h) = B_0(1 + h/h_d)^{-3}$, which decays with height (Erdélyi & Verth 2007). However, Schad et al. (2016) found a case where $B_0 = 29380$ G with spectropolarimetric inversions, inferring a loop magnetic field with a strength far beyond the dipole field approximation.
The force-free field models have relatively simple solutions, and their magnetic tension and pressure forces balance each other exactly. However, they are too simple to describe real observations with complex magnetic structures, especially the more limited potential field models. More importantly, the boundary and initial conditions are not accurate enough for observations, and more physics should be included in dynamic cases. Despite these disadvantages, the potential field model is chosen here because it agrees better with the observations than the NLFFF model and is more affordable than dynamic models.
String model
In order to derive a formula relating the oscillation period to the coronal loop parameters, we use the analogy of a string to represent the oscillating coronal loop instead of solving the full MHD equations. Figure 4 shows the physical approximation of the string model: an inhomogeneous string that deviates from its equilibrium position after being disturbed. Considering that the coronal loop is actually a magnetic flux tube, if a plasma element $P_0$ deviates from its equilibrium position, it will be subjected to a restoring force due to the elastic nature of the magnetic field line (Fig. 4). Because of the condition of low plasma-β (the ratio of the gas pressure to the magnetic pressure), we only take the magnetic pressure into account and ignore the thermal pressure. Accordingly, the force acting on $P_0$ in the magnetic field of the coronal loop can be expressed as

$\mathbf{F} = \mathbf{j}\times\mathbf{B} = -\nabla\left(\frac{B^2}{2\mu_0}\right) + \frac{(\mathbf{B}\cdot\nabla)\mathbf{B}}{\mu_0},$ (6)

where $\mu_0$ is the permeability of vacuum, $\mathbf{j}$ is the current density, and $\mathbf{B}$ is the magnetic induction intensity. The first term on the right-hand side of Eq. (6) represents the magnetic pressure gradient. The second term represents the magnetic tension force. It is the magnetic tension force that makes a magnetic field line behave like a string. We decompose the magnetic tension force term in the orthogonal natural coordinate system, which is an orthogonal curvilinear coordinate system with Lamé coefficients of 1:

$\frac{(\mathbf{B}\cdot\nabla)\mathbf{B}}{\mu_0} = \frac{\partial}{\partial s}\left(\frac{B^2}{2\mu_0}\right)\hat{b} + \frac{B^2}{\mu_0}\frac{\mathrm{d}\hat{b}}{\mathrm{d}s},$ (7)

where $\hat{b}$ and $\hat{r}$ are the unit vector along the magnetic field and the normal unit vector, respectively. In addition, we use the relation $\frac{\mathrm{d}\hat{b}}{\mathrm{d}s} = \frac{\mathrm{d}\hat{b}}{\mathrm{d}\alpha}\frac{\mathrm{d}\alpha}{\mathrm{d}s}$ and the formula of analytic geometry $\frac{\mathrm{d}s}{\mathrm{d}\alpha} = R_c$, where $R_c$ is the radius of curvature of the magnetic field line. Eventually, we find that the force exerted on the plasma element $P_0$ is

$\mathbf{F} = \frac{B^2}{\mu_0 R_c}\hat{r} + \frac{\partial}{\partial s}\left(\frac{B^2}{2\mu_0}\right)\hat{b} - \nabla\left(\frac{B^2}{2\mu_0}\right).$ (8)

The second term on the right-hand side of Eq. (8) exactly cancels out the effect of the magnetic pressure gradient in the direction of the magnetic field. According to the equilibrium conditions, the magnetic pressure in other directions should also be balanced by the external pressure. The ultimate restoring force, accordingly, is the first term on the right-hand side of Eq. (8), which points to the centre of curvature and has the effect of pulling the plasma back to its equilibrium position. We consider a plasma element $P_0$ from s to s + ds; the force along the magnetic field line is in balance, and so the restoring force is normal to the field line. Adding an external magnetic pressure gradient, the total restoring force becomes

$F_r = \frac{B_0^2}{\mu_0 R_c} - \frac{\partial P}{\partial r},$ (9)

where we assume $\mathbf{B} = (B_0 + b)\hat{b}$ ($b \ll B_0$) and define the pressure perturbation $P \equiv B^2/2\mu_0 - B_0^2/2\mu_0 \sim B_0 b/\mu_0$. Therefore, the momentum equation of the plasma element is

$\rho(s)\frac{\partial^2\psi}{\partial t^2} = \frac{B_0^2}{\mu_0 R_c} - \frac{\partial P}{\partial r},$ (10)

where ψ is the displacement from the equilibrium position, ρ(s) is the distribution of density along the coronal loop, and $R_c$ is the radius of curvature given by

$\frac{1}{R_c} = \frac{\psi_{ss}}{\left(1+\psi_s^2\right)^{3/2}} \approx \psi_{ss}.$ (11)

Here $\psi_s = \partial\psi/\partial s$ and $\psi_{ss} = \partial^2\psi/\partial s^2$, with the approximate relations $\cos\alpha \approx 1$ and $\psi_s = \tan\alpha \approx \alpha \ll 1$. We now arrive at the equation of coronal loop oscillations:

$\rho(s)\frac{\partial^2\psi}{\partial t^2} = \frac{B_0^2}{\mu_0}\frac{\partial^2\psi}{\partial s^2} - \frac{\partial P}{\partial r}.$ (12)

Alternatively, using the velocity u = ∂ψ/∂t instead of ψ, we have

$\frac{\partial^2 u}{\partial t^2} = v_A^2\frac{\partial^2 u}{\partial s^2} - \frac{1}{\rho}\frac{\partial^2 P}{\partial t\,\partial r},$ (13)

where $v_A = B(\mu_0\rho)^{-1/2}$ is the Alfvén speed. According to the fact that the magnetic tension disturbance propagates at the Alfvén speed, P satisfies the following wave equation:

$\frac{\partial^2 P}{\partial t^2} = v_A^2\,\nabla^2 P,$ (14)

where $\nabla^2$ is the Laplace operator. With Fourier analysis and the tube boundary condition, the governing equation can be obtained by combining Eqs. (13) and (14):

$\frac{\partial^2 u}{\partial t^2} = c_k^2(s)\,\frac{\partial^2 u}{\partial s^2},$ (15)

where $c_k^2 = 2B^2[\mu_0(\rho_i + \rho_e)]^{-1}$ is the kink mode speed.
Equation (15) is the governing equation of coronal loop oscillations. Here we use a simplified string model to derive it instead of solving the MHD equations, which helps us to build up a physical picture for understanding coronal loop oscillations. Now that we have such a specific physical picture, we can discuss the damping mechanism and other issues in later follow-up works.
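As a quick sanity check on Eq. (15): for a uniform loop the eigenfunctions are sin(nπs/L) and the fundamental period reduces to $P_1 = 2L/c_k$. The snippet below evaluates this limit with illustrative values of B, $n_i$, and L.

```python
import numpy as np

mu0 = 4e-7 * np.pi                 # vacuum permeability [H/m]
B = 4.0e-4                         # field strength [T] (~4 G, illustrative)
n_i = 0.5e15                       # internal electron density [m^-3]
rho_i = 1.2 * 1.6726e-27 * n_i     # internal mass density [kg/m^3]
rho_e = 0.1 * rho_i                # external density, eps = 0.1
c_k = np.sqrt(2 * B**2 / (mu0 * (rho_i + rho_e)))   # kink speed, Eq. (15)
L = 1.0e8                          # loop length [m] (illustrative)
print(f"c_k = {c_k / 1e3:.0f} km/s, P1 = 2L/c_k = {2 * L / c_k:.0f} s")
```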
Analytical solution under a linear loop geometry
Under a number of approximations and assumptions, an analytical solution to the governing equation (Eq. (15)) can be found. Erdélyi & Verth (2007) derived three sets of analytical solutions for a step-function density profile, a linear density profile, and a hyperbolic cosine density profile, respectively. Here, we derive another meaningful solution with an exponential density profile, which corresponds to the case where the coronal loop is approximated as two segments of straight lines, shown as the dash-dotted lines in Fig. 5. Compared with other density profiles, an exponential profile is the simplest case with physical meaning, and so it is also of great value for our discussion. For the linear loop geometry without magnetic field variation, the geometric parameters satisfy

$h(s) = h_a\left(1 - \frac{2|s|}{L}\right),$ (16)

where $h_a$ is the apex height of the coronal loop. Let us take the midpoint of the loop as the origin, that is, s = 0, so that s runs over $-L/2 \le s \le L/2$. By substituting Eq. (16) into Eq. (3), the density distribution can be obtained:

$n_i(s) = n_a\, e^{|s|/H_L}.$ (17)

Here $n_a = n_f\exp(-h_a/H)$ is the density at the apex. For simplicity, we define a new scale height $H_L = HL/(2h_a)$. As expected, a loop with a linear loop geometry has an exponential density distribution. Substituting the density profile into the governing Eq. (15) and considering the boundary condition u = 0 at s = ±L/2, we have

$\frac{\mathrm{d}^2 u}{\mathrm{d}s^2} + \omega^2\lambda\, e^{|s|/H_L}\, u = 0, \qquad \lambda = \frac{\mu_0\,\mu m_p n_a(1+\varepsilon)}{2B^2},$ (18)

where the density ratio $\varepsilon = n_e/n_i$ is a constant. Considering the symmetry or antisymmetry, we consider the right half-segment of the coronal loop, that is, s > 0. Here, we introduce a new variable $\eta = 2\omega H_L\sqrt{\lambda e^{s/H_L}}$; then Eq. (18) is reduced to

$\eta^2\frac{\mathrm{d}^2u}{\mathrm{d}\eta^2} + \eta\frac{\mathrm{d}u}{\mathrm{d}\eta} + \eta^2 u = 0.$ (19)

This is the Bessel equation of order zero, and therefore its solution is

$u_n(s) = C_n J_0\left(2\omega_n H_L\sqrt{\lambda e^{s/H_L}}\right), \quad (s > 0;\ n = 1, 2, 3, \ldots).$ (20)
Considering the boundary condition u(±L/2) = 0, we derive the eigenfrequencies

$\omega_n = \frac{\mu_n^{(0)}}{2H_L\sqrt{\lambda}}\, e^{-L/(4H_L)},$ (21)

where $\mu_n^{(0)}$ represents the nth zero of the Bessel function of order zero. On the other hand, the solution needs to be physical, which requires the continuity of the eigenfunction and its derivative. There are two situations: (1) in the case of odd parity, we supplement the boundary condition u(0) = 0; (2) in the case of even parity, we have u'(0) = 0. In particular, the supplementary boundary conditions are as follows:

$J_0\left(2\omega H_L\sqrt{\lambda}\right) = 0$ (odd parity), $\quad J_1\left(2\omega H_L\sqrt{\lambda}\right) = 0$ (even parity), (22)

where $J_1$ is the Bessel function of order one. The eigenvalues satisfying the supplementary boundary conditions, Eq. (22), and the intrinsic boundary condition, u(±L/2) = 0, are a subset of Eq. (21), that is,

$\omega_{n_s} = \frac{\mu_{n_s}^{(0)}}{2H_L\sqrt{\lambda}}\, e^{-L/(4H_L)},$ (23)

where $n_s$ is the integer which meets both Eq. (22) and the intrinsic boundary condition. The overtone period concerned is then

$P_n = \frac{2\pi}{\omega_{n_s}} = \frac{2\pi H L}{\mu_{n_s}^{(0)}\, h_a\, v_{A,f}}\sqrt{\frac{1+\varepsilon}{2}},$ (24)

where $v_{A,f} = B(\mu_0\rho_f)^{-1/2}$ is the Alfvén speed at the footpoint of the coronal loop. In general, it is difficult to satisfy both Eq. (22) and u(−L/2) = u(L/2) = 0 simultaneously. This means that there is usually no physical solution satisfying the intrinsic boundary conditions. Despite this, we can use the solution that best meets the odd or even parity condition as the approximation of the eigenfunctions. The supplementary boundary condition serves as a filter. This approximation means that different L and H will pick out different $\mu_{n_s}^{(0)}$. For instance, Fig. 6 shows the numerical and analytical solutions for the case $\sqrt{\lambda}L = 1$ s, $H_L = L$. Ignoring the discontinuity of the analytical solution and its first derivative, the eigenvalues and their profiles are close to the numerical ones. The period ratios $P_1/P_2 = \omega_2/\omega_1 = 1.72 < 2$ for the analytical solution and $P_1/P_2 = 1.92 < 2$ for the numerical solution both show that density stratification results in a period ratio of less than 2, implying that the analytical solution is reasonable to a certain extent.
Equation (24) offers the overtone period of a coronal loop with linear loop geometry and uniform magnetic field, and it shows the following properties qualitatively. First, P_n ∝ (L/v_A,f) √((1 + ε)/2), which corresponds to Eq. (5), i.e., P_kink = (2L/v_A) √((1 + ε)/2), derived under the approximation of a uniform density distribution (Roberts et al. 1984). In addition, Eq. (24) also shows the influence of the density variation, namely P_n ∝ H/h_a, which measures the density stratification of the coronal loop (a loop with a semi-circular profile has the density stratification πH/L). It is reasonable that, for two coronal loops in which the magnetic field, the shape of the loop axes, and the footpoint density are the same and only the density scale height differs, the loop with the larger density scale height has the longer period, because weaker stratification leaves more mass, and hence more inertia, at large heights. If Eq. (24) is required to give the same result as Eq. (5) in the example of Fig. 6, then H/h_a = 2.75, which corresponds to weak stratification and a nearly uniform density distribution. However, the analytic solution is unreasonable in some sense, probably because the simplified model involves many assumptions. One possibly unreasonable result is that the period ratio P_1/P_2 is discrete, which contradicts previous works where P_1/P_2 was found to be a continuous function of the density stratification L/πH (Andries et al. 2005; Goossens et al. 2006). Nevertheless, as the results given by Eqs. (24) and (5) differ only by a factor related to the density stratification, πH/(µ_{n_s}^(0) h_a), Eq. (24) is valuable when we want to quickly estimate the period of the fundamental tone with the density stratification taken into account.
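Because Eq. (24) is intended for quick estimates, a small helper makes the correction factor concrete. This is a minimal sketch assuming the reconstructed Eq. (24); the function name and all input values are illustrative placeholders, not the paper's measurements.

```python
# Quick period estimate from Eq. (24): the uniform-density kink period of
# Eq. (5), multiplied by the stratification correction pi*H/(mu_ns*h_a).
import math

def period_eq24(L_Mm, vA_f_km_s, eps, H_Mm, h_a_Mm, mu_ns=2.404826):
    """Period in seconds; mu_ns defaults to the first zero of J0."""
    L_km = L_Mm * 1e3
    p_uniform = (2.0 * L_km / vA_f_km_s) * math.sqrt((1.0 + eps) / 2.0)  # Eq. (5)
    correction = math.pi * H_Mm / (mu_ns * h_a_Mm)                        # stratification
    return correction * p_uniform

# Hypothetical example numbers (only L matches a value quoted later in the text):
print(period_eq24(L_Mm=96.1, vA_f_km_s=1000.0, eps=0.1, H_Mm=50.0, h_a_Mm=30.0))
```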
Calculating the fundamental period with the shooting method
For actual cases, the coronal loop geometry deviates from the linear geometry assumed by the analytical solution above. If the real path is considered, the density distribution (Eq. (3)) is so complicated that an analytical solution is unattainable, and we need to adopt a numerical method to calculate the period. In this section we combine the density, height, and magnetic field information to do so. For convenience in the numerical solution, we introduce the characteristic length L, time L/v_A,f, and magnetic field strength B_av in order to define the dimensionless quantities y = u(s)/v_A,f, x = s/L, b = B/B_av, and τ = (2π/ω)/(L/v_A,f). The governing equation is then non-dimensionalised into the boundary value problem of Eq. (25), where h(x) is the height profile of the coronal loop. Let us take the left footpoint of the loop as the origin, x = 0, so that x ranges from x = 0 to x = 1. If we use a semi-circular profile to approximate a coronal loop, h(x) is expressed as h(x) = sin(πx)/π. More precisely, we can describe the real loop geometry using the interpolation function h(x) of the height distribution of the extrapolated magnetic field. The normalized loop geometries are shown in Fig. 5. We can see that in the three coronal loop oscillation events, the actual geometries of the loops do not deviate very much from the semi-circular shape.
Here we use the shooting method to solve the boundary value problem in Eq. (25). In detail, we use Wolfram Mathematica to build an interactive window for adjusting the period parameter and finding an approximate period as the initial value of the shooting method. Then, in the vicinity of the given initial value, we use a root-seeking algorithm to obtain the final oscillation period satisfying the boundary conditions. The final results are shown in Table 2, in which the observed periods P_obs, the analytical solutions P_anl, the numerical solutions with the semi-circular loop geometry P_sc, and the numerical solutions with the real loop geometry P_real are compared. The deviation from P_obs is provided in parentheses following each calculated period. The accuracy of these three calculated periods increases progressively: P_real is the closest to P_obs, with an average deviation of 10.6%; the deviation of P_sc, 18.3%, is slightly larger; and the deviation of P_anl is the largest at 39.3%. This indicates that the eigenvalues of the governing equation are sensitive to the coronal loop geometry.
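A minimal sketch of the shooting procedure follows, in Python (the authors used Wolfram Mathematica). Since the exact non-dimensionalised governing equation (Eq. (25)) is not reproduced here, the sketch assumes the generic form y'' + (2π/τ)² ρ(x)/b(x)² y = 0 on x ∈ [0, 1] with y(0) = y(1) = 0, an assumed exponential stratification over the semi-circular profile, and a uniform field; every parameter value is illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

H_over_L = 0.5                                 # assumed dimensionless scale height
h = lambda x: np.sin(np.pi * x) / np.pi        # semi-circular loop profile
rho = lambda x: np.exp(-h(x) / H_over_L)       # assumed exponential stratification
b = lambda x: 1.0                              # uniform magnetic field

def shoot(tau):
    """Integrate from the left footpoint; a zero of y(1) marks an eigen-period."""
    k2 = (2.0 * np.pi / tau) ** 2
    rhs = lambda x, y: [y[1], -k2 * rho(x) / b(x) ** 2 * y[0]]
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 1.0], rtol=1e-9, atol=1e-12)
    return sol.y[0, -1]

# Scan from long to short periods so the first sign change is the fundamental
# tone; this stands in for the interactive initial-guess step described above.
taus = np.linspace(5.0, 0.3, 300)
vals = [shoot(t) for t in taus]
for t1, t2, v1, v2 in zip(taus, taus[1:], vals, vals[1:]):
    if v1 * v2 < 0:
        print("fundamental period (dimensionless):", brentq(shoot, t2, t1))
        break
```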
Discussion and conclusions
In this paper, we analyse three randomly selected coronal loop oscillation events, in which the oscillation periods of the coronal loops are fitted. In all three events, only the fundamental tone is detected, and there is no obvious higher overtone component. We estimate the density distribution of each coronal loop using DEM diagnostics, and then use an exponential decay model to fit the density scale height. Next, we use the potential field model to extrapolate the magnetic field distribution of the coronal loop, and thereby reconstruct the 3D structure of the loops. This analysis led us to three important results, as follows.

1. Combining the information available on the density and oscillations, we estimate an average magnetic field strength of B_kink = 3.9 ± 0.4, 24.9 ± 0.8, and 14.4 ± 0.5 G for the three events considered (see the sketch after this list). These values are consistent with the results derived by applying the magnetic field extrapolation, B = 4.3 ± 0.1, 22.9 ± 0.1, and 16.0 ± 0.1 G, respectively.

2. We used a string model to derive the approximated governing equation of the coronal loop and found an analytic solution (Eq. (24)) under the assumption that the loop has a linear loop geometry, an exponentially stratified density, and a uniform magnetic field. This solution requires a correction factor πH/(µ_{n_s}^(0) h_a) to Eq. (5) when the influence of the density variation is taken into account. It shows that a loop with a higher density scale height H has a longer period, as expected.

3. We used both analytical and numerical methods to compute the periods with the information on density, magnetic field, and different loop geometries. The periods calculated with the extrapolated loop geometries are closest to the observed ones, better than the periods calculated with the loop geometry taken as a semi-circle or a linear shape.

There are several uncertainties in our calculations, and some improvements can be made in future work; these are discussed below from the aspects of oscillation analysis, DEM diagnostics, assumptions in the calculations, and magnetic field extrapolation.
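As a concrete illustration of point 1 above, the standard kink-mode relation B_kink = c_k √(µ₀(ρ_i + ρ_e)/2) with c_k = 2L/P can be evaluated directly. The sketch below assumes a fully ionized hydrogen plasma; the period value is a hypothetical placeholder, while L, n_i, and ε are the Loop #1 values quoted in the discussion.

```python
import math

mu0 = 4e-7 * math.pi          # vacuum permeability, SI
m_p = 1.6726e-27              # proton mass, kg

def b_kink(L_m, P_s, n_i_cm3, eps=0.1):
    rho_i = n_i_cm3 * 1e6 * m_p           # number density -> mass density (H plasma)
    c_k = 2.0 * L_m / P_s                 # kink speed from the fundamental mode
    # B = c_k * sqrt(mu0 * (rho_i + rho_e) / 2) with rho_e = eps * rho_i
    return c_k * math.sqrt(mu0 * rho_i * (1.0 + eps) / 2.0) * 1e4  # Tesla -> Gauss

# Loop #1: L and n_i from the text; P_s = 360 s is a hypothetical placeholder.
print(b_kink(L_m=96.1e6, P_s=360.0, n_i_cm3=4.3e8, eps=0.1))  # ~3.8 G
```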
In our oscillation analysis, we sampled the data within a certain time window and the non-linear fitting model is incomplete, which introduces some errors into the derived oscillation parameters. In fact, we also tried to measure the oscillation period using spectral analysis methods such as the discrete Fourier transform and the wavelet transform. However, because the data are sampled in a finite time window, the frequency resolution of the former method is limited, which means that the period cannot be obtained accurately; the wavelet transform, on the other hand, depends on the choice of a suitable wavelet function. While spectral analysis is more effective for confirming the existence of higher-order overtones, a fitting method is more precise for acquiring the period of the fundamental tone.
The density and temperature distributions are diagnosed with the DEM analysis in Sect. 2.2, where the density distributions are obtained with the SSW routine aia_loop_autodem.pro.
To obtain the density of the entire loop, we sampled the data manually. As a DEM analysis needs to satisfy the optically thin assumption and relies on integration along the LOS, the final temperature diagnostics carry a large error, which propagates into the density measurement. In addition, if the starting and ending points of our sampled data are not consistent with the actual footpoints of the coronal loop, the density distribution will affect the fitting results for the density scale height H and the footpoint density n_f.
It is worth noting that the coronal loop structure obtained in our reconstruction has an inclination and the loop geometry deviates from a semi-circle, both of which were taken into account in our calculations. We selected three points on a coronal loop to determine the loop plane, and then eliminated the influence of the inclination by using a rotation matrix to rotate the plane to the vertical direction. Next, the profile of the coronal loop was represented by an interpolation function of the height distribution along the loop. Finally, the two complicating factors, the inclination and the loop geometry, were incorporated into the governing equations. The results show that the coronal loop geometry has a significant influence on the periods (Table 2): loops with different paths but the same magnetic field and density distributions would have markedly different oscillation periods.
In our measurement of Loop #1 (shown in Fig. 3), the magnetic field B_kink derived using the solar magnetoseismological method is 3.9 ± 0.4 G, while Aschwanden & Schrijver (2011) obtained B_kink = 4.0 ± 0.7 G. We adopted ε = 0.1, which is close to the ε = 0.08 ± 0.01 used in Aschwanden & Schrijver (2011). However, our derived plasma density n_i = 4.3 × 10^8 cm^-3 is larger than the n_i = (1.9 ± 0.3) × 10^8 cm^-3 obtained in Aschwanden & Schrijver (2011), and the loop length L = 96.1 ± 10.98 Mm in our measurement is much smaller than the L_osc = 143 ± 20 Mm used in Aschwanden & Schrijver (2011). We integrated the length of the selected field lines to obtain the coronal loop length, whereas Aschwanden & Schrijver (2011) adopted a trigonometric method. Our loop length is sensitive to the accuracy of the magnetic model, and there is some considerable error, as indicated by the mismatch between the simulated magnetic field and the observed coronal loops (see Fig. 3a). The average magnetic field strength B = 4.3 ± 0.1 G obtained via Eq. (4) is much lower than the B = 11 G derived by Aschwanden & Schrijver (2011). Here, we used a magnetic field extrapolation based on the potential field model, which is more accurate on small scales than the PFSS model applied in Aschwanden & Schrijver (2011), because the PFSS method makes use of the synoptic map of the SDO/HMI magnetogram, which is constructed from observations over a whole solar rotation; as a result, the magnetic field obtained by the PFSS method is less accurate than an extrapolation using the real-time magnetogram. However, our magnetic field extrapolation in Cartesian coordinates adopts the linear approximation of a plane tangent to the solar surface at the image centre (Gary & Hagyard 1990), which would cause deviations near the solar limb or for a relatively large field of view. This error can be eliminated with extrapolations in spherical coordinates (Gilchrist & Wheatland 2014; Guo et al. 2016a,b).
In conclusion, in the three chosen coronal loop oscillation events, we measured the density distribution with a DEM analysis and obtained the distribution of magnetic field strength, as well as the information on loop geometry, with magnetic field extrapolations. We then used the physical and geometrical parameters to compute the oscillation periods, which deviate from the observed values by only 10.6% on average. That is to say, the period derived by comprehensively considering the realistic density, magnetic field, and loop geometry coincides with the observed period. In addition, our multi-tool study shows that the loop geometry significantly affects the oscillation properties of coronal loops, which indicates that the period is sensitive not only to the density and magnetic field but also to the loop geometry.
"Physics",
"Environmental Science"
] |
Comparative estrogenic activity of wine extracts and organochlorine pesticide residues in food.
The human diet contains industrial-derived, endocrine-active chemicals and higher levels of naturally occurring compounds that modulate multiple endocrine pathways. Hazard and risk assessment of these mixtures is complicated by non-additive interactions between different endocrine-mediated responses. This study focused on estrogenic chemicals in the diet and compared the relative potencies or estrogen equivalents (EQs) of the daily consumption of xenoestrogenic organochlorine pesticides in food (2.44 micrograms/day) with the EQs in a single 200-ml glass of red cabernet wine. The reconstituted organochlorine mixture contained 1,1,1-trichloro-2-(p-chlorophenyl)-2-(o-chlorophenyl)ethane, 1,1,1-trichloro-2,2-bis(p-chlorophenyl)ethane, 1,1-dichloro-2,2-bis(p-chlorophenyl)ethylene, endosulfan-1, endosulfan-2, p,p'-methoxychlor, and toxaphene; the relative proportion of each chemical in the mixture resembled the composition reported in a recent U.S. Food and Drug Administration market basket survey. The following battery of in vitro 17 beta-estradiol (E2)-responsive bioassays was utilized in this study: competitive binding to the mouse uterine estrogen receptor (ER); proliferation of T47D human breast cancer cells; luciferase (Luc) induction in human HepG2 cells transiently cotransfected with C3-Luc and the human ER, rat ER-alpha, or rat ER-beta; and induction of chloramphenicol acetyltransferase (CAT) activity in MCF-7 human breast cancer cells transfected with E2-responsive cathepsin D-CAT or creatine kinase B-CAT plasmids. For these seven in vitro assays, the calculated EQs in extracts from 200 ml of red cabernet wine varied from 0.15 to 3.68 micrograms/day. In contrast, EQs for consumption of organochlorine pesticides (2.44 micrograms/day) varied from nondetectable to 1.24 ng/day. Based on the results of the in vitro bioassays, organochlorine pesticides in food contribute minimally to dietary EQ intake.
receptor (1-5). Many of the more important classes of persistent organochlorine (OC) pollutants bind to these three receptor systems (6,7), and it has been hypothesized that some of these compounds may be responsible for reproductive problems in wildlife, decreased male reproductive capacity, and breast cancer in women (1-5). The validity of these hypotheses has been questioned (8-10), and ongoing research will help resolve these complex issues.
Environmental estrogens or xenoestrogens have been a major focal point of concern because in utero exposure to estrogenic compounds such as the drug diethylstilbestrol can adversely affect both male and female offspring; moreover, lifetime estrogen exposure is a known risk factor for breast cancer in women (11,12). The human diet contains a highly complex mixture of different endocrine-active chemicals including estrogenic flavonoids, lignans, sterols, and fungal metabolites in vegetables, fruits, nuts, and grain-derived products (13-15). Levels of xenoestrogens in the diet have not been fully described; however, at least seven OC contaminants have been identified in the U.S. Food and Drug Administration (U.S. FDA) market basket survey (16), and these include 1,1-dichloro-2,2-bis(p-chlorophenyl)ethylene (p,p'-DDE), 1,1,1-trichloro-2-(p-chlorophenyl)-2-(o-chlorophenyl)ethane (o,p'-DDT), p,p'-methoxychlor, endosulfan-1, endosulfan-2, toxaphene, and 1,1,1-trichloro-2,2-bis(p-chlorophenyl)ethane (p,p'-DDT). The daily intake of this pesticide mixture is approximately 2.44 µg/day. The estrogenic activity of these compounds has been confirmed in some assays (6,7); however, the effects of reconstituted mixtures of OC pesticides (OC mix) have not been investigated. This study compared the in vitro estrogenic activity of naturally occurring estrogens in two wine extracts with that of a reconstituted mixture of OC pesticides in food using several estrogen-responsive bioassays. A limitation of the reconstituted mixture of pesticides is that it contains only those compounds previously identified as estrogens; it is possible that other contaminants may also exhibit estrogenic activity. The mixtures exhibited a range of estrogenic potencies in these in vitro bioassays, and the estrogen equivalents in one 200-ml glass of red wine were significantly higher than those observed for the estimated daily intake of the OC mix.
The wine and whiskey extracts were prepared using the following extraction methods. The alcoholic beverage (200 ml) was evaporated to dryness in vacuo at < 60°C; the residue was then resuspended in 200 ml methanol and stirred vigorously for 4 to 6 hr at 20°C. The resulting mixture was filtered to remove solid debris and the methanol extract was evaporated to dryness. The final extraction utilized ethanol:chloroform (15:85, 100 ml), and the mixture was vigorously stirred for 12 to 18 hr. The ethanol-chloroform extract was filtered, evaporated to dryness, and redissolved in 2 ml water:ethanol (85:15 by volume) buffered with 0.05 M sodium bicarbonate. These extracts were also diluted in the same aqueous ethanol buffer and used in the bioassays.
Bioassays for Estrogenic Activity
The bioassays utilized in this study have previously been reported (17-23) and include ER binding using B6C3F1 mouse uterine cytosol; MCF-7 and T47D cell proliferation (for 14 days); induction of chloramphenicol acetyltransferase (CAT) activity in MCF-7 cells transiently transfected with the plasmids cathepsin D (pCATH)-CAT and pCKB-CAT; and induction of luciferase (Luc) activity in HepG2 cells transiently cotransfected with complement C3-Luc and an ER expression plasmid. The human CATH construct contains a promoter insert (-365 to -10) (24) ligated into a pBL/TATA/CAT plasmid derived from pBL/CAT2; the promoter region was derived from a construct originally provided by A. Hasilik (University of Muenster, Muenster, Germany). The creatine kinase B (CKB)-CAT construct contains a 2.9-kb region from the rat CKB gene promoter and was provided by P. Benfield (Dupont Corp., Wilmington, Delaware) (25). Rat ER-alpha and ER-beta expression plasmids were obtained from R. Day (University of Virginia, Charlottesville, Virginia) and J.-A. Gustafsson (Karolinska Institute, Huddinge, Sweden), respectively; D. McDonnell (Duke University, Durham, North Carolina) provided the human ER-alpha (hER) expression plasmid.
Results
The results illustrated in Figure 1 show that unlabeled E2 and the chablis and cabernet wine extracts competitively displaced [3H]E2 from the mouse uterine ER, whereas the whiskey extract and the highest concentration of the OC mix did not significantly displace the radiolabeled hormone. The whiskey extract was inactive in all of the assay systems, and additional results for this extract are not presented in this study. The effects of the wine extracts and the OC mix on the proliferation of ER-positive T47D breast cancer cells were determined using assay procedures previously described (23). The results in Figure 2A show that the highest concentration of the red cabernet wine extract induced a near-maximal cell proliferation response (as observed for 1 nM E2), whereas lower responses were observed for the chablis extract and the OC mix (Figure 2B). Similar results were obtained in MCF-7 cells. In T47D cells cotreated with 1 nM E2 and different concentrations of the OC mix, an antiestrogenic response was observed (Figure 2B). Similar results were obtained with the chablis wine, whereas the more highly estrogenic cabernet wine extract was not antiestrogenic in this assay (data not shown).
The results summarized in Figure 3 also show that the red cabernet extract was significantly more estrogenic than the chablis wine extract or the OC mix in MCF-7 cells transiently transfected with pCKB-CAT or pCATH-CAT. These results complement the effects of these mixtures on the proliferation of T47D (Figure 2) and MCF-7 cells (data not shown).
The results in Figure 4 summarize the effects of the individual OC pesticides and the OC mix on the induction of Luc activity in HepG2 cells transiently cotransfected with hER (human), ER-alpha (rat), or ER-beta (rat) expression plasmids. All of the OC pesticides and the OC mix were active in the HepG2 cell assays, and there were only minor differences in the activities of individual pesticides with the different ER expression plasmids. The HepG2 assay system was more sensitive to the estrogenic activity of the OC pesticides and the reconstituted mixture than the other in vitro assays used in this study (Figures 1-3) or the yeast-based assay system (18). The estrogen equivalents (EQs) for the highest concentrations of the OC mix and the red cabernet wine extract could be estimated by comparing their estrogenic activity to that observed for E2 alone. The EQ values derived from all assays for extracts of red wine (200 ml) varied from 0.15 to 3.68 µg/day, whereas values for the OC mix were < 1.24 ng/day (Table 1).
Discussion
The human diet contains relatively high levels of estrogenic compounds, particularly the bioflavonoids, which are ubiquitous in fruits, nuts, vegetables, and grain products. Kuhnau (13) estimated that the average daily intake of flavonoids is approximately 1 g per day; however, only a fraction of this total would constitute estrogenic compounds. Consumption of natural estrogenic compounds is high in Far Eastern countries in which soy-based products are an important part of the diet. Setchell and co-workers (26) recently reported that 4-month-old infants on soy-based formula consume over 40 mg of total soy-based isoflavones per day. Moreover, plasma levels of estrogenic isoflavones in adults and infants who consume soy foods and soy infant formula can be as high as 10^5 to 10^6 pg/ml.

Figure 3. Induction of chloramphenicol acetyltransferase activity in MCF-7 cells after treatment with 17β-estradiol, the organochlorine pesticide mixture, and wine extracts. DMSO, dimethylsulfoxide. MCF-7 cells were transiently transfected with 4 or 5 µg hER and 10 µg CKB-CAT or 5 µg CATH-CAT constructs and treated with the various mixtures; CAT activity was determined as described (22). The designations 10/1 and 100/1 are concentration factors for the reconstituted wine extracts and represent the equivalents of 1 and 10 ml of wine added to each plate, which contains 10 ml of media. The highest concentration of the OC mix was 12.2 µg/ml. Results are expressed as means ± SE for three separate experiments. The red wine extracts significantly induced CAT activity in MCF-7 cells transiently transfected with (A) CATH-CAT or (B) CKB-CAT constructs, whereas the white wine extracts exhibited lower activity and the OC mix was inactive.
Gavaler and co-workers (27-33) have previously investigated the estrogenic activity of bourbon and bourbon extracts in both in vivo and in vitro models. Bourbon extracts contain estrogenic flavonoids and sterols that bind to the ER, and after administration to ovariectomized rats there was an increase in uterine wet weight and decreased plasma luteinizing hormone (LH) levels; these data clearly demonstrate an in vivo estrogenic response. The estrogenic activity of bourbon extracts was also confirmed in clinical studies in which bourbon extracts (equivalent to greater than three drinks/day) were administered to four postmenopausal women for 28 days. LH and follicle-stimulating hormone levels decreased, and prolactin, high-density lipoprotein cholesterol, and steroid hormone-binding globulin levels increased during the treatment but returned to background levels after 5 weeks (1 week postexposure). It was also reported that white chablis and red cabernet wines competitively bound to the ER, and similar results were obtained in the present study (Figure 1). In contrast, the OC mix did not competitively bind to the mouse ER, and therefore the comparative estrogenic potencies of the wine extracts and the OC mix were investigated in multiple assays.
Results of this study demonstrate that the red cabernet wine extract was active in all bioassays, whereas, with the exception of the ER-binding assay, the white wine extracts exhibited lower estrogenic activity than the red cabernet, and the whiskey extracts were inactive. The OC mix exhibited estrogenic activity in the HepG2 cell assay but was inactive or only minimally active in the cell proliferation, ER-binding, and transient transfection assays in MCF-7 cells. The E2 equivalents could be calculated for the highest concentrations of the red cabernet wine extract and the OC mix in each assay by estimating the concentration of E2 required to induce the same response (assuming a linear E2 dose-response curve). This approach was utilized to compare EQs for the wine extracts and the OC mix in seven E2-responsive assays and to calculate daily EQs for the OC mix (2.44 µg) and a glass of red wine (200 ml). The results obtained for ER binding, T47D cell proliferation, and induction of CAT activity in MCF-7 cells transiently transfected with CKB-CAT or CATH-CAT constructs indicate that the EQs associated with a 200-ml glass of wine varied from 0.15 to 0.6 µg/day, whereas the OC mix was inactive or gave minimal EQ values in these assays (Table 1).

Figure 4. Induction of Luc activity in HepG2 cells cotransfected with C3-Luc and (A) hER (human), (B) ER-alpha (rat), or (C) ER-beta (rat) expression plasmids and then treated with different concentrations of the OC pesticides, the OC mix, and the wine extracts, as previously described by Ramamoorthy et al. (22). All the compounds and mixtures exhibited estrogenic activity in HepG2 cells transiently transfected with hER, rat ER-alpha, or ER-beta expression plasmids.

These results demonstrate the differential sensitivity of diverse assay systems for determining EQs; however, the overall results suggest that a single glass of red wine contains significantly higher in vitro EQs than the daily intake (2.44 µg) of OC pesticides in food. This type of approach, coupled with more extensive in vitro and in vivo studies that take into account differences in absorption, metabolism, and distribution, may be useful for the hazard and risk assessment of natural and xenoestrogen mixtures as well as other classes of endocrine-active compounds.
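The EQ calculation described above (mapping an extract's response onto an assumed linear E2 dose-response curve) can be expressed in a few lines. This is a minimal sketch; the function name and all numbers below are made-up illustrations, not the study's data.

```python
def estrogen_equivalents(extract_response, e2_dose_response):
    """e2_dose_response: list of (E2 concentration, response) pairs, assumed linear."""
    (c1, r1), (c2, r2) = e2_dose_response[0], e2_dose_response[-1]
    slope = (r2 - r1) / (c2 - c1)
    # E2 concentration producing the same response as the extract
    return c1 + (extract_response - r1) / slope

# Hypothetical example: a wine extract inducing 60% of maximal CAT activity.
e2_curve = [(0.0, 5.0), (1.0, 95.0)]        # (nM E2, % response), illustrative
print(estrogen_equivalents(60.0, e2_curve))  # ~0.61 nM E2 equivalents
```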
"Biology",
"Chemistry"
] |
Hopping from Chebyshev polynomials to permutation statistics
We prove various formulas which express exponential generating functions counting permutations by the peak number, valley number, double ascent number, and double descent number statistics in terms of the exponential generating function for Chebyshev polynomials, as well as cyclic analogues of these formulas for derangements. We give several applications of these results, including formulas for the $(-1)$-evaluation of some of these distributions. Our proofs are combinatorial and involve the use of monomino-domino tilings, the modified Foata-Strehl action (a.k.a. valley-hopping), and a cyclic analogue of this action due to Sun and Wang.
Introduction
Let $\pi = \pi_1\pi_2\cdots\pi_n$ be a permutation (written in one-line notation) in $S_n$, the set of permutations of $[n] = \{1, 2, \ldots, n\}$. We say that $\pi_i$ (where $i \in [n-1]$) is a descent if $\pi_i > \pi_{i+1}$, and that $\pi_i$ (where $2 \le i \le n-1$) is a peak of $\pi$ if $\pi_{i-1} < \pi_i > \pi_{i+1}$. Define $\mathrm{des}(\pi)$ to be the number of descents of $\pi$ and $\mathrm{pk}(\pi)$ to be the number of peaks of $\pi$. The descent number des and peak number pk are classical permutation statistics whose study dates back to MacMahon [8] and to David and Barton [5], respectively. The $n$th Eulerian polynomial $A_n(t) := \sum_{\pi \in S_n} t^{\mathrm{des}(\pi)}$ (we note that many works instead define the $n$th Eulerian polynomial to be $\sum_{\pi \in S_n} t^{\mathrm{des}(\pi)+1}$)
The nth Eulerian polynomial 1 A n (t) := π∈Sn t des(π) *<EMAIL_ADDRESS>†<EMAIL_ADDRESS>2010 Mathematics Subject Classification. Primary 05A15; Secondary 05A05, 33C45. 1 We note that many works instead define the nth Eulerian polynomial to be π∈Sn t des(π)+1 . encodes the distribution of the descent number des over S n , and the nth peak polynomial P pk n (t) := π∈Sn t pk(π) is the analogous polynomial for the peak number pk. It is well-known [6,Théorème 5.6] that the (−1)-evaluation of the Eulerian distribution is given by the formula where E n is the nth Euler number defined by ∞ n=0 E n x n n! = sec(x) + tan(x).
(The Euler numbers $E_n$ for odd $n$ are called tangent numbers, and those for even $n$ are called secant numbers.) No combinatorial formula for $P^{\mathrm{pk}}_n(-1)$ is known, although this sequence does appear on the OEIS [13, A006673]. The first several terms of this sequence are given in the following table:

n            : 1  2  3   4    5     6    7     8      9       10
P^pk_n(-1)   : 1  2  2  -8  -56  -112  848  9088  25216  -310528

The exponential generating functions for $A_n(t)$ and $P^{\mathrm{pk}}_n(t)$ have the following well-known expressions:
$$A(t; x) := \sum_{n=1}^{\infty} A_n(t) \frac{x^n}{n!} = \frac{e^{(1-t)x} - 1}{1 - te^{(1-t)x}}; \qquad P^{\mathrm{pk}}(t; x) := \sum_{n=1}^{\infty} P^{\mathrm{pk}}_n(t) \frac{x^n}{n!} = \frac{e^{2\sqrt{1-t}\,x} - 1}{(1+\sqrt{1-t}) - (1-\sqrt{1-t})\,e^{2\sqrt{1-t}\,x}}.$$
The work in this paper was originally inspired by the curious observation that $A(-1; x)$ and $P^{\mathrm{pk}}(-1; x)$ can be expressed as the logarithmic derivative of the exponential generating function of some non-negative integer sequence. For the Eulerian polynomials, this sequence $\{f_n\}_{n \ge 0}$ is simply $f_n := (n+1) \bmod 2$, i.e., the sequence $1, 0, 1, 0, \ldots$, whose exponential generating function is given by $F(x) := 1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \cdots = \cosh(x)$.
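The table above is small enough to verify by brute force. A minimal sketch, assuming nothing beyond the definition of pk given earlier (interior peaks $\pi_{i-1} < \pi_i > \pi_{i+1}$ for $2 \le i \le n-1$):

```python
from itertools import permutations

def pk(p):
    """Number of interior peaks of the permutation p (a tuple of 1..n)."""
    return sum(1 for i in range(1, len(p) - 1) if p[i - 1] < p[i] > p[i + 1])

for n in range(1, 9):
    print(n, sum((-1) ** pk(p) for p in permutations(range(1, n + 1))))
# Expected output matches the table: 1, 2, 2, -8, -56, -112, 848, 9088
```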
For the peak polynomials, this sequence is the sequence of Pell numbers, which has been widely studied in combinatorics and number theory. The Pell numbers $\{g_n\}_{n \ge 0}$ are defined by the recursive formula $g_n := 2g_{n-1} + g_{n-2}$ for $n \ge 2$ with initial values $g_0 = 1$ and $g_1 = 0$.
The first several terms of this sequence are below: $1, 0, 1, 2, 5, 12, 29, 70, 169, 408, \ldots$ Note that the indexing here is slightly different from the usual indexing of the Pell numbers (see OEIS [13, A000129]). The exponential generating function of $\{g_n\}_{n \ge 0}$ is given by
$$G(x) := \sum_{n=0}^{\infty} g_n \frac{x^n}{n!} = e^x\Big(\cosh(\sqrt{2}\,x) - \tfrac{1}{\sqrt{2}}\sinh(\sqrt{2}\,x)\Big).$$

Theorem 1. The exponential generating functions for the Eulerian and peak polynomials evaluated at $t = -1$ can be expressed as the logarithmic derivative of $F(x)$ and $G(x)$, respectively. That is:
(a) $A(-1; x) = F'(x)/F(x)$;
(b) $P^{\mathrm{pk}}(-1; x) = G'(x)/G(x)$.

While Theorem 1 can be proven directly by algebraically manipulating the generating function formulas for $A(t; x)$, $P^{\mathrm{pk}}(t; x)$, $F(x)$, and $G(x)$, one of our goals in this paper is to present a combinatorially-flavored proof. In Section 2, we define several other relevant permutation statistics and introduce a key ingredient of our proof: the modified Foata-Strehl group action (a.k.a. valley-hopping). In Section 3, we define a variant of the Chebyshev polynomials of the second kind which specializes to both the numbers $f_n$ and the Pell numbers $g_n$. Like the ordinary Chebyshev polynomials of the second kind, our modified Chebyshev polynomials have as a combinatorial model monomino-domino tilings of a rectangle, but with slightly different weights. We present a formula (Theorem 3) involving these modified Chebyshev polynomials for the joint distribution of two statistics: the peak number, and the total number of double ascents and double descents. We give a combinatorial proof of Theorem 3 which involves tilings and valley-hopping, and a special case of this result implies Theorem 1 (b). We transform Theorem 3 into similar results for other permutation statistics, which we then use to prove Theorem 1 (a) and to prove that the $(-1)$-evaluation of the double descent distribution over $S_{2n+1}$ yields the tangent numbers $E_{2n+1}$.
In Section 4, we turn our attention to counting derangements by cyclic analogues of the permutation statistics studied in Sections 2-3. Using a variant of valley-hopping due to Sun and Wang [16] for derangements, we prove a cyclic analogue of Theorem 3 and use it to derive formulas relating the exponential generating functions counting derangements by cyclic statistics to the exponential generating function for our modified Chebyshev polynomials. We use this to prove a result similar to Theorem 1 for the excedance and cyclic peak distributions over derangements, and to prove that the $(-1)$-evaluation of the cyclic double descent distribution over derangements yields the secant numbers $E_{2n}$.
For a list of statistics $\mathrm{st}_1, \mathrm{st}_2, \ldots, \mathrm{st}_m$ and corresponding variables $t_1, t_2, \ldots, t_m$, we define the polynomials
$$P^{(\mathrm{st}_1,\ldots,\mathrm{st}_m)}_n(t_1,\ldots,t_m) := \sum_{\pi \in S_n} t_1^{\mathrm{st}_1(\pi)} \cdots t_m^{\mathrm{st}_m(\pi)},$$
and we let
$$P^{(\mathrm{st}_1,\ldots,\mathrm{st}_m)}(t_1,\ldots,t_m; x) := \sum_{n=1}^{\infty} P^{(\mathrm{st}_1,\ldots,\mathrm{st}_m)}_n(t_1,\ldots,t_m) \frac{x^n}{n!}$$
be their exponential generating function. (In the case where we have a single statistic st, we write these simply as $P^{\mathrm{st}}_n(t)$ and $P^{\mathrm{st}}(t; x)$.) For example, we have
$$P^{(\mathrm{pk},\mathrm{dbl})}_n(s,t) := \sum_{\pi \in S_n} s^{\mathrm{pk}(\pi)} t^{\mathrm{dbl}(\pi)} \quad \text{and} \quad P^{(\mathrm{pk},\mathrm{dbl})}(s,t; x) = \sum_{n=1}^{\infty} P^{(\mathrm{pk},\mathrm{dbl})}_n(s,t) \frac{x^n}{n!},$$
where $\mathrm{dbl}(\pi) := \mathrm{dasc}(\pi) + \mathrm{ddes}(\pi)$ is the total number of double ascents and double descents of $\pi$; we will consider these on the way to proving Theorem 1. Our proof will make use of a bijection based on a group action on $S_n$ induced by involutions which toggle between double ascents and double descents; we will spend the remainder of this section defining this action and the associated bijection. (We note that many works on permutation enumeration do not use our conventions, and simply restrict the possible positions of valleys, double ascents, and double descents to the interval from 2 to $n-1$. What we call "valleys" are sometimes called "left-right valleys" or "exterior valleys", what we call "double ascents" are sometimes called "right double ascents", and what we call "double descents" are sometimes called "left double descents". See, e.g., [18].)

For $\pi \in S_n$, fix $k \in [n]$. We may write $\pi = w_1 w_2 k w_4 w_5$ where $w_2$ is the maximal consecutive subword immediately to the left of $k$ whose letters are all smaller than $k$, and $w_4$ is the maximal consecutive subword immediately to the right of $k$ whose letters are all smaller than $k$. For example, if $\pi = 467125839$ and $k = 5$, then $\pi$ is the concatenation of $w_1 = 467$, $w_2 = 12$, $k = 5$, the empty word $w_4$, and $w_5 = 839$. Define $\varphi_k : S_n \to S_n$ by
$$\varphi_k(\pi) = \begin{cases} w_1 w_4 k w_2 w_5, & \text{if } k \text{ is a double ascent or double descent of } \pi, \\ \pi, & \text{if } k \text{ is a peak or valley of } \pi. \end{cases}$$
Define ϕ k : S n → S n by ϕ k (π) = w 1 w 4 kw 2 w 5 , if k is a double ascent or double descent of π, π, if k is a peak or valley of π. 3 We note that many works on permutation enumeration do not use these conventions, and simply restrict the possible positions of valleys, double ascents, and double descents to the interval from 2 to n − 1. What we call "valleys" are sometimes called "left-right valleys" or "exterior valleys", what we call "double ascents" are sometimes called "right double ascents", and what we call "double descents" are sometimes called "left double descents". (See, e.g., [18].) 4 In the case where we have a single statistic st, we write these simply as P st n (t) and P st (t; x). Equivalently, ϕ k (π) = w 1 w 4 kw 2 w 5 if exactly one of w 2 and w 4 is nonempty, and ϕ k (π) = π otherwise. For any subset S ⊆ [n], we define ϕ S : S n → S n by ϕ S = k∈S ϕ k . It is easy to see that ϕ S is an involution, and that for all S, T ⊆ [n], the involutions ϕ S and ϕ T commute with each other. Hence the involutions {ϕ S } S⊆[n] define a Z n 2 -action on S n which is often called the modified Foata-Strehl action or valley-hopping. This action is based on a classical group action of Foata and Strehl [7], was introduced by Shapiro, Woan, and Getu [10], and was later rediscovered by Brändén [2].
Let $\widetilde{S}_n$ denote the set of permutations of $[n]$ with no double ascents and where each double descent is assigned one of two colors: red or blue. Then valley-hopping induces a map $\Phi$ from $\widetilde{S}_n$ to $S_n$ defined in the following way. Given a permutation $\pi$ in $\widetilde{S}_n$, let $R(\pi)$ be the set of red double descents in $\pi$ and let $\hat{\pi}$ be the corresponding permutation of $\pi$ in $S_n$, that is, the permutation obtained by forgetting the colors on the double descents. Then let $\Phi(\pi) = \varphi_{R(\pi)}(\hat{\pi})$. For example, if $\pi = 726539841$, then $\Phi(\pi) = 265379418$. (See Figure 1.)

Lemma 2. The map $\Phi : \widetilde{S}_n \to S_n$ is a $(\mathrm{pk}, \mathrm{dbl})$-preserving bijection.
Proof. The inverse $\Phi^{-1}$ of the map $\Phi$ can be described in the following way. Let $\mathrm{Dasc}(\pi)$ be the set of double ascents of $\pi$ and $\mathrm{Ddes}(\pi)$ the set of double descents of $\pi$. If $S \subseteq \mathrm{Ddes}(\pi)$ and if $\pi$ has no double ascents, then let $\pi_S$ be the permutation in $\widetilde{S}_n$ obtained by coloring the double descents in $S$ blue and all other double descents red. Given a permutation $\pi$ in $S_n$, let $\Phi^{-1}(\pi) = (\varphi_{\mathrm{Dasc}(\pi)}(\pi))_{\mathrm{Ddes}(\pi)}$. Then $\Phi$ is a bijection between $\widetilde{S}_n$ and $S_n$. The claim that $\Phi$ preserves the pk and dbl statistics follows from the easy fact that valley-hopping preserves these statistics as well.
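The hop $\varphi_k$ is simple to implement directly from its definition. A minimal sketch (the helper name phi_k is ours), reproducing the worked example $\pi = 467125839$, $k = 5$ from above:

```python
def phi_k(perm, k):
    """One valley-hop: swap the runs w2, w4 of letters smaller than k around k."""
    p = list(perm)
    j = p.index(k)
    i = j
    while i > 0 and p[i - 1] < k:      # grow w2 leftwards
        i -= 1
    m = j + 1
    while m < len(p) and p[m] < k:     # grow w4 rightwards
        m += 1
    w1, w2, w4, w5 = p[:i], p[i:j], p[j + 1:m], p[m:]
    if bool(w2) != bool(w4):           # k is a double ascent or double descent
        return w1 + w4 + [k] + w2 + w5
    return p                           # k is a peak or valley: fixed point of the hop

print(phi_k([4, 6, 7, 1, 2, 5, 8, 3, 9], 5))  # -> 467512839, i.e. 5 hops over "12"
```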
Hopping from Chebyshev polynomials to permutation statistics
The numbers $f_n = (n+1) \bmod 2$ and the Pell numbers $g_n$ are specializations of the sequence of polynomials $\{V_n(s,t)\}_{n \ge 0}$ defined by $V_n(s,t) = 2tV_{n-1}(s,t) - sV_{n-2}(s,t)$ for $n \ge 2$ with initial values $V_0(s,t) = 0$ and $V_1(s,t) = 1$. More precisely, $f_n = V_{n-1}(-1, 0)$ and $g_n = V_{n-1}(-1, 1)$ for all $n \ge 1$. These polynomials are a variant of the Chebyshev polynomials of the second kind, which can be obtained from the $V_n(s,t)$ by substituting $s = 1$.
The ordinary generating function for these modified Chebyshev polynomials $V_n(s,t)$ is given by the formula
$$\sum_{n=0}^{\infty} V_n(s,t)\, z^n = \frac{z}{1 - 2tz + sz^2},$$
so $V_n(s,t)$ counts tilings of a $1 \times (n-1)$ rectangle with two types of monominoes, each weighted $t$, and one type of domino, each weighted $-s$. Also, let
$$V(s,t; x) := \sum_{n=1}^{\infty} V_n(s,t)\, \frac{x^{n+1}}{(n+1)!}$$
be the (shifted) exponential generating function for the polynomials $V_n(s,t)$. It can be shown (using the exponential generating function for the usual Chebyshev polynomials of the second kind) that $V(s,t; x)$ has the closed-form expression
$$V(s,t; x) = \frac{1}{s}\left(e^{tx}\left(\frac{t}{\sqrt{t^2-s}}\sinh(\sqrt{t^2-s}\,x) - \cosh(\sqrt{t^2-s}\,x)\right) + 1\right). \qquad (2)$$
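The recurrence and the two specializations noted earlier are easy to check symbolically. A minimal sketch (the helper name V is ours):

```python
import sympy as sp

s, t = sp.symbols('s t')

def V(n):
    """V_n(s, t) from V_0 = 0, V_1 = 1, V_n = 2t*V_{n-1} - s*V_{n-2}."""
    a, b = sp.Integer(0), sp.Integer(1)
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, sp.expand(2 * t * b - s * a)
    return b

# f_n = V_{n-1}(-1, 0) and g_n = V_{n-1}(-1, 1), printed for n = 1, ..., 8:
print([V(n - 1).subs({s: -1, t: 0}) for n in range(1, 9)])  # 0, 1, 0, 1, ...
print([V(n - 1).subs({s: -1, t: 1}) for n in range(1, 9)])  # 0, 1, 2, 5, 12, 29, 70, 169
```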
A Chebyshev formula for the bidistribution (pk, dbl)
We now present our main theorem from this section.

Theorem 3. We have
$$P^{(\mathrm{pk},\mathrm{dbl})}(s,t; x) = \frac{\partial_x V(s,t; x)}{1 - sV(s,t; x)}.$$
Proof. From the combinatorial interpretation of multiplication of exponential generating functions (see, e.g., [15, Proposition 5.1.3]), it suffices to show that
$$P^{(\mathrm{pk},\mathrm{dbl})}_n(s,t) = \sum_{k \ge 0} s^k \sum_{B} V_{|B_0|}(s,t) \prod_{i=1}^{k} V_{|B_i|-1}(s,t), \qquad (3)$$
where the second sum is over all ordered set partitions $B$ of $[n]$ into blocks $B_0, B_1, \ldots, B_k$. Thus, the right-hand side of Equation (3) counts these set partitions together with:
• a tiling of a $1 \times (|B_0| - 1)$ rectangle with two types of monominoes (colored red and blue), each weighted $t$, and one type of domino, each weighted $-s$;
• for each $1 \le i \le k$, a tiling of a $1 \times (|B_i| - 2)$ rectangle with the same types of shapes and weights as above;
and each block (other than $B_0$) is given an additional weight of $s$. We place an $\infty$ in the first block, write out each block in decreasing order, and separate adjacent blocks with a bar, as in
$$\infty > \pi_1 > \pi_2 > \cdots > \pi_{|B_0|} \mid \pi_{|B_0|+1} > \pi_{|B_0|+2} > \cdots > \pi_{|B_0|+|B_1|} \mid \cdots \mid \pi_{n-|B_k|+1} > \cdots > \pi_n.$$
Here, we consider the tiling on each block as being a tiling on all but the first and last elements of the block. Now we define a sign-reversing involution on these objects in the following way: find the first pair of elements $(\pi_i, \pi_{i+1})$ where there is a domino, or where $\pi_i$ and $\pi_{i+1}$ are in separate blocks and $\pi_i > \pi_{i+1}$. If $(\pi_i, \pi_{i+1})$ is covered by a domino, then we remove the domino and insert a new bar between $\pi_i$ and $\pi_{i+1}$, thus splitting their block into two blocks. If $\pi_i$ and $\pi_{i+1}$ are in separate blocks and $\pi_i > \pi_{i+1}$, then we merge the two blocks and cover $(\pi_i, \pi_{i+1})$ with a domino. (See Figure 2.) This involution swaps a domino (weighted $-s$) with an additional block (weighted $s$), and after cancellation we are left with those objects with no dominoes and such that $\pi_i < \pi_{i+1}$ whenever $\pi_i$ and $\pi_{i+1}$ are in separate blocks.
If we treat any one of these remaining objects $\pi = \pi_1 \pi_2 \cdots \pi_n$ as a permutation, we see that $\pi$ has no double ascents and has each double descent colored either red or blue (depending on the color of the corresponding monomino). Hence, $\pi$ belongs to $\widetilde{S}_n$ and contributes a weight of $s^{\mathrm{pk}(\pi)} t^{\mathrm{dbl}(\pi)}$ to the right-hand side of Equation (3). The result then follows from applying the $(\mathrm{pk}, \mathrm{dbl})$-preserving bijection $\Phi$.
A Chebyshev formula for the quadruple distribution (pk, val, dasc, ddes)

We shall now derive from Theorem 3 an analogous result (Theorem 4) for the joint distribution of the four statistics pk, val, dasc, and ddes.
Theorem 4 and the formula (2) for $V(s,t; x)$ can be used together to derive a closed-form expression for $P^{(\mathrm{pk},\mathrm{val},\mathrm{dasc},\mathrm{ddes})}(s,t,u,v; x)$, which is equivalent to a classical formula of Carlitz and Scoville [3]; see also [14, Exercise 1.61a].
The following corollary states several specializations of Theorem 4.
Further specializing Corollary 5 (a) at $t = -1$ implies Theorem 1 (a). Next we show that the $(-1)$-evaluation of the double descent distribution over $S_{2n+1}$ gives the tangent numbers $E_{2n+1}$.
Theorem 6. For all $n \ge 1$, we have $P^{\mathrm{ddes}}_n(-1) = E_n$ if $n$ is odd, and $P^{\mathrm{ddes}}_n(-1) = 0$ if $n$ is even.
Similar reasoning can be used to prove the formula (1) for Eulerian polynomials evaluated at t = −1.
Counting derangements by cyclic statistics
Recall that a derangement is a permutation with no fixed points, i.e., a permutation $\pi$ for which $\pi_i \ne i$ for all $i$. Let $D_n$ be the set of derangements in $S_n$. Our goal in this section is to provide an analogous treatment of the material from the previous section, but for counting derangements with respect to several "cyclic statistics" that we define shortly.
When writing permutations in cycle notation, we adopt the convention of writing each cycle with its largest letter in the first position, and writing the cycles from left to right in increasing order of their largest letters. (This convention is sometimes called canonical cycle representation.) For example, the permutation $\pi = 649237185$ in one-line notation is written as $\pi = (42)(716)(8)(953)$ in cycle notation.
Every letter of a derangement is either a cyclic peak, cyclic valley, cyclic double ascent, or cyclic double descent. Define $\mathrm{cpk}(\pi)$, $\mathrm{cval}(\pi)$, $\mathrm{cdasc}(\pi)$, and $\mathrm{cddes}(\pi)$ to be the number of cyclic peaks, cyclic valleys, cyclic double ascents, and cyclic double descents of $\pi$, respectively. These "cyclic statistics" were studied earlier by, e.g., Zeng [17], Shin and Zeng [12], and Sun and Wang [16]. These statistics are also closely related to a classical permutation statistic, the excedance number. We say that $i \in [n]$ is an excedance of $\pi$ if $i < \pi_i$, and let $\mathrm{exc}(\pi)$ denote the number of excedances of $\pi$. Then $i$ is an excedance of $\pi$ if and only if $i$ is a cyclic valley or cyclic double ascent of $\pi$, and it is well-known that the excedance number exc and the descent number des are equidistributed over $S_n$.
Define the map $o : S_n \to S_n$, where the input is a permutation in canonical cycle representation and the output is a permutation in one-line notation, by erasing the parentheses. Continuing the example with $\pi = (42)(716)(8)(953)$, we have $o(\pi) = 427168953$. It is easy to see that this map is a bijection; we can recover the cycles of $\pi$ by noting the left-to-right maxima of $o(\pi)$: given a permutation $\sigma = \sigma_1 \sigma_2 \cdots \sigma_n$, we say that $\sigma_i$ is a left-to-right maximum of $\sigma$ if $\sigma_j < \sigma_i$ for all $1 \le j < i$.
Our work in this section will rely on a cyclic variant of valley-hopping introduced in [16]. Define $\theta_k : D_n \to D_n$ by $\theta_k(\pi) = o^{-1}(\varphi_k(o(\pi)))$, where the 0th letter of $o(\pi)$ is treated as 0 rather than $\infty$. Similarly, for a subset $S \subseteq [n]$, define $\theta_S : D_n \to D_n$ by $\theta_S = \prod_{k \in S} \theta_k$. Then the cyclic modified Foata-Strehl action (or cyclic valley-hopping) is the $\mathbb{Z}_2^n$-action defined by the involutions $\theta_S$. It is easy to see that cyclic valley-hopping toggles between cyclic double ascents and cyclic double descents, but does not change cyclic peaks or cyclic valleys.
Let $\widetilde{D}_n$ denote the set of derangements of $[n]$ with no cyclic double ascents and where each cyclic double descent is assigned one of two colors: red or blue. Then cyclic valley-hopping induces a map $\widetilde{\Phi}$ from $\widetilde{D}_n$ to $D_n$, defined in the analogous way as the map $\Phi$ from Section 2 but with $R(\pi)$ being the set of red cyclic double descents. It then follows from the same reasoning as in the proof of Lemma 2 that $\widetilde{\Phi}$ is a $(\mathrm{cpk}, \mathrm{cdbl})$-preserving bijection, where $\mathrm{cdbl}(\pi) := \mathrm{cdasc}(\pi) + \mathrm{cddes}(\pi)$ is the total number of cyclic double ascents and cyclic double descents of $\pi$. (Chow et al. [4] also derived various formulas for counting permutations by cyclic peaks and cyclic valleys, but their definition of these statistics differs from ours in that they do not allow the first or last letter of a cycle to be a cyclic peak or cyclic valley.) As before, for a list of statistics $\mathrm{st}_1, \ldots, \mathrm{st}_m$ and variables $t_1, \ldots, t_m$, we define
$$D^{(\mathrm{st}_1,\ldots,\mathrm{st}_m)}_n(t_1,\ldots,t_m) := \sum_{\pi \in D_n} t_1^{\mathrm{st}_1(\pi)} \cdots t_m^{\mathrm{st}_m(\pi)},$$
and we let
$$D^{(\mathrm{st}_1,\ldots,\mathrm{st}_m)}(t_1,\ldots,t_m; x) := \sum_{n \ge 0} D^{(\mathrm{st}_1,\ldots,\mathrm{st}_m)}_n(t_1,\ldots,t_m) \frac{x^n}{n!}$$
be their exponential generating function (with the convention that the $n = 0$ term is 1). These encode the distributions of permutation statistics over derangements. We now present a cyclic analogue of Theorem 3 for derangements.
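The cyclic statistics just defined can be tabulated by brute force, which is a useful sanity check on the formulas in this section. A minimal sketch (helper names are ours) that writes each derangement in canonical cycle form, counts cyclic double descents, and evaluates the cddes distribution at $-1$ — anticipating the secant-number result proved at the end of this section:

```python
from itertools import permutations

def cycles(perm):
    """Cycles of perm (1-indexed tuple), each rotated max-first, sorted by max."""
    n, seen, out = len(perm), set(), []
    for i in range(1, n + 1):
        if i not in seen:
            c, j = [], i
            while j not in seen:
                seen.add(j)
                c.append(j)
                j = perm[j - 1]
            m = c.index(max(c))
            out.append(c[m:] + c[:m])
    return sorted(out, key=max)

def cddes(perm):
    """Letters x whose cyclic predecessor and successor satisfy prev > x > next."""
    count = 0
    for c in cycles(perm):
        for i, x in enumerate(c):
            if c[i - 1] > x > c[(i + 1) % len(c)]:
                count += 1
    return count

for n in range(2, 8):
    ders = [p for p in permutations(range(1, n + 1))
            if all(p[i] != i + 1 for i in range(n))]
    print(n, sum((-1) ** cddes(p) for p in ders))
# Expected: 1, 0, 5, 0, 61, 0 -- the secant numbers E_2, E_4, E_6 interleaved with zeros.
```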
A cyclic analogue of Theorem 3 for derangements
Theorem 7. We have
$$D^{(\mathrm{cpk},\mathrm{cdbl})}(s,t; x) = \frac{1}{1 - sV(s,t; x)}.$$

Proof. It suffices to show that, for $n \ge 1$,
$$D^{(\mathrm{cpk},\mathrm{cdbl})}_n(s,t) = \sum_{k \ge 1} s^k \sum_{B} \prod_{i=1}^{k} V_{|B_i|-1}(s,t), \qquad (4)$$
where the second sum is over all ordered set partitions $B$ of $[n]$ into parts $B_1, \ldots, B_k$. We interpret the right-hand side of (4) as in the proof of Theorem 3 (without the initial block $B_0$ containing an $\infty$) and apply the same sign-reversing involution; the objects that remain after cancellation are of the form
$$c_1 > c_2 > \cdots \mid \cdots \mid \cdots > c_n$$
with no dominoes and such that $c_i < c_{i+1}$ whenever $c_i$ and $c_{i+1}$ are in separate blocks. Now, rather than treating these remaining objects as permutations in one-line notation, we want to treat them as permutations in cycle notation with blocks corresponding to cycles. In doing so, we merge two adjacent blocks whenever the first element of the second block in the pair is not larger than all elements from all preceding blocks, i.e., whenever that element is not a left-to-right maximum of the underlying permutation written in one-line notation; this guarantees that the resulting permutations are correctly written in canonical cycle representation, and the procedure is clearly reversible.
Moreover, these permutations are derangements because each block has size at least 2, and they have no cyclic double ascents and have each cyclic double descent colored either red or blue (depending on the color of the corresponding monomino). In other words, these permutations $\pi$ are precisely the elements of $\widetilde{D}_n$, and each contributes a weight of $s^{\mathrm{cpk}(\pi)} t^{\mathrm{cdbl}(\pi)}$ to the right-hand side of Equation (4). The result then follows from applying the $(\mathrm{cpk}, \mathrm{cdbl})$-preserving bijection $\widetilde{\Phi}$.
We extend Theorem 7 to an analogous result (Theorem 8) for the joint distribution of the statistics cpk, cval, cdasc, and cddes over $D_n$.

Proof. First, observe that $\mathrm{cpk}(\pi) = \mathrm{cval}(\pi)$ for all derangements $\pi$. The cyclic valley-hopping map $\theta_S$, where $S$ is the set containing all cyclic double ascents and cyclic double descents of $\pi$, is a $(\mathrm{cpk}, \mathrm{cval})$-preserving involution on $D_n$ that switches cyclic double ascents with cyclic double descents. Thus, we have
$$D^{(\mathrm{cpk},\mathrm{cval},\mathrm{cdasc},\mathrm{cddes})}_n(s,t,u,v) = D^{(\mathrm{cpk},\mathrm{cdbl})}_n\!\left(st,\, \frac{u+v}{2}\right)$$
for all $n \ge 1$, which along with Theorem 7 proves the result.
Counting derangements by excedances
In the remainder of this section, we examine specializations of Theorem 8 that give rise to formulas for individual cyclic statistics, beginning with the excedance number. The excedance polynomials $D_n(t) := D^{\mathrm{exc}}_n(t)$ have been well-studied; for example, it is known that they have exponential generating function
$$\sum_{n=0}^{\infty} D_n(t) \frac{x^n}{n!} = \frac{(1-t)\,e^{-tx}}{1 - te^{(1-t)x}}$$
[9] and are $\gamma$-positive [1,12,16]. From Theorem 8 we obtain the following.

Corollary 9. $D(t; x) = \dfrac{1}{1 - tV(t, (1+t)/2; x)}$.

It follows from Corollary 9 that the exponential generating function for the excedance polynomials $D(t; x)$ evaluated at $t = -1$ is the reciprocal of the exponential generating function $F(x) = \cosh(x)$ for the sequence $1, 0, 1, 0, \ldots$.
This identity can be used to rederive the classical result due to Roselle [9] that $D_n(-1) = (-1)^{n/2} E_n$ if $n$ is even and $D_n(-1) = 0$ if $n$ is odd. Let $W_n(t) := V_n(t, (1+t)/2)$. By the definition of $V_n(s,t)$, the polynomials $W_n(t)$ satisfy the recurrence $W_n(t) = (1+t)W_{n-1}(t) - tW_{n-2}(t)$ for $n \ge 2$ with initial values $W_0(t) = 0$ and $W_1(t) = 1$. From this recurrence, it is easily seen that
$$W_n(t) = 1 + t + \cdots + t^{n-1} \qquad (8)$$
for $n \ge 1$. The characterization of $W_n(t)$ provided by Equation (8) makes the expression in Corollary 9 fully explicit.
Counting derangements by cyclic peaks
Next, we examine the distribution of the cyclic peak number over derangements.
Corollary 11. $D^{\mathrm{cpk}}(t; x) = \dfrac{1}{1 - tV(t, 1; x)}$.

By specializing (5) appropriately, we derive the formula
$$D^{\mathrm{cpk}}(t; x) = \frac{e^{-x}}{\cosh(\sqrt{1-t}\,x) - \frac{1}{\sqrt{1-t}}\sinh(\sqrt{1-t}\,x)}.$$
The first several polynomials $D^{\mathrm{cpk}}_n(t)$ are given in the following table:

n            : 2    3     4          5           6
D^cpk_n(t)   : t    2t    4t + 5t²   8t + 36t²   16t + 188t² + 61t³
Proposition 12. For all $n \ge 2$, the number of derangements in $D_n$ with exactly one cyclic peak is $2^{n-2}$.
Proof. It is easy to see that every derangement $\pi$ of $[n]$ with exactly one cyclic peak has exactly one cycle and can be written in the form $(c_1 c_2 \cdots c_k c_{k+1} \cdots c_n)$, where $c_1 = n$ is the only cyclic peak of $\pi$, the sequence $c_2 \cdots c_k$ is decreasing (with $c_k = 1$), and the sequence $c_{k+1} \cdots c_n$ is increasing. Thus, for every letter $i$ between 2 and $n-1$, either $i$ belongs to the decreasing sequence or to the increasing sequence, and these $n-2$ binary choices completely determine the derangement $\pi$.
It follows from Corollary 11 that the exponential generating function for the numbers $D^{\mathrm{cpk}}_n(-1)$ is the reciprocal of the exponential generating function $G(x)$ for the Pell numbers.
We do not have a combinatorial interpretation for the numbers $D^{\mathrm{cpk}}_n(-1)$ themselves. The first several of these numbers appear in the following table:

n              : 2    3    4   5    6
D^cpk_n(-1)    : -1   -2   1   28   111
Counting derangements by cyclic double descents
Lastly, we study the distribution of the cyclic double descent number over derangements.
The exponential generating function formula
$$D^{\mathrm{cddes}}(t; x) = \frac{e^{-(1+t)x/2}}{\cosh\!\big(\tfrac{\sqrt{\beta}}{2}\,x\big) - \tfrac{1+t}{\sqrt{\beta}}\sinh\!\big(\tfrac{\sqrt{\beta}}{2}\,x\big)},$$
where $\beta = (t+3)(t-1)$, can be obtained by specializing (5). The first several of the polynomials $D^{\mathrm{cddes}}_n(t)$ appear in the following table:

n              : 2   3       4             5
D^cddes_n(t)   : 1   1 + t   6 + 2t + t²   19 + 21t + 3t² + t³
For the purpose of this proof, let us temporarily modify our convention for cycle notation so that we write each cycle with its smallest letter in the first position, and write the cycles from left to right in decreasing order of their smallest letters. For example, whereas we previously wrote $\pi = 649237185$ as $\pi = (42)(716)(8)(953)$ in canonical cycle representation, now we write $\pi$ as $\pi = (8)(395)(24)(167)$. Let $o' : S_n \to S_n$ be the map defined by taking a permutation in cycle notation under this new convention and erasing the parentheses, yielding a permutation in one-line notation. This is a bijection; we can recover the cycles of $\pi$ by noting the left-to-right minima of $o'(\pi)$: given a permutation $\sigma = \sigma_1 \sigma_2 \cdots \sigma_n$, we say that $\sigma_i$ is a left-to-right minimum of $\sigma$ if $\sigma_j > \sigma_i$ for all $1 \le j < i$.

Proposition 15. A letter $i \in [n]$ is a fixed point or cyclic double descent of $\pi \in S_n$ if and only if $i$ is a short run of $o'(\pi)$.
In particular, this proposition implies that the number of derangements of [n] with no cyclic double descents is equal to the number of permutations of [n] with no short runs.
Proof. Write $\sigma = o'(\pi)$ and suppose that $\sigma_j = i$. We divide into cases. First, suppose that $i \in [n]$ is a fixed point of $\pi$. Then $\sigma_j$ and $\sigma_{j+1}$ are both left-to-right minima, so $\sigma_{j-1} > \sigma_j > \sigma_{j+1}$. Now suppose that $i \in [n]$ is a cyclic double descent of $\pi$. Note that the first letter of a cycle (under our current convention) cannot be a cyclic double descent. If $i$ is neither the first nor the last letter of its cycle in $\pi$, then $\sigma_{j-1} > \sigma_j > \sigma_{j+1}$. Otherwise, if $i$ is the last letter of its cycle in $\pi$, then $\sigma_{j-1} > \sigma_j$ and $\sigma_{j+1}$ is a left-to-right minimum; thus $\sigma_{j-1} > \sigma_j > \sigma_{j+1}$. In each case, it follows that $\sigma_j = i$ is a short run of $o'(\pi)$. Hence, every fixed point and cyclic double descent of $\pi$ is a short run of $o'(\pi)$; the reverse direction is similar.
Finally, we give a cyclic analogue of Theorem 6 for derangements: for all $n \ge 1$, we have $D^{\mathrm{cddes}}_n(-1) = E_n$ if $n$ is even, and $D^{\mathrm{cddes}}_n(-1) = 0$ if $n$ is odd. In other words, the $(-1)$-evaluation of the cyclic double descent distribution over $D_{2n}$ gives the secant numbers $E_{2n}$.
Proof. By comparing Theorem 7 with Corollary 14, we have $D^{\mathrm{cddes}}_n(-1) = D^{(\mathrm{cpk},\mathrm{cdbl})}_n(1, 0)$ for all $n \ge 1$. Observe that $D^{(\mathrm{cpk},\mathrm{cdbl})}_n(1, 0)$ is the number of derangements of $[n]$ with no cyclic double ascents or cyclic double descents. Because the number of cyclic peaks of any permutation is equal to its number of cyclic valleys, it is evident that there are no such permutations for odd $n$, and it is easy to see that the map $o$ defined earlier is a bijection between such permutations for even $n$ and the reverse-alternating permutations of $[n]$: permutations $\pi = \pi_1 \pi_2 \cdots \pi_n$ satisfying $\pi_1 > \pi_2 < \pi_3 > \pi_4 < \cdots > \pi_n$. Since there are $E_n$ reverse-alternating permutations in $S_n$, the proof follows.
"Mathematics"
] |
Discovery of new therapeutic targets in ovarian cancer through identifying significantly non-mutated genes
Mutated and non-mutated genes interact to drive cancer growth and metastasis. While research has focused on understanding the impact of mutated genes on cancer biology, understanding non-mutated genes that are essential to tumor development could lead to new therapeutic strategies. The recent advent of high-throughput whole-genome sequencing applied to many different samples has made it possible to determine whether genes are significantly non-mutated in a specific cancer patient cohort. We carried out random mutagenesis simulations of the human genome approximating the regions sequenced in the publicly available Cancer Genome Atlas project for ovarian cancer (TCGA-OV). Simulated mutations were compared to the observed mutations in the TCGA-OV cohort, and genes with the largest deviations from simulation were identified. Pathway analysis was performed on the non-mutated genes to better understand their biological function. We then compared the gene expression, methylation, and copy number distributions of non-mutated and mutated genes in cell lines and patient data from the TCGA-OV project. To directly test whether non-mutated genes can affect cell proliferation, we carried out proof-of-concept RNAi silencing experiments on a panel of nine selected non-mutated genes in three ovarian cancer cell lines and one primary ovarian epithelial cell line. We identified a set of genes that were mutated less than expected (non-mutated genes) and mutated more than expected (mutated genes). Pathway analysis revealed that non-mutated genes interact in cancer-associated pathways. We found that non-mutated genes are expressed significantly more than mutated genes while also having lower methylation and higher copy number states, indicating that they could be functionally important. RNAi silencing of the panel of non-mutated genes resulted in a greater significant reduction of cell viability in the cancer cell lines than in the non-cancer cell line. Finally, as a test case, silencing ANKLE2, a significantly non-mutated gene, affected the morphology, reduced migration, and increased the chemotherapeutic response of SKOV3 cells. We show that we can identify significantly non-mutated genes in a large ovarian cancer cohort that are well-expressed in patient and cell line data and whose RNAi-induced silencing reduces viability in three ovarian cancer cell lines. Targeting non-mutated genes that are important for tumor growth and metastasis is a promising approach to expand cancer therapeutic options.
Ovarian cancer is ranked 5th overall for cancer death in women [1-3]. The Cancer Genome Atlas (TCGA) program performed a comprehensive "omics" characterization of high-grade serous ovarian cancer (HGS-OvC) in the TCGA-OV project. It studied 489 ovarian cancer samples, integrating copy number variation, transcriptomic, methylation array, and micro-RNA expression data, and performed exome sequencing for 316 of the samples [4]. Patients in the TCGA-OV project had advanced primary ovarian cancer, with 5% diagnosed at stage 2, 79% at stage 3, and 16% at stage 4. TCGA-OV researchers identified mutations that are important in ovarian tumors by comparing pathogenic variants to those found in the Catalogue of Somatic Mutations in Cancer (COSMIC) and Online Mendelian Inheritance in Man (OMIM), and by predicting the mutations' impacts on protein function. The TCGA-OV study further analyzed the significance of all mutated genes compared to the background mutation rate (BMR), which represents the rate of random mutation. These estimates assume that most observed mutations are neutral and do not confer any selective advantage or disadvantage [5,6].
The estimation of gene-mutation significance done by the TCGA-OV research network relied mainly on frequency-based criteria, where a gene is identified as carrying a driver mutation if it is altered in significantly more patients than expected based on the background model. Mutations in some genes, such as TP53, are detected in large populations across different cancers, whereas other mutations occur at low rates in cancers. For each gene, they calculated the probability of observing the identified set of mutations and reported nine significantly mutated genes out of the 9,986 observed mutated genes. TP53 was found to be mutated in more than 96% of all samples, as previously reported [7-10]. BRCA1/2 variants were also found in 22% of tumors (a combination of germline variation and somatic mutations). The TCGA-OV group also identified significantly mutated genes that occur at a low frequency, in only 2-6% of tumor samples: RB1, NF1, FAT3, CSMD3, GABRA6, and CDK12 [4].
While the characterization of the spectrum of somatic mutations in ovarian cancer by the TCGA-OV study has had a high impact, cancer arises from a complex interplay between genes in cells and environmental factors [11], and both mutated and non-mutated genes interact to enable the acquisition of the hallmarks of cancer [12]. Understanding which non-mutated genes are important for tumors could lead to the development of new and more effective drug targets. Most studies have focused only on mutated genes because it is difficult to assign significance to non-mutated genes, since most genes in a single patient are non-mutated. However, using high-throughput sequencing data from many patients, it is possible to estimate the significance of non-mutated genes by comparing observed mutation frequencies to expected mutation frequencies and identifying genes with lower mutation frequencies than expected.
In this study, we used a computational biology approach and set up in-silico mutagenesis experiments. This allowed us to identify a subset of genes that had fewer mutations in the observed data than expected from the simulated data, which we call non-mutated genes. We hypothesized that non-mutated genes are essential to tumor function. Pathway analysis showed that non-mutated genes interact in cancer-related pathways. Gene expression studies showed that non-mutated genes are well-expressed in cell lines and in ovarian cancer tissues from patients. We also verified the relevance of these genes to tumor biology using proof-of-concept siRNA-based experiments. We conclude that non-mutated genes are potentially important for ovarian cancer tumor biology and could lead to new therapeutic strategies.
In-silico mutagenesis approach
We obtained somatic mutation data from the TCGA Ovarian Cancer Project from the GDC Data Portal (https://portal.gdc.cancer.gov/). We implemented a method to efficiently simulate mutations across a set of nucleotide sequences in Matlab, as previously described in Malek, Halabi and Rafii [13]. The TCGA-OV data consisted of 316 patients, so each simulation run comprised 316 simulated patients. Since mutations were random, each run of 316 patients was also repeated 100 times, for a total of 31,600 simulated patient mutageneses. Each simulation run consisted of simulating the mutagenesis of 140,362,938 nucleotide bases. Furthermore, since different bases undergo different mutation rates, it was necessary to implement a way to differentially mutate different sets of nucleotide bases. The sequence space was therefore divided into nucleotide bases that were (1) A or T, (2) C or G, (3) CG or GC. Mutations at these different sets were assigned different mutation rates. We used the background mutation rates (BMR) published in the TCGA study [4] in Additional file 1: Table S2.2b (A/T mutations: 8.54 × 10⁻⁷, C/G mutations: 1.2 × 10⁻⁶, CG/GC mutations: 4.31 × 10⁻⁶, and insertions-deletions at 2.2 × 10⁻⁷). Since no information about insertion-deletion sequence specificity was available, we added the indel mutation rate to the other categories. Three random mutation vectors were generated, each of a length equal to the number of bases in the A/T, C/G, and CG/GC sets. Each random mutation vector consisted of 0's and 1's, with 1's occurring randomly at a density equal to the TCGA published background mutation rate. The three mutation vectors were then combined to form the final mutation vector containing all simulated mutations. We used a reduced sequence library corresponding to the sequences that overlapped with the Agilent SureSelect v2 probe sequences. This reduced library was generated by obtaining the chromosomal locations of each probe from Agilent. We then identified the regions corresponding to those probes by detecting the overlaps between the exon coordinates and the probe coordinates. The final exon sequence library consisted of 40,362,938 bases. After carrying out simulated mutagenesis, the total number of mutations per gene was calculated by identifying all the exons corresponding to a gene. All exons sharing the same gene symbol were considered the same gene.
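The differential-rate simulation described above can be sketched in a few lines of code. The fragment below is a NumPy stand-in for the Matlab implementation and is illustrative only: the function name, the base-class encoding, and the reduced library size used for the demo are our own; only the three rate categories and the added indel rate come from the text.

```python
import numpy as np

# Background mutation rates from the TCGA study (per base): A/T, C/G, CG/GC.
RATES = np.array([8.54e-7, 1.2e-6, 4.31e-6])
INDEL = 2.2e-7  # indel rate, added to every category as described above

def simulate_patient(base_class, rng):
    """Return a boolean 0/1 mutation vector for one simulated patient.

    base_class assigns each exon base to a rate category
    (0 = A/T, 1 = C/G, 2 = CG/GC context); how the classes are derived
    from the actual exon sequences is assumed here.
    """
    rates = RATES + INDEL
    return rng.random(base_class.size) < rates[base_class]

rng = np.random.default_rng(0)
# Stand-in for the 40,362,938-base exon library (kept small here for speed).
base_class = rng.integers(0, 3, size=1_000_000)
mutations = simulate_patient(base_class, rng)
print(mutations.sum())  # roughly rate x library size mutations per patient
```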
Identification and pathway analysis of specific candidate genes
With the ability to calculate the simulated mutation frequency for each gene, it is possible to compare the observed mutation frequency of a gene with the expected (simulated) mutation frequency. To prioritize the genes with the largest deviations from the random simulation expectation, we ranked genes by the ratio of observed to expected (simulated) mutation frequency and examined the top 50 genes at each extreme, i.e., those whose observed mutation rate was most below or most above the expected rate. To guarantee coverage in the TCGA-OV dataset, we restricted our analysis to genes that were mutated at least once in the TCGA data, since the publicly available data only included a list of mutations per patient and not the coverage across all positions. We also performed pathway analysis using Ingenuity Pathway Analysis software (IPA from Qiagen, content version March 12, 2022). IPA consists of a database of published relationships between genes, with tools to analyze and visualize pathways. A list of the top 50 non-mutated genes with the observed/expected ratio of each gene was generated and uploaded to IPA. Network diagrams were generated among the genes in the list, with genes colored in shades of red based on their observed/expected ratio, the reddest indicating the lowest ratio. Networks were either built using the IPA tools (CONNECT, PATHWAY EXPLORER, TRIM, KEEP) or identified automatically by IPA software, as indicated. The significance of automatically generated networks was assessed with an IPA-generated score, which represents the negative exponent of the right-tailed Fisher's exact test result (described in the IPA documentation: http://qiagen.secure.force.com/KnowledgeBase/articles/Basic_Technical_Q_A/Listing-of-Networks).
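As a sketch of the ranking step, the fragment below computes the observed/expected ratio per gene and returns the k genes with the lowest ratios (the candidate non-mutated genes). All variable and function names are hypothetical; the filter keeping only genes mutated at least once mirrors the restriction described above.

```python
import numpy as np

def least_mutated(observed, expected, names, k=50):
    """Rank genes by observed/expected mutation count, lowest ratio first.

    observed: observed mutation counts per gene (TCGA data)
    expected: mean simulated mutation counts per gene
    names:    gene symbols aligned with both arrays
    """
    keep = observed >= 1                 # genes mutated at least once in TCGA
    ratio = observed[keep] / expected[keep]
    kept_names = np.asarray(names)[keep]
    order = np.argsort(ratio)            # ascending: most "under-mutated" first
    return list(zip(kept_names[order[:k]], ratio[order[:k]]))
```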
Gene expression analysis of cell Lines and TCGA-OV patient data
RNA from six different cell lines was isolated using the Qiagen Allprep DNA/RNA miniprep kit as per the manufacturer's instructions. Library preparation was done with Nugen's Ovation Single Cell RNA-Seq System. Sequencing (Illumina 100 bp paired-end reads) was done on an Illumina HiSeq 2500. Alignment to GRCh37 was done with RNA STAR [16]. Mapping to genes was done with Rsubread using the featureCounts function [17]. Normalization and quantification of gene expression were done with edgeR [18]. All genes with any read count were included. The reads per kilobase of transcript per million mapped reads (RPKM) measure was calculated for all genes in all cell lines and used for distribution comparison.
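As a minimal illustration of the RPKM measure used for the distribution comparison, the snippet below computes RPKM from a count matrix. The actual analysis used edgeR; the toy matrix and length vector here are placeholders.

```python
import numpy as np

def rpkm(counts, gene_length_bp):
    """RPKM from a genes x samples count matrix and per-gene lengths (bp)."""
    library_millions = counts.sum(axis=0) / 1e6  # mapped reads per sample, in millions
    length_kb = gene_length_bp[:, None] / 1e3    # transcript length in kilobases
    return counts / library_millions[None, :] / length_kb

counts = np.array([[500.0, 300.0], [50.0, 80.0]])  # toy 2-gene, 2-sample matrix
lengths = np.array([2000.0, 500.0])
print(rpkm(counts, lengths))
```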
Publicly available gene expression data from the TCGA-OV project were downloaded from the GDC data portal (https://portal.gdc.cancer.gov/legacy-archive) using the following filters: Primary-site = Ovary, Data-category = Gene expression, and Platform = HT_HG-U133A. These data consisted of gene-level, robust multi-array analysis (RMA) normalized and background-corrected expression values for 12,042 genes from primary ovarian cancer biopsies. The RMA values were used as provided. The gene expression data files were further filtered to include only those that had matching somatic mutation data. Somatic mutation data were similarly obtained from the GDC data portal (https://portal.gdc.cancer.gov). The intersection between gene expression and somatic mutation data files resulted in 315 samples for further expression analysis.
Custom scripts in Matlab software (Mathworks) were used for further analysis and visualization. To analyze the distribution of gene expression of both cell lines and TCGA-OV we used the non-parametric, two-sample Kolmogorov-Smirnov test as implemented in Matlab software (version 2019a).
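An equivalent two-sample Kolmogorov-Smirnov test is available in SciPy, which serves as a Python counterpart to the Matlab routine used here; the lognormal samples below merely stand in for the pooled expression values of the two gene sets.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
# Stand-ins for pooled expression of the top 50 non-mutated / mutated genes.
expr_nonmutated = rng.lognormal(mean=2.0, sigma=1.0, size=5000)
expr_mutated = rng.lognormal(mean=1.5, sigma=1.0, size=5000)

stat, p = ks_2samp(expr_nonmutated, expr_mutated)
print(f"KS statistic = {stat:.3f}, p = {p:.3g}")
```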
Methylation and copy number analysis of TCGA-OV primary ovarian cancer samples
We downloaded from the GDC data portal (data release 32) methylation beta value data obtained from the Illumina Human Methylation 27 chip for 605 samples. Beta values represent the fraction of methylation at a specific site, with 0 representing no methylation and 1 representing complete methylation. We then excluded non-primary and non-cancer samples, which resulted in 582 samples for further analysis. We aggregated the beta value data from all patients for the top 50 non-mutated genes (41 matches) and the top 50 mutated genes (33 matches) and compared their distributions using the two-sample Kolmogorov-Smirnov test implemented in Matlab software (version 2021a). When one gene matched multiple methylation sites, the beta values were aggregated across the gene.
Similarly, for copy number analysis we downloaded from the GDC data portal (data release 32) 'Gene Level Copy Number' data obtained from the Affymetrix SNP 6.0 array for 589 samples. We excluded non-primary cancer samples to obtain 562 samples for comparison. We matched 48 of the top 50 non-mutated genes and 43 of the top 50 mutated genes and aggregated all the copy number data across all samples. Distributions were compared using the non-parametric, two-sample Kolmogorov-Smirnov test as implemented in Matlab software (version 2021a).
To assess the degree of knockdown, cells were seeded in 96-well culture plates at a density of 5000 cells/well. cDNA synthesis was carried out 72 h after siRNA transfection using the TaqMan Gene Expression Cells-to-Ct kit (Thermo Fisher). Normalization was done using the β-actin probe included in the Cells-to-Ct Control kit (Thermo Fisher). All qPCR reactions were performed in triplicate and Cq values were averaged.
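The text does not spell out the quantification formula, but a standard way to express knockdown from Cq values normalized to β-actin is the 2^-ΔΔCq method sketched below; all Cq values shown are made up for illustration.

```python
def relative_expression(cq_gene_kd, cq_actb_kd, cq_gene_nc, cq_actb_nc):
    """Expression of the knockdown sample relative to the negative control,
    normalized to the beta-actin reference (2^-ddCq method, assumed here)."""
    dd_cq = (cq_gene_kd - cq_actb_kd) - (cq_gene_nc - cq_actb_nc)
    return 2 ** (-dd_cq)

# Hypothetical averaged triplicate Cq values for one siRNA knockdown:
remaining = relative_expression(26.1, 17.0, 24.0, 17.1)
print(f"{remaining:.2f} of control expression "
      f"(~{(1 - remaining) * 100:.0f}% knockdown)")
```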
Cell viability assay
We used Promega's CellTiter-Glo® assay in 96-well plates. Briefly, cells were seeded at 5000 cells per well in 96-well plates and allowed to attach overnight at 37 °C. Twenty-four hours after attachment, cells were transfected with individual siRNAs at 10 nM using Lipofectamine Max (Thermo Fisher). Twenty-four hours after siRNA treatment, the transfection medium was replaced with serum-free medium. We used the same siRNA concentrations and transfection reagents in all cell lines and experiments. In addition, positive and negative siRNA controls were added in separate wells. Seventy-two hours after transfection, 100 μl of CellTiter-Glo® reagent was added to the 100 μl of medium containing cells in each well, and viability was evaluated using the EnVision Workstation version 1.12 from PerkinElmer. All experiments were performed in triplicate. Student's t-test was used to compare the proliferation fraction of the knockdowns with that of the negative control. A p-value less than 0.05 was considered significant.
Morphological marker staining
Seventy-two hours after transfection, cells were incubated with Invitrogen's live-cell stains, CellMask Orange/Red for the plasma membrane and Hoechst 33342 for the nucleus. Cell and nuclear morphology were visualized by confocal microscopy (Zeiss LSM 510).
Wound healing assay
Cancer cells (50,000 cells/well) were plated in 24-well plates in triplicate. Twenty-four hours after siRNA transfection, cells were starved of serum. Forty-eight hours after siRNA transfection, a scratch was made in all wells with a 1 μL pipette tip. Images were taken directly after the scratch (0H) and again after 24 h (24H) and 48 h (48H). Wound edges were identified by manual inspection, and wound healing was quantified as the ratio of the edge-to-edge pixel distance at each timepoint relative to the 0H distance. Student's t-test was used to assess the significance of differences between the ANKLE2 knockdown and the control by combining the 24H and 48H data, as sketched below.
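The following sketch reproduces the quantification and test with hypothetical pixel distances; the ratio-to-0H measure and the pooling of the 24H and 48H data follow the description above, while all numbers and names are invented.

```python
import numpy as np
from scipy.stats import ttest_ind

def healing_ratio(d0, dt):
    """Wound width at time t relative to the width at 0H (per well)."""
    return np.asarray(dt, float) / np.asarray(d0, float)

# Hypothetical triplicate edge-to-edge pixel distances.
kd = np.concatenate([healing_ratio([400, 410, 395], [352, 361, 344]),   # 24H
                     healing_ratio([400, 410, 395], [305, 312, 290])])  # 48H
ctrl = np.concatenate([healing_ratio([405, 398, 402], [242, 251, 236]),
                       healing_ratio([405, 398, 402], [121, 133, 110])])
stat, p = ttest_ind(kd, ctrl)
print(f"t = {stat:.2f}, p = {p:.3g}")  # higher ratio = slower wound closure
```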
Chemotherapy
Paclitaxel/taxol and carboplatin were purchased from the National Center for Cancer Care and Research (NCCCR; Doha, Qatar) pharmacy. Briefly, cancer cells (5000 cells/well) were plated in 96-well plates in triplicate for each condition. Twenty-four hours after siRNA treatment, the cells were starved of serum. Forty-eight hours after siRNA transfection, each drug suspended in phosphate-buffered saline (PBS) was added to each well at a concentration of 50 µM, and viability was analyzed after 24 h. Student's t-test was used to compare the proliferation fractions of different pairs. A p-value less than 0.05 was considered significant.
In-silico identification and pathway analysis of non-mutated genes
We obtained the publicly available mutation data from the TCGA-OV project as described in the methods. The mutation data consisted of somatic mutations from ovarian cancer tissues in 316 patients. To determine which, if any, genes were potentially significantly non-mutated, we performed simulated mutagenesis on a reduced exon sequence library (Fig. 1a). We then compared the simulated mutagenesis results to the observed mutation data.
Our comparison of simulated to observed mutations showed that most genes had a mutation rate similar to what is expected at random, as the observed/simulated ratio was close to 1:1 for the vast majority of genes (Fig. 1b). However, a few genes were observed with mutation rates both higher and lower than expected (Fig. 1b, Table 1, Additional file 1: Table S1a). Among well-known genes, TP53 and MMP8 are mutated the most (Additional file 1: Table S1b). Notably, among the least-mutated genes is vascular endothelial growth factor (VEGFA), a molecule with an established role in tumor angiogenesis [12]. We then conducted a detailed literature search on the top 20 genes mutated less than expected to understand whether they could play an important role in tumor biology (Table 1). Although we found no relevant cancer literature for several of these genes, others were found to be highly interesting in terms of cell biology, including TRAPPC9, which plays a role in NF-κB signaling, ANKLE2, which plays a role in mitosis, and VEGFA, which plays a role in angiogenesis (references provided in Table 1).
To understand further whether these genes act independently or as parts of pathways, we performed network analysis on the set of the top 50 non-mutated genes. We observed, with a constructed pathway, that ten of these genes interact directly or indirectly through the AKT and NF-κB pathways (Fig. 2). Activation of the AKT/NF-κB pathways is associated with resistance to therapy in advanced ovarian cancer [14,96]. IPA also automatically identified a set of genes that significantly interact with the NF-κB pathway, shown as network 2 in Additional file 1: Figure S1a. Furthermore, three additional networks were automatically identified by IPA among the top 50 non-mutated genes (Additional file 1: Figure S1a). We show one network consisting of 12 genes, including ANKLE2, SHROOM3 and ELFN2, interacting through direct connections with VIRMA, a nuclear/cytosolic protein involved in RNA methylation/adenylation and implicated in different cancers [97,98]. These results show that the identified non-mutated genes interact in cancer-related biological pathways.
Gene expression, methylation and copy number analysis of mutated and non-mutated genes
To investigate further the biological relevance of our computational findings, we assayed gene expression in multiple ovarian cancer and normal cell lines using RNA sequencing (Additional file 1: Figures S2 and S3). The set of genes mutated less than expected was expressed at a higher level than the set of genes mutated more than expected (Fig. 3a). We confirmed similar findings using the TCGA-OV patient data (Fig. 3b), which consist of gene expression data from 315 ovarian cancer biopsies from 315 different patients. To determine whether a mutation itself can affect gene expression, we compared expression distributions between mutated and non-mutated samples of the same gene in the TCGA-OV data; no significant difference between mutation state and expression was observed for 25 out of 28 genes (Additional file 1: Figure S4). The best example is TP53, the most mutated gene, for which the expression levels of mutated and non-mutated samples have a similar distribution (Additional file 1: Figure S4). We also compared the expression of non-mutated genes between cancer and normal cell lines (Fig. 3c) and the expression of mutated genes between cancer and normal cell lines (Fig. 3d). We found no significant difference in either comparison, in contrast to the significant differences between the gene sets. Moreover, we searched across the top 50 non-mutated genes shown in Additional file 1: Figure S2 for genes whose expression is low in non-cancer cells and high across all cancer cells. We could not identify such genes, but we did identify several non-mutated genes whose expression was low in the non-cancer cell lines and high in three out of the four cancer cell lines. These genes include ELFN2, CELSR1 and TRPV6. We therefore conclude that non-mutated genes could play a role in tumor biology given their relatively high expression compared to that of the most mutated genes.
We then performed an analysis of methylation states, comparing the aggregated methylation beta values of non-mutated and mutated genes using the TCGA-OV data as described in the Methods. As shown in Additional file 1: Figure S5a, non-mutated genes are overall significantly less methylated than mutated genes. Non-mutated genes show higher peaks than mutated genes at low beta values (0 to 0.2), while mutated genes show higher peaks than non-mutated genes at high beta values (0.8-1). Following the methylation analysis, we also performed copy-number analysis on the same TCGA-OV dataset, as shown in Additional file 1: Figure S5b. Here, we also find a significant difference between non-mutated and mutated genes, with non-mutated genes more often having a copy number greater than 2 and mutated genes more often having a copy number less than 2. The methylation and copy number results are consistent with the gene expression results: lower methylation and higher copy number are associated with greater gene expression, which is what we observe when comparing non-mutated genes to mutated genes.
Fig. 3 Gene expression distributions of non-mutated and mutated genes. Distribution of gene expression of non-mutated and mutated genes for our cell line data (a) and for patient data from the TCGA-OV project (b). In blue is the expression distribution of the top 50 non-mutated genes and in orange the distribution of the top 50 mutated genes. Note that the expression of non-mutated genes is significantly higher than that of mutated genes in both the cell line and patient data. Distribution of gene expression of non-mutated genes (c) and mutated genes (d) in non-cancer and cancer cell lines. Note that there is no significant difference in gene expression between non-cancer and cancer cell lines for either mutated or non-mutated genes. The inset in each panel shows the cumulative density plot of the same data.
Functional effect of siRNA knockdown of non-mutated genes
To determine whether non-mutated genes could directly impact cancer cell line growth, we selected nine genes for proof-of-concept in-vitro gene silencing experiments. We selected seven genes among the top 10 ranked genes (ranks 1-4, 6-7, and 9), VEGFA (rank 14), as it is known to be involved in angiogenesis, and SLC12A9 (rank 41), as it is a plasma-membrane-embedded cation transporter that may be more easily targeted. We first confirmed that the transcripts were successfully knocked down, with knockdown levels greater than or equal to 50 percent relative to the negative control (Additional file 1: Figure S7). We then performed silencing experiments on these nine genes in three different ovarian cancer cell lines (Fig. 4) and one non-cancer ovarian epithelial cell line (Additional file 1: Figure S6). We found a significant reduction of viability following silencing of 7 out of 9 genes in SKOV3, 9 out of 9 genes in OVCAR, 9 out of 9 genes in APOCC, and 2 out of 9 genes in the non-cancer ovarian epithelial cell line. We therefore conclude that silencing these genes affects cancer cell lines significantly more than the non-cancer cell line.
To examine further functional effects, we selected ANKLE2, as it has been found to play a role in cell division [26,99,100]. Silencing of ANKLE2 resulted in significant morphologic changes in SKOV3 cells (Fig. 5a), which grew in packed, scattered colonies with a fibroblast-like shape before the knockdown. We also observed a significant reduction in migration in ANKLE2-siRNA SKOV3 cells using a scratch assay (Fig. 5b). Finally, we evaluated the impact of ANKLE2 knockdown on chemoresistance. SKOV3 cells are paclitaxel/taxol-resistant [101,102], but we found that ANKLE2 knockdown enhanced the cytotoxic effects of paclitaxel compared with negative controls, as shown in Fig. 5c. In contrast to the paclitaxel effects, no chemotherapeutic sensitization was observed with carboplatin (Fig. 5c).
Discussion
Here we focused on better understanding the role of non-mutated genes in ovarian cancer. We showed that for a few genes there are differences between simulated and observed mutation frequencies in the TCGA ovarian cancer cohort of 316 patients. These genes fell into two categories: genes that are mutated more than expected (such as TP53) and genes that are mutated less than expected, which we call non-mutated genes here. The non-mutated gene set was especially interesting because it could comprise genes selected against mutation due to their role in tumor biology, which could offer new therapeutic strategies. The TCGA study in ovarian cancer uncovered 9,984 genes mutated in 316 patients, with a very heterogeneous distribution among patients [94]. Not only are there many mutations but, with the exception of TP53, patients share few mutations. This mutational diversity makes treatment strategies that target mutated genes difficult, as every patient may have a different combination of mutated genes. However, treatment strategies that target non-mutated genes may be more effective, as these non-mutated genes would be the same in different patients if the observed non-mutation is due to selection against mutation. Indeed, one of the genes we identified as non-mutated is VEGFA, which is known to be involved in promoting cancer angiogenesis [65-67, 69, 103]. We found that the non-mutated genes are members of cancer-relevant networks. For example, SHROOM3 has a role in regulating cell shape in tissues [30] and is connected with the SNF complex, which mobilizes nucleosomes, remodels chromatin, and opens up transcription-binding domains, leading to an increase in transcription [104]. A growing body of studies supports the role of the SNF complex in cancer development, as several subunits possess intrinsic tumor-suppressor activity [105]. Furthermore, several non-mutated genes interact directly and indirectly with the AKT network, which modulates the function of numerous substrates involved in the regulation of cell survival, cell cycle progression, cellular growth, and neo-vascularization [106,107]. One of the most interesting genes we observed to be significantly non-mutated was ANKLE2, a member of the LEM family of inner nuclear membrane proteins. This gene functions as a mitotic regulator through the post-mitotic re-formation of the nuclear envelope [26]. Our inhibition strategy confirmed the important role of ANKLE2 in different tumor-associated phenotypic traits.
We observed that non-mutated genes are generally well expressed in both non-cancer and cancer tissues. This could limit the clinical use of targeting non-mutated genes, as there could be significant side effects from deleterious effects on non-cancer cells. However, targeting non-mutated genes could still be a viable strategy if cancer cells display greater sensitivity than non-cancer cells to inhibition of non-mutated genes. The greater sensitivity of cancer cells to radiation or chemotherapy compared to non-cancer cells has resulted in the wide use of these treatment modalities, albeit with significant side effects. We have performed one experiment showing that non-cancer ovarian epithelial cells are less susceptible than cancer cell lines to the effects of silencing in our viability assay. While these results are promising, they need to be further validated across different cell lines and especially across different cellular contexts. Cells grown in 2D monocultures are very different from cells grown in co-culture with other cells or in 3D organoids, and from cells in tissues. It will be interesting to explore the differential sensitivity of cancer and non-cancer cells to non-mutated gene inhibition in future studies. Furthermore, we identified several non-mutated genes for which three out of four cancer cell lines had high expression while expression was low in non-cancer cells. These genes may be interesting therapeutic targets if this pattern is also seen across more cancer and non-cancer cells, as targeting them could result in reduced toxicity to non-cancer cells.
A related point that could affect therapeutic effectiveness is that these genes might be non-mutated because they are housekeeping genes, for which any mutation would be highly deleterious to all cells. Commonly known housekeeping genes include ACTB, which is part of the actin protein family, RAB7A, which belongs to the RAS oncogene family, and GAPDH (glyceraldehyde 3-phosphate dehydrogenase). In our study, none of these displayed selection against mutation, and to our knowledge the top 50 non-mutated genes we identified are not classical housekeeping genes.
Our analysis is novel, as most studies have focused on mutated genes. Further data could help refine our analysis, as we used only the restricted publicly available datasets in this work. Sequencing with better coverage, such as whole-genome sequencing, would improve this analysis, since we would obtain a much better coverage distribution. In addition, it would be interesting in future studies to develop single-cell and deep sequencing experiments in rapidly dividing cancer cells in culture across different time points to determine the distribution of mutations, including low-frequency mutations. With sufficient coverage, it would also be possible to determine whether there are significantly non-mutated genes in this context. Comparing our data to the TCGA-OV study [4] shows that the top 50 mutated genes we identified include two of the nine genes the TCGA identified as significant: TP53 and RB1. The TCGA-OV used complex statistical models considering sequence context in addition to the overall prevalence of mutations. Our random mutation model here is relatively simple, and the mutation probability of a specific base is independent of any other base. One possibility this limitation raises is that genes can appear non-mutated not because they are selected against but because they have a sequence context that greatly reduces the chance of mutation. While such mutation-resistant genes could still make interesting targets if cancer cells are sensitive to them, their identification would require both high-coverage data and improved mutation simulation models. Our overall approach here was to combine random simulation results with pathway analysis, gene expression, and functional testing of selected genes.
In this study, we exploited large-scale cancer genomic databases and bioinformatics approaches to discover novel therapeutic candidates. Our combined bioinformatics and silencing approach could potentially lead to discoveries of interesting candidates without the need for complex, costly, high-throughput screening approaches. Understanding the broader landscape of non-mutated genes using combined TCGA datasets could lead to understanding key selection processes in place in cancer evolution and identifying critical steps that could be used as therapeutic targets.
Conclusions
While extensive cancer research has focused on understanding genes whose mutations are selected for (mutated genes), comparatively little is known about genes whose mutation is selected against (non-mutated genes). Identifying non-mutated genes could lead to new therapies as non-mutated genes could be important for cancer survival and growth. We first identified potential non-mutated genes by comparing mutations observed in an ovarian cancer cohort with mutations expected in a random mutagenesis model and selecting genes with the greatest difference from random expectation. We then found that non-mutated genes interact in known pathways and are well-expressed in cell lines and patient tumors suggesting functional importance. Finally, we found that when we reduced the expression of selected non-mutated genes in ovarian cancer cell lines, the growth of all the cell lines was significantly reduced. This study is a first proof-of-concept showing that targeting non-mutated genes is a plausible cancer therapy approach.
Additional file 1: Table S1. Top 20 genes mutated more and less than random expectation; Figure S1. Automated pathway analysis of non-mutated genes; Figure S2. Expression of top 50 genes mutated less than expected in different cell lines; Figure S3. Expression of top 50 genes mutated more than expected in different cell lines; Figure S4. Gene-level expression distribution in TCGA-OV patients in mutated and non-mutated samples; Figure S5. Methylation and copy number analysis in TCGA-OV data; Figure S6. Cell viability in non-cancer ovarian epithelial cells; Figure S7. Knockdown efficiency. | 7,358.4 | 2022-05-26T00:00:00.000 | [
"Medicine",
"Biology"
] |
An effective method for small objects detection based on MDFFAM and LKSPP
Object detection is one of the research hotspots in computer vision. However, most existing object detectors struggle to identify small targets. This paper therefore proposes two modules, MDFFAM (Multi-Directional Feature Fusion Attention Mechanism) and LKSPP (Large Kernel Spatial Pyramid Pooling), to enhance a detector's effectiveness in identifying subtle faults on the surface of mechanical equipment. LKSPP aims to expand the receptive field through large kernels to capture high-level semantic features. Meanwhile, MDFFAM allows the network to efficiently utilize spatial location information and adaptively recognize detection priorities. In the detection task, MDFFAM effectively captures feature information in three spatial directions (width, height, and channel), with the location information fully utilized to establish stable long-range dependencies. Moreover, LKSPP offers a larger receptive field and imposes less computational burden than the SPPCSPC module of YOLOv7. Finally, experiments demonstrate that the proposed modules effectively improve detection accuracy for small targets, surpassing the state-of-the-art object detector YOLOv7. Remarkably, MDFFAM incurs almost negligible computational overhead.
where y_c^h(h) is the output of channel c at height h and y_c^w(w) is the output of channel c at width w. The convolutional layer with a fixed kernel size receives the input X directly; hence, it can be considered a collection of local descriptors. Similarly, the result in the C × 1 × 1 channel direction is given by Eq. (3). These three formulas decompose the input X into three feature encodings along different spatial directions, forming a set of spatial-direction-sensitive quantities and aggregating feature information along the C, H, and W directions. Compared with the SE block, which generates a single feature vector, MDFFAM retains precise location information and establishes more robust long-range dependencies.
Attention generation
In the second step, features are captured along the three spatial directions and multi-directional attention is generated. The details are as follows: the three spatial directional features derived from Eqs. (1), (2), and (3) are each passed through a convolution. After applying the Sigmoid activation function, the feature aggregation maps g^h, g^w, and g^c serve as the attention weights for the different spatial directions, where Conv() is a convolutional layer with a 1 × 1 kernel and output channel count c, and δ() is the Sigmoid activation function. Here g^h ∈ R^(C×H×1), g^w ∈ R^(C×1×W), and g^c ∈ R^(C×1×1) are the attention weights after feature extraction and mapping along the height, width, and channel directions. Next, the three attention weights are fused to obtain f. After the fusion in Eq. (7), the attention weight f ∈ R^(C×H×W) covering the three directions is obtained. BatchNorm is subsequently applied to f to prevent the network from overfitting while simplifying the structure. The normalization result is divided by two convolutional layers into two feature maps with the same (reduced) number of channels; the parameter r is the reduction ratio used to control the module size. Then, the Sigmoid activation function is applied to each of the two feature maps and the results are concatenated.
where δ() is the Sigmoid activation function and G is the result after concatenation. A convolution operation on G adjusts the number of channels, and the result is added to the input X to obtain the final output of the entire mechanism. MDFFAM distinguishes itself from channel attention by considering the importance of different channels while also encoding information along the height and width spatial directions. This allows the detector to capture features along different directions and effectively use location information to establish solid long-range dependencies that assist the model in object identification.
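A PyTorch sketch of the mechanism described above is given below. The three directional aggregations, the sigmoid gating, the BatchNorm on the fused weight, the reduction ratio r, and the residual addition follow the text; using directional average pooling for the aggregation, fusing by element-wise broadcasting, and the exact layer widths are our assumptions, since the corresponding equations are not reproduced here.

```python
import torch
import torch.nn as nn

class MDFFAM(nn.Module):
    """Minimal sketch of a multi-directional feature fusion attention block."""

    def __init__(self, channels, r=16):
        super().__init__()
        self.conv_h = nn.Conv2d(channels, channels, 1)  # height branch (C x H x 1)
        self.conv_w = nn.Conv2d(channels, channels, 1)  # width branch  (C x 1 x W)
        self.conv_c = nn.Conv2d(channels, channels, 1)  # channel branch (C x 1 x 1)
        self.bn = nn.BatchNorm2d(channels)
        hidden = max(channels // r, 1)                  # r = reduction ratio
        self.reduce1 = nn.Conv2d(channels, hidden, 1)
        self.reduce2 = nn.Conv2d(channels, hidden, 1)
        self.out = nn.Conv2d(2 * hidden, channels, 1)   # restores channel count

    def forward(self, x):
        # Aggregate features along the three spatial directions (cf. Eqs. 1-3).
        g_h = torch.sigmoid(self.conv_h(x.mean(dim=3, keepdim=True)))
        g_w = torch.sigmoid(self.conv_w(x.mean(dim=2, keepdim=True)))
        g_c = torch.sigmoid(self.conv_c(x.mean(dim=(2, 3), keepdim=True)))
        # Fuse the three attention maps by broadcasting (fusion rule assumed).
        f = self.bn(g_h * g_w * g_c)
        # Split into two reduced maps, gate each, and concatenate (cf. Eq. 8).
        g = torch.cat([torch.sigmoid(self.reduce1(f)),
                       torch.sigmoid(self.reduce2(f))], dim=1)
        # Adjust channels and add back to the input (residual output).
        return x + self.out(g)
```

For example, `MDFFAM(256)(torch.randn(1, 256, 40, 40))` returns a tensor of the same shape, which is what allows the block to be dropped between existing backbone stages.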
LKSPP (large kernel spatial pyramid pooling)
In CNNs, the requirement of a fixed input size is usually met by cropping and stretching, which can cause image distortion and decrease the model's detection accuracy. SPP 41 is an effective solution: regardless of the input size, the output size after the SPP layer remains fixed, which reduces the risk of overfitting, and the multi-size feature fusion enhances network robustness. Figure 2 illustrates three spatial pyramid pooling structures: SPP in YOLOv5 42, SPPCSPC in YOLOv7, and LKSPP. SPP, the simplest of the three, uses three max pooling layers to process the input in parallel. The pooling layers use large kernels to expand the receptive field. Finally, the original input is concatenated with the three pooled results via shortcuts. The SPPCSPC used in YOLOv7 follows the same pooling layer design as SPP, with three pooling layers connected in parallel and kernel sizes of 5, 9, and 13. However, before the pooling operation, three convolutional layers are introduced, in which the 3 × 3 convolutional kernel expands the receptive field, making the receptive field obtained by the pooling part of SPPCSPC larger than that of SPP. Moreover, stacking multiple CBG modules effectively increases the depth of the model.
Both SPP and SPPCSPC use large-kernel pooling layers, further illustrating the importance of large receptive fields. However, both structures have limitations. SPP simply arranges three large kernels in parallel, which increases the computational load in exchange for an extended receptive field and impacts inference speed. SPPCSPC adds many elements to SPP, such as convolutional layers, normalizations, and activation functions, to effectively increase module depth and reduce the risk of overfitting. The convolutional layer before the pooling operation also helps the module expand its receptive field. However, SPPCSPC does not adopt the reverse-bottleneck design idea, and the computational burden of simply using convolutional layers to expand the receptive field is relatively heavy.
To address these issues, LKSPP is proposed, with the following design principles: (1) Introduce a reverse bottleneck: the hidden dimension of the module is larger than the input dimension. This design, similar to Transformer's MLP module and modern ConvNets, effectively reduces module computation. For instance, ConvNeXt uses reverse bottlenecks and assigns the task of changing the channel dimension to 1 × 1 convolutions, which significantly cuts network FLOPs while enhancing accuracy. In LKSPP, this reverse bottleneck design is reflected in the three convolutional layers after the pooling operation, all employing 1 × 1 kernels. This ensures parameter reduction while expanding channel numbers. All convolutional layers maintain the input feature map's size and only modify the channel dimension. (2) Implement a front-loaded large kernel pooling layer. In the network, pooling layers with large kernels should avoid operating on an increased channel count. Hence, the reverse bottleneck is positioned at the end of the module while the pooling part is front-loaded. Most of the computational tasks are still handled by 1 × 1 convolutional kernels with output channels halved compared to input channels. This design further reduces the parameters and computation for the large kernel pooling layer.
(3) Establish a serial connection. Both SPP and SPPCSPC connect large kernel pooling layers in parallel, and the direct use of large kernels in this way incurs a substantial computational burden, especially for a pooling layer with a 13 × 13 kernel. In contrast, a serial approach is more reasonable than the design paradigm of directly using multiple large kernels in parallel. SPPF 42 sequentially connects three pooling layers with 5 × 5 kernels, resulting in a significant speedup with improved performance. LKSPP connects three pooling layers with large kernels in series, each employing the same 7 × 7 kernel; the pooling part of LKSPP thus has the greatest receptive field of the three structures. (4) Incorporate a global receptive field path. In the design principles for large kernels, shortcuts remain crucial. Accordingly, LKSPP introduces a shortcut and adds a global receptive field to this shortcut path. Specifically, the input feature map of each channel is compressed to a 1 × 1 size through an adaptive average pooling layer to facilitate global feature extraction per channel. Then, a 1 × 1 convolution layer extracts deeper information from the pooled global features. Finally, the convolved output restores the feature size of each channel from 1 × 1 to the original size through an Upsampling module.
Given these four points, LKSPP achieves a significant reduction in parameters and computation compared to SPPCSPC while obtaining a larger receptive field; a sketch of the design is given below.
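The following PyTorch sketch assembles the four design points. The serial 7 × 7 pooling, the 1 × 1 convolutions placed after the pooling, and the global average-pool shortcut follow the description; the expansion factor, the concatenation of the intermediate pooling outputs, and the GELU placement are assumptions, and all names are our own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LKSPP(nn.Module):
    """Minimal sketch of large kernel spatial pyramid pooling."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # (2)+(3): front-loaded, serially connected 7x7 max-pooling.
        self.pool = nn.MaxPool2d(kernel_size=7, stride=1, padding=3)
        # (1): reverse bottleneck built from 1x1 convolutions after pooling.
        hidden = in_ch * 2  # assumed expansion factor
        self.expand = nn.Conv2d(4 * in_ch, hidden, 1)
        self.project = nn.Conv2d(hidden, out_ch, 1)
        # (4): global receptive field shortcut path.
        self.global_conv = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        # Three serial poolings; stacking 7x7 kernels grows the receptive field.
        p1 = self.pool(x)
        p2 = self.pool(p1)
        p3 = self.pool(p2)
        y = torch.cat([x, p1, p2, p3], dim=1)
        y = self.project(F.gelu(self.expand(y)))
        # Global path: squeeze each channel to 1x1, convolve, upsample back.
        g = F.adaptive_avg_pool2d(x, 1)
        g = F.interpolate(self.global_conv(g), size=(h, w), mode="nearest")
        return y + g
```

Used in place of an SPPCSPC block, e.g. `LKSPP(1024, 512)`, the module preserves spatial size and only changes the channel dimension, as the design principles require.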
Slim-YOLO
To demonstrate the effectiveness of LKSPP and MDFFAM in improving detector performance, these two modules serve as the core of a model referred to as Slim-YOLO. The overall framework of Slim-YOLO is depicted in Fig. 3 and comprises three major components: backbone, neck, and head. Backbone: The backbone mainly extracts features from the input. It is divided into five stages, each generating feature maps with different sizes and channel dimensions. As the network deepens, the feature map size decreases and the channel dimension increases. Specifically, to obtain rich feature information early in the extraction process, several CBG modules (Convolution + BatchNorm + GELU activation) are applied at each stage. After the CBGs, two MDFFAM modules are introduced to enhance the utilization of location information. MDFFAM extracts features from the input along three spatial directions and fuses the resulting feature maps, which effectively boosts the robustness of the network. So that the detector obtains rich local features in the initial part, four CBGs are used in stage 1, gradually decreasing to two in the last three stages. The backbone passes the extracted feature maps to the neck for further feature fusion and reprocessing.
Neck: First, LKSPP performs a pooling operation on the feature maps extracted by the backbone. The serial large-kernel pooling layers filter out redundant features, accurately retain critical information, reduce network parameters, and enhance the fused feature information. Then, two Upsampling modules are used to increase the resolution of the feature maps. The feature map (P4) generated in stage 4 is fused with the output feature map of the Upsampling module in stage 6. Similarly, the output feature map of the Upsampling module in stage 7 is fused with the feature map (P3) generated in stage 3. Stage 8 and stage 9 share a similar architecture, where a CBG module with a 3 × 3 kernel is added before and after the Concat layer to enhance the ability to capture local features. MDFFAM makes full use of the spatial location information of the CBG-processed feature maps and establishes solid long-range dependencies between the modules.
Head: This part is mainly responsible for the localization and classification of the previously processed feature maps. Post-processing typically relies on non-maximum suppression (NMS) and its variants, such as soft NMS 43 and weighted NMS 44. In the head, RepConv is used to expedite model inference during deployment. During training, RepConv consists of three branches: a 1 × 1 convolution, a 3 × 3 convolution, and a BatchNorm layer. During deployment, the model fuses the convolutional layers and BatchNorm layers of the three RepConv branches with a reparameterization technique, equivalently into a VGG-like structure. RepConv is applied behind each of the three feature maps in the final output to further accelerate inference. Finally, the detection head calculates the bounding box loss and classification loss for localization.
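The reparameterization step relies on the standard folding of a BatchNorm layer into the preceding convolution. The helper below sketches that folding for a single branch; fusing the three RepConv branches additionally requires padding the 1 × 1 and identity branches to 3 × 3 kernels and summing the weights, which is omitted here.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold BatchNorm statistics into the preceding convolution's weights."""
    std = (bn.running_var + bn.eps).sqrt()
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride, conv.padding, bias=True)
    # y = gamma * (Wx + b - mu) / std + beta
    #   = (gamma/std) * Wx + (gamma * (b - mu) / std + beta)
    fused.weight.copy_(conv.weight * (bn.weight / std).reshape(-1, 1, 1, 1))
    bias = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.copy_(bn.weight * (bias - bn.running_mean) / std + bn.bias)
    return fused
```

At deploy time, replacing every conv-BN pair with its fused counterpart yields identical outputs (up to floating-point error) while removing the BatchNorm from the inference graph, which is what makes the VGG-like deployed structure faster.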
Experiment
Experiment preparation
This paper uses the NEU-DET 45 surface defect detection dataset, which contains six typical mechanical surface defects: rolled-in scale (Rs), patches (Pa), crazing (Cr), pitted surface (Ps), inclusion (In), and scratches (Sc). Each defect type comprises 300 images, for a total of 1800 images. The dataset is divided into three subsets: a test set with 1134 images, a validation set with 126 images, and a training set with 540 images.
All experiments are based on the Pytorch environment and are executed from scratch without pre-trained models.In the comparative and ablation experiments, only the module is changed, with the parameter settings consistent with the baseline YOLOv7.All models undergo training for 200 epochs with an input image size of 320 × 320.
Baseline
To verify the superiority of the proposed modules, previous versions of the YOLO series and the most advanced object detector, YOLOR, are selected as baselines. Slim-YOLO is compared with the baselines, and the experimental results are shown in Table 1. In comparison with the YOLO series, Slim-YOLO exhibits the highest mAP50, with a 4.8% improvement over the least accurate model, YOLOv4-CSP, and even a 0.5% improvement over YOLOv7, the most advanced real-time object detector currently available. While Slim-YOLO demonstrates a clear advantage in accuracy, it also imposes only a light burden on the hardware. First, the parameter count of Slim-YOLO is only 34.6 M, which is 5.5% less than YOLOv7 and 80.9% less than YOLOv3-SPP. Furthermore, in terms of computation, although YOLOv7 is the smallest in the YOLO series at only 103.2 G, Slim-YOLO imposes a much smaller computational burden, 34.4% less than YOLOv7, which fully illustrates that Slim-YOLO's core modules, MDFFAM and LKSPP, are lightweight.
Similarly, in comparison with the detectors of the YOLOR series, Slim-YOLO comes out ahead on all three indicators. In parameter count, it is 6.4% lower than YOLOR-P6, the lowest in the YOLOR series. In FLOPs, it is 4.4% less than YOLOR-P6 and only one-third of YOLOR-CSP-X. Slim-YOLO also demonstrates superior accuracy, with an 11.1% improvement over YOLOR-CSP, the most accurate detector in the YOLOR series.
Effectively improving model accuracy while limiting the increase in computational burden is the key measure of a module's effectiveness. The comparison with the baselines shows that Slim-YOLO successfully balances accuracy and computational cost, which further demonstrates that its core components, MDFFAM and LKSPP, markedly enhance model accuracy.
Figure 4 illustrates the P-R curves of YOLOv7, the most advanced of the YOLO series, and the proposed Slim-YOLO. In per-category accuracy, Slim-YOLO exceeds YOLOv7 in four categories, with the most significant improvement, 8.8%, seen on 'Crazing', and it achieves this while imposing a much smaller computational burden than YOLOv7.
To visualize the detection performance of Slim-YOLO on defect features, six defect types in the dataset are randomly selected for experiments. YOLOv7 and YOLOR-CSP, the top performers in the YOLO and YOLOR series, serve as the baselines, and the results are shown in Fig. 5. The distribution complexity of each defect type varies, with 'Rolled-in scale' and 'Crazing' exhibiting the highest complexity, which leads to lower baseline detection accuracy for these two defect types. Slim-YOLO achieves the highest detection accuracy on 'Rolled-in scale', 28% and 13% higher than YOLOv7 and YOLOR-CSP, respectively. It also demonstrates the best detection accuracy on 'Crazing', a surface defect type highly similar to 'Inclusion'. On 'Scratches', Slim-YOLO displays slightly lower accuracy than YOLOv7, while YOLOR-CSP exhibits the lowest accuracy and overlapping detection frames. On the remaining three defects, Slim-YOLO outperforms the benchmark models and achieves 91% detection accuracy for 'Patches'. These results demonstrate that Slim-YOLO, with the introduction of MDFFAM, is better equipped to capture the positional information of the features and realize precise defect localization, with minimal overlap in detection frames. In addition, the LKSPP module can effectively help the detector mine richer high-level semantics, capture sufficient global information, and take into account local information, even for the most difficult defect. Figure 6 shows the accuracy of each detector for each defect in the test set, with AP@0.5 as the criterion. Slim-YOLO exhibits the highest accuracy in 'Crazing' and 'Rolled-in scale' defect detection, while YOLOR-D6 performs the poorest. YOLOv7 and YOLOv5L perform best for 'Scratches' and 'Inclusion', respectively. In the remaining types of defect detection, Slim-YOLO maintains a high level of accuracy. In summary, Slim-YOLO holds a clear advantage in the defect detection task.
Ablation study
In this paper, ablation experiments are conducted to demonstrate the significant performance enhancement the proposed modules bring to the object detector. The specific results are shown in Table 2. With YOLOv7 as the baseline, modules are added incrementally.
First, in terms of parameters and computation, adding MDFFAM to YOLOv7 induces only a marginal increase of 0.82% and 0.67%, respectively, over the original. This indicates that MDFFAM is lightweight enough that the computational overhead it introduces to the detector is negligible, while yielding a notable improvement in accuracy. In the individual module comparison, YOLOv7 with MDFFAM achieves the highest mAP50, exhibiting a 1.8% enhancement over the baseline, along with 1.9% and 1.5% improvements in the accuracy metrics mAP50:75 and mAP50:95, respectively. Next, testing LKSPP, it is important to note that only the SPPCSPC in YOLOv7 is replaced with LKSPP, while the remainder of the architecture remains unchanged. The parameters are reduced by 13.7% compared to the baseline, which fully illustrates that the proposed large kernel design principles can maximize the reduction of parameters and computation. In addition, the series of large kernels improves the effective receptive field of the module and captures more comprehensive features than the paradigm of directly paralleling multiple large kernels. LKSPP demonstrates improvements of 1.3%, 0.4%, and 0.3% over SPPCSPC on mAP50, mAP50:75, and mAP50:95, respectively. Finally, adding both modules to the baseline achieves the optimal results on all three accuracy metrics, mAP50, mAP50:75, and mAP50:95, with improvements of 2.2%, 1.7%, and 1.3%, respectively, while the model's complexity is further reduced, with 4.6% fewer parameters.
Figure 8a compares the classification loss before and after adding the modules to the baseline model YOLOv7. Incorporating both modules simultaneously results in consistently lower loss values throughout the entire training process. In particular, with the addition of the modules, the classification performance of YOLOv7 is significantly improved and the loss curve is smoother. This observation underscores the synergistic effect of the joint operation of LKSPP and MDFFAM, attributable to their different functional focuses: LKSPP is adept at harnessing rich high-level semantic features owing to its expansive receptive field, while MDFFAM excels in capturing precise feature location information. The detector, fortified with the merits of both modules, exhibits a marked enhancement in classification efficacy.
The importance of MDFFAM
To demonstrate the effectiveness of the proposed MDFFAM in improving the model's detection performance for small targets, YOLOv7 is used as the baseline and different attention modules are added separately, with results shown in Table 3. The test involves four attention mechanisms: CA, CBAM, SE, and MDFFAM. In terms of parameters, CBAM, CA, and MDFFAM all operate at the same level, while SE increases the parameters by 3.2% compared to the baseline. Regarding computational load, MDFFAM imposes a relatively small burden, with 14% less computation than SE; the difference between MDFFAM and CA, which incurs the least computational overhead, is almost negligible, as MDFFAM is only 0.58% higher than CA. Meanwhile, MDFFAM achieves the highest mAP50 of 73.0, 4.7% better than the second-ranked CA, and outperforms the baseline by 1.9% and 1.5% on the metrics mAP50:75 and mAP50:95, respectively.
To better observe the relationship among Precision, Recall, and mAP50 for the four attention mechanisms throughout the training phase, a three-dimensional scatter plot is used, as shown in Fig. 7. At the beginning of training, the results exhibit a scattered distribution. However, as the epochs increase, the three indicators converge in the same direction and the scores improve. The figure demonstrates that MDFFAM enters the convergence state more rapidly than the other three attention mechanisms, with the smallest dispersion of results in the early training phase. These experimental results highlight MDFFAM's capacity to facilitate model convergence and maintain stability. From the perspectives of both computational cost and accuracy, MDFFAM exhibits excellent performance.
Except for MDFFAM, the remaining three attention mechanisms all reduce the accuracy of the baseline. This fully illustrates that, among the four attention mechanisms, MDFFAM introduces a small computational overhead while effectively improving detection accuracy. Compared with the other three attention mechanisms, MDFFAM provides greater flexibility to the model.
The impact of hyperparameter r
To further observe the effect of the hyperparameter r in MDFFAM on model performance, experiments are conducted with YOLOv7 as the baseline. Five sets of experiments are performed, increasing the reduction ratio r from 2 to 32 sequentially to observe the change in performance; the results are shown in Table 4. The experiments reveal that the maximum number of parameters and computation occurs when the reduction ratio is set to the smallest value, 2. Conversely, the computational burden of the model is smallest when r is set to 32. This indicates that the hyperparameter r can flexibly modulate the capacity and computational overhead of the module within the model. Moreover, as r increases, the computational overhead diminishes. However, a lightweight model is not the only goal; accuracy remains of great importance.
Figure 8b illustrates the variation in classification loss of the baseline model throughout the training phase under different values of the hyperparameter r. A pronounced elevation and frequent oscillations in the loss value are observed with r set to 32. Conversely, setting r to 16 yields the most stable and lowest loss values, as evidenced by the smoothest trajectory of the curve. The remaining loss curves exhibit comparable magnitudes and trends, indicating a lesser dependency on the specific value of r within those ranges. Therefore, based on these results, the optimal balance between accuracy and model complexity is obtained when the reduction ratio is set to 16, and a reduction ratio of 16 is also employed for MDFFAM in the attention mechanism ablation experiment.
Discussion and conclusion
Much research has been conducted on object detection. CNNs 47,48 are employed to extract object features for the detection task. Increasing network depth 49 is one strategy for improving detection accuracy. The relation network 50 can boost a detector's effective integration of the extracted feature information. YOLOv7, as a state-of-the-art single-stage detection algorithm, is capable of quick and comprehensive detection. Under unfavorable conditions such as insufficient light and shadows, GAFF 51 can fuse the visible and thermal features of the target to further suppress external interference. CPFM 52 mines precise features across different modes and fuses them in a complementary way to enhance detection robustness.
This paper proposes two new components: MDFFAM and LKSPP. MDFFAM makes full use of spatial location information to assist the model in accurately identifying the detection focus while establishing stable long-range dependencies. LKSPP, in turn, not only flexibly handles inputs of varying scales and sizes but also obtains richer and more advanced semantic features, mainly owing to the effective receptive field expansion enabled by large kernels. Furthermore, the serial connection of several large kernels in LKSPP suppresses the redundant computational burden associated with large kernels; the effective receptive field obtained is larger for the serial arrangement than for the parallel one. Experimental results empirically validate that the detector assembled with MDFFAM and LKSPP at its core achieves highly competitive performance in small object detection tasks. Additionally, when the MDFFAM and LKSPP modules are tested in isolation, both demonstrate solid performance in their respective comparative experiments. This shows that incorporating MDFFAM or LKSPP into the baseline independently induces an obvious improvement in model performance.
The complexity of mechanical structures can result in surface defects that are not readily discernible under normal lighting conditions or are only partially visible in shadow. Therefore, future research will be directed at data enhancement tools based on the fusion of thermal and visible imaging features. The next step will focus on effectively combining feature fusion methods for the two imaging modalities with large kernels and attention mechanisms, aiming to enhance the robustness and accuracy of the detector.
Figure 2. Schematic comparison of the proposed LKSPP with SPPCSPC and SPP.
The hardware configuration for the experiments includes an Nvidia GeForce RTX 3060 graphics card, an AMD Ryzen 7 5800H processor with Radeon Graphics operating at up to 3.2 GHz, and 16 GB of RAM.
Figure 5. Effectiveness of different detectors in detecting defects.
Figure 6. Detection accuracy of the detectors for each defect.
Figure 7. Three-dimensional display of the four attention mechanisms.
Table 2. Comparison of the impact of the proposed modules on the baseline.
Table 3. Comparison of the impact of different attention mechanisms on the baseline.
Table 4. The impact of MDFFAM on the baseline under different settings. Here, r is the reduction ratio. | 5,663 | 2024-05-03T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Partition function and base pairing probabilities of RNA heterodimers
Background: RNA has been recognized as a key player in cellular regulation in recent years. In many cases, non-coding RNAs exert their function by binding to other nucleic acids, as in the case of microRNAs and snoRNAs. The specificity of these interactions derives from the stability of inter-molecular base pairing. The accurate computational treatment of RNA-RNA binding therefore lies at the heart of target prediction algorithms. Methods: The standard dynamic programming algorithms for computing secondary structures of linear single-stranded RNA molecules are extended to the co-folding of two interacting RNAs. Results: We present a program, RNAcofold, that computes the hybridization energy and base pairing pattern of a pair of interacting RNA molecules. In contrast to earlier approaches, complex internal structures in both RNAs are fully taken into account. RNAcofold supports the calculation of the minimum energy structure and of a complete set of suboptimal structures in an energy band above the ground state. Furthermore, it provides an extension of McCaskill's partition function algorithm to compute base pairing probabilities, realistic interaction energies, and equilibrium concentrations of duplex structures. Availability: RNAcofold is distributed as part of the Vienna RNA Package. Contact: Stephan H. Bernhart,<EMAIL_ADDRESS>
Background
Over the last decade, our picture of RNA as a mere information carrier has changed dramatically. Since the discovery of microRNAs and siRNAs (see e.g. [1,2] for recent reviews), small noncoding RNAs have been recognized as key regulators in gene expression. Both computational surveys, e.g. [3][4][5][6][7], and experimental data [8][9][10][11] now provide compelling evidence that non-protein-coding transcripts are a common phenomenon. Indeed, at least in higher eukaryotes, the complexity of the non-coding RNome appears to be comparable with the complexity of the proteome. This extensive inventory of non-coding RNAs has been implicated in diverse mechanisms of gene regulation, see e.g. [12][13][14][15][16] for reviews.
Regulatory RNAs more often than not function by means of direct RNA-RNA binding. The specificity of these interactions is a direct consequence of complementary base pairing, allowing the same basic mechanisms to be used with very high specificity in large collections of target and effector RNAs. This mechanism underlies the post-transcriptional gene silencing pathways of microRNAs and siRNAs (reviewed e.g. in [17]), it is crucial for snoRNA-directed RNA editing [18], and it is used in the gRNA-directed mRNA editing in kinetoplastids [19]. Furthermore, RNA-RNA interactions determine the specificity of important experimental techniques for changing gene expression patterns, including RNAi [20] and modifier RNAs [21][22][23][24].
RNA-RNA binding occurs by formation of stacked intermolecular base pairs, which of course compete with the propensity of both interacting partners to form intramolecular base pairs. These base pairing patterns, usually referred to as secondary structures, not only comprise the dominating part of the energetics of structure formation, they also appear as intermediates in the formation of the tertiary structure of RNAs [25], and they are in many cases well conserved in evolution. Consequently, secondary structures provide a convenient, and computationally tractable, approximation not only to RNA structure but also to the thermodynamics of RNA-RNA interaction.
From the computational point of view, this requires the extension of RNA folding algorithms to include intermolecular as well as intramolecular base pairs. Several approximations have been described in the literature: Rehmsmeier et al. [26] as well as Dimitrov and Zuker [27] introduced algorithms that consider exclusively intermolecular base pairs, leading to a drastic algorithmic simplification of the folding algorithms since multi-branch loops are by construction excluded in this case. Andronescu et al. [28], like the present contribution, consider all base pairs that can be formed in secondary structures in a concatenation of the two hybridizing molecules. This set in particular contains the complete structural ensemble of both partners in isolation. Mückstein et al. [29] recently considered an asymmetric model in which base pairing is unrestricted in a large target RNA, while the (short) interaction partner is restricted to intermolecular base pairs.
A consistent treatment of the thermodynamic aspects of RNA-RNA interactions requires that one takes into account the entire ensemble of suboptimal structures. This can be approximated by explicitly computing all structures in an energy band above the ground state. Corresponding algorithms are discussed in [30] for single RNAs and in [28] for two interacting RNAs. A more direct approach, which becomes much more efficient for larger molecules, is to directly compute the partition function of the entire ensemble along the lines of McCaskill's algorithm [31]. This is the main topic of the present contribution.
As pointed out by Dimitrov and Zuker [27], the concentration of the two interacting RNAs as well as the possibility to form homo-dimers plays an important role and cannot be neglected when quantitative predictions on RNA-RNA binding are required. In our implementation of RNAcofold we therefore follow their approach and explicitly compute the concentration dependencies of the equilibrium ensemble in a mixture of two partially hybridizing RNA species.
This contribution is organized as follows: We first review the energy model for RNA secondary structures and recall the minimum energy folding algorithm for simple linear RNA molecules. Then we discuss the modifications that are necessary to treat intermolecular base pairs in the partition function setting and describe the computation of base pairing probabilities. Then the equations for concentration dependencies are derived. Short sections summarize implementation, performance, as well as an application to real-world data.
RNA secondary structures
A secondary structure S on a sequence x of length n is a set of base pairs (i, j), i < j, such that:
1. (i, j) ∈ S implies that (x_i, x_j) is either a Watson-Crick (GC or AU) or a wobble (GU) base pair.
2. Every sequence position i takes part in at most one base pair, i.e., S is a matching in the graph of "legal" base pairs that can be formed within sequence x.
3. If (i, j) ∈ S and (k, l) ∈ S with i < k, then either i < j < k < l or i < k < l < j. This condition rules out knots and pseudoknots. Together with condition 2 it implies that S is a circular matching [32,33].
A small check implementing these three conditions is sketched after this list.
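As a concrete illustration, the three conditions can be verified mechanically for a candidate set of pairs. The sketch below assumes 0-based indices and the base-pair alphabet defined above; it is an illustration, not part of RNAcofold.

    LEGAL = {("G","C"), ("C","G"), ("A","U"), ("U","A"), ("G","U"), ("U","G")}

    def is_secondary_structure(x, pairs):
        seen = set()
        for i, j in pairs:
            if (x[i], x[j]) not in LEGAL:        # condition 1: legal pairs only
                return False
            if i in seen or j in seen:           # condition 2: S is a matching
                return False
            seen.update((i, j))
        for (i, j) in pairs:                     # condition 3: no pseudoknots
            for (k, l) in pairs:
                if i < k < j < l:
                    return False
        return True

    print(is_secondary_structure("GGGAAACCC", [(0, 8), (1, 7), (2, 6)]))  # True
    print(is_secondary_structure("GGGAAACCC", [(0, 6), (1, 8)]))          # False, crossing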
The "loops" of S are planar faces of the unique planar embedding of the secondary structure graph (whose edges are the base pairs in S together with the backbone edges (i, i + 1), i = 1 ..., n -1). Equivalently, the loops are the elements of the unique minimum cycle basis of the secondary structure graph [34]. The external loop consists of all those nucleotides that are not enclosed by a base pair in S. The standard energy model for RNA secondary structures associates an energy contribution to each loop L that depends on the loop type type(L) (hairpin loop, interior loop, bulge, stacked pair, or multi-branch loop) and the sequence of some or all of the nucleotides in the loop, x| L : The external loop does not contribute to the folding energy. The total energy of folding sequence x into a secondary structure S is then the sum over all loops of S. Energy parameters are available for both RNA [35] and single stranded DNA [36].
Hairpin loops are uniquely determined by their closing pair (i, j). The energy of a hairpin loop is tabulated in the form H(i, j; ℓ), where ℓ is the length of the loop (expressed as the number of its unpaired nucleotides). Each interior loop is determined by the two base pairs (i, j) and (k, l) enclosing it. Its energy is tabulated as I(i, j; k, l; ℓ1, ℓ2), where ℓ1 is the length of the unpaired strand between i and k and ℓ2 is the length of the unpaired strand between l and j. So-called dangling end contributions arise from the stacking of unpaired bases onto an adjacent base pair. We have to distinguish two types of dangling ends: (1) interior dangles, where the unpaired base i+1 stacks onto i of the adjacent base pair (i, j) and correspondingly j-1 stacks onto j, and (2) exterior dangles, where i-1 stacks onto i and j+1 stacks onto j. The corresponding energy contributions are tabulated separately for the two cases. Within the additive energy model, dangling end terms are interpreted as the contributions d_5' and d_3' of 5' and 3' dangling nucleotides. Here | separates the dangling nucleotide position from the adjacent base pair: d_5'(k-1|k, l) is the energy of the nucleotide at position k-1 when interacting with the following base pair (k, l), while d_3'(k, l|l+1) scores the interaction of position l+1 with the preceding pair (k, l).
The Vienna RNA Package currently implements three different models for handling the dangling-end contributions: They can be (a) ignored, (b) taken into account for every combination of adjacent bases and base pairs, or (c) a more complex model can be used in which the unpaired base can stack with at most one base pair. In cases (a) and (b) one can absorb the dangling end contributions in the loop energies (with the exception of contributions in the external loop). Model (c) strictly speaking violates the secondary structure model in that an unpaired base x i between two base pairs (x p , x i-1 ) and (x i+1 , x q ) has three distinct states with different energies: x i does not stack to its neighbors, x i stacks to x i-1 , or x i+1 . The algorithm then minimizes over these possibilities. While model (c) is the default for computing minimum free energy structures in most implementations such as RNAfold and mfold, it is not tractable in a partition function approach in a consistent way unless different positions of the dangling ends are explicitly treated as different configurations.
RNA secondary structure prediction
Because of the no-(pseudo)knot condition 3 above, every base pair (i, j) subdivides a secondary structure into an interior and an exterior structure that do not interact with each other. This observation is the starting point of all dynamic programming approaches to RNA folding, see e.g. [32,33,37]. Including various classes of pseudoknots is feasible in dynamic programming approaches [38][39][40] at the expense of a dramatic increase in computational costs, which precludes the application of these approaches to large molecules such as most mRNAs.
In the course of the "normal" RNA folding algorithm for linear RNA molecules as implemented in the Vienna RNA Package [41,42], and in a similar way in Michael Zuker's mfold package [43][44][45], the following arrays are computed for i < j: F_ij, the free energy of the optimal substructure on the subsequence x[i, j];
C_ij, the free energy of the optimal substructure on the subsequence x[i, j] subject to the constraint that i and j form a base pair; together with analogous arrays M_ij and M1_ij for multiloop components. The "conventional" energy minimization algorithm (for simplicity of presentation without dangling end contributions) for linear RNA molecules can be summarized by the recursions implemented in the Vienna RNA Package. The F table is initialized as F_{i+1,i} = 0, while the other tables are set to infinity for empty intervals. It is straightforward to translate these recursions into recursions for the partition function because they already provide a partition of the set of all secondary structures that can be formed by the sequence x. This unambiguity of the decomposition of the structure ensemble is not important for energy minimization, but it is crucial for enumeration and hence also for the computation of the partition function [31]. Let us write Z_ij for the partition function on x[i, j], Z^B_ij for the partition function constrained to structures with an (i, j) pair, and Z^M_ij, Z^M1_ij for the partition function versions of the multiloop terms M_ij and M1_ij. A simplified sketch of such an unambiguous partition-function recursion is given below.
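To make the role of the unambiguous decomposition tangible, the following sketch computes a partition function with a toy Nussinov-style pair energy instead of the loop-based Turner model used by the actual recursions. The structure of the recursion (decompose on the pairing status of the last base, so that every structure is generated exactly once) is the point being illustrated; RT, the pair energy, and the minimum hairpin size are illustrative values.

    import math

    RT = 0.6   # kcal/mol near 37 degrees C (illustrative)
    PAIRS = {("G","C"), ("C","G"), ("A","U"), ("U","A"), ("G","U"), ("U","G")}

    def partition_function(x, e_pair=-2.0, min_hp=3):
        n = len(x)
        # Z[i][j]: partition function of the subsequence x[i:j] (half-open);
        # Zb[i][j]: same, constrained to structures in which (i, j-1) pair.
        Z = [[1.0] * (n + 1) for _ in range(n + 1)]
        Zb = [[0.0] * (n + 1) for _ in range(n + 1)]
        for length in range(2, n + 1):
            for i in range(n - length + 1):
                j = i + length
                l = j - 1
                if l - i > min_hp and (x[i], x[l]) in PAIRS:
                    Zb[i][j] = math.exp(-e_pair / RT) * Z[i + 1][l]
                # unambiguous split: either x[l] is unpaired, or it pairs with
                # exactly one k, so every structure is counted exactly once
                Z[i][j] = Z[i][j - 1]
                for k in range(i, l - min_hp):
                    Z[i][j] += Z[i][k] * Zb[k][j]
        return Z[0][n]

    print(partition_function("GGGGAAAACCCC"))   # > 1: several structures contribute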
The adaptation of the recursion to the folding of two RNAs A and B of length n 1 and n 2 into a dimeric structure is straightforward: the two molecules are concatenated to form a single sequence of length n = n 1 + n 2 . It follows from the algorithmic considerations below that the order of the two parts is arbitrary.
A basic limitation of this approach arises from the nopseudoknots condition: It restricts not only the intramolecular base pairs but also affects intermolecular pairs. Let S A and S B denote the intramolecular pairs in a cofolded structure S. These sets of base pairs define secondary structures on A and B respectively. Because of the no-pseudoknot condition on S, an intermolecular base pair in S\(S A ∪ S B ) can only connect nucleotides in the external loops of A and B. This is a serious restriction for some applications, because it excludes among other pseudoknot-like structures also the so-called kissing hairpin complexes [46]. Taking such structures into account is equivalent to employing folding algorithms for structure models that include certain types of pseudoknots, such as the partition function approach by Dirks and Pierce [40]. Its high computational cost, however, precludes the analysis of large mRNAs. In an alternative model [29], no intramolecular interactions are allowed in the small partner B, thus allowing B to form basepairs with all contiguous unpaired regions in S A . From a biophysical point of view, however, it makes sense to consider exclusively hybridization in the exterior loop provided both partners are large structured RNAs. In this case, hybridization either stops early, i.e., at a kissing hairpin complex (in the case of very stable local structures) or it is thermodynamically controlled and runs into the ground state via a complete melting of the local structure. In the latter case, the no-pseudoknots condition is the same approximation that is also made when folding individual molecules. Note that this approximation does not imply that the process of hybridization could only start at external bases.
Figure 1. Loops with cuts have to be scored differently. Top row: hairpins and interior loops containing the cut between n1 (black ball) and n1+1 (white ball). Below: multi loops containing the cut. Neither M1 nor M components may start at n1+1 or stop at n1. Note that the construction of Z^M out of Z^M and Z^M1 ensures that the cut is not inside the loop part of Z^M either.
Let us now consider the algorithmic details of folding two concatenated RNA sequences. The missing backbone edge between the last nucleotide of the first molecule, position n1 in the concatenated sequence, and the first nucleotide of the second molecule (now numbered n1+1) will be referred to as the cut c. In each dimeric structure there is a unique loop L_c that contains the cut c. If c lies in the external loop of a structure S, then the two molecules A and B do not interact in this structure. Algorithmically, L_c is either a hairpin loop, an interior loop, or a multibranch loop. From an energetic point of view, however, L_c is an exterior loop, i.e., it does not contribute to the folding energy (relative to the random coil reference state). For example, an interior loop (i, j; k, l) does not contribute to the energy if either i ≤ n1 < k or l ≤ n1 < j. Naturally, dangling end contributions must not span the cut, either. Hairpin loops and interior loops (including the special cases of bulges and stacked pairs) can therefore be dealt with by a simple modification of the energy rules. In the case of the multiloop there is also no problem as long as one is only interested in energy minimization, since multiloops are always destabilizing and hence have a strictly positive energy contribution. Such a modified MFE algorithm has been described already in [41].
For partition function calculations and the generation of suboptimal structures, however, we have to ensure that every secondary structure is counted exactly once. This requires one to explicitly keep track of loops that contain the cut c. The cut c needs to be taken into account explicitly only in the recursion for the Z^B terms, where one has to distinguish between true hairpin and interior loops with closing pair (i, j) (upper alternatives in eq. (6)) and loops containing the cut c in their backbone (lower alternatives in eq. (6)). Explicitly, this means i ≤ n1 < j in the hairpin loop case; in the interior loop case, it means either i ≤ n1 < k or l ≤ n1 < j. Since multiloops are decomposed into two components, it is sufficient to ensure during the construction of Z^M1 and Z^M that these components neither start nor end adjacent to the cut, see Figure 1. In the remainder of this presentation we will again suppress the dangling end terms for simplicity of presentation.
A second complication arises from the initiation energy Θ_I that describes the entropy cost of bringing the two molecules into contact. This term, which is considered to be independent of sequence length and composition [47], has to be taken into account exactly once for every dimer structure, if and only if the structure contains at least one base pair (i, j) that crosses the cut, i.e., i ≤ n1 < j. The resulting bookkeeping problems can fortunately be avoided by introducing this term only after the dynamic programming tables have been filled: the contribution Z_A·Z_B of structures without intermolecular pairs is separated from Z, and only the remainder is weighted with the Boltzmann factor of Θ_I, as sketched below.
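A minimal sketch of this post hoc correction (Z, Z_A, Z_B and the initiation energy Θ_I as in the text; the RT default is an illustrative value in kcal/mol):

    import math

    def corrected_partition_function(Z, Z_A, Z_B, theta_I, RT=0.6):
        # Structures whose cut lies in the exterior loop contribute Z_A * Z_B
        # and carry no initiation term; only true dimers are re-weighted.
        Z_no_inter = Z_A * Z_B
        Z_dimer = Z - Z_no_inter
        return Z_no_inter + Z_dimer * math.exp(-theta_I / RT)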
Base pairing probabilities
McCaskill's algorithm [31] computes the base pairing probabilities from the partition functions of subsequences. Again, it seems easier to first perform the backtracking recursions on the "raw" partition functions that do not take into account the initiation contribution. This yields pairing probabilities P kl for an ensemble of structures that does not distinguish between true dimers and isolated structures for A and B and ignores the initiation energy. McCaskill's backwards recursions are formally almost identical to the case of folding a single linear sequence. We only have to exclude multiloop contributions in which the cut-point u between components coincides with the cut point c. All other cases are already taken care of in the forward recursion.
The "raw" values of P ij , which are computed without the initiation term, can now be corrected for this effect. To this end, we separately run the backward recursion starting from Z 1,n and from to obtain the base pairing probability matrices and for the isolated molecules. Note that equivalently we could compute and directly using the partition function version of RNAfold.
In solution, the probability of an intermolecular base pair is proportional to the (concentration dependent) probability that a dimer is formed at all. Thus, it makes sense to consider the conditional pair probabilities given that a dimer is formed, or not. The fraction of structures without intermolecular pairs in our partition function Z (i.e., in the cofold model without initiation contributions) is Z_A·Z_B/Z, and hence the fraction of true dimers is p* = 1 - Z_A·Z_B/Z. Now consider a base pair (i, j). If i ∈ A and j ∈ B, it must arise from the dimeric state. If i, j ∈ A or i, j ∈ B, however, it arises from the dimeric state with probability p* and from the monomeric state with probability 1 - p*. Thus the conditional pairing probabilities in the dimeric complexes can be computed as P̃_ij = (P_ij - (1 - p*)·P^A_ij)/p* for i, j ∈ A (and analogously with P^B for i, j ∈ B). The fraction of monomeric and dimeric structures, however, cannot be directly computed from the above model. As we shall see below, the solution of this problem requires that we explicitly take the concentrations of the RNAs into account.
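A compact sketch of this mixture correction (P, PA, PB are the raw and monomer pairing probability matrices, n1 the 0-based cut position; the function assumes p* > 0):

    def conditional_dimer_probs(P, PA, PB, Z, Z_A, Z_B, n1):
        p_star = 1.0 - (Z_A * Z_B) / Z       # fraction of true dimers
        n = len(P)
        Pd = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                if i < n1 <= j:              # intermolecular: dimeric state only
                    Pd[i][j] = P[i][j] / p_star
                else:                        # subtract the monomer contribution
                    Pmono = PA[i][j] if j < n1 else PB[i - n1][j - n1]
                    Pd[i][j] = (P[i][j] - (1.0 - p_star) * Pmono) / p_star
        return Pd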
Concentration dependence of RNA-RNA hybridization
Consider a (dilute) solution of two nucleic acid sequences A and B with concentrations a and b, respectively. Hybridization yields a distribution of five molecular species: the two monomers A and B, the two homodimers AA and BB, and the heterodimer AB. In principle, of course, more complex oligomers might also arise; we will, however, neglect them in our approach, arguing that ternary and higher complexes are disfavored by additional destabilizing initiation entropies.
The presentation in this section closely follows a recent paper by Dimitrov and Zuker [27], albeit we use slightly different definitions of the partition functions. The partition functions of the secondary structures of the monomeric states are Z_A and Z_B, respectively, as introduced in the previous section. In contrast to [27], we include the unfolded states in these partition functions. The partition functions Z_AA, Z_BB, and Z_AB, which are the output of the RNAcofold algorithm (denoted Z in the previous section), include those states in which each monomer forms base pairs only within itself, as well as the unfolded monomers. We can now define Z̃_AB = Z_AB - Z_A·Z_B (and analogously Z̃_AA and Z̃_BB) as the partition functions restricted to the true dimer states, but neglecting the initiation energies Θ_I. An additional symmetry correction is needed in the case of the homo-dimers: a homo-dimer structure can be invariant under the exchange of the two identical copies of the molecule. Such symmetric structures have a twofold rotational symmetry that reduces their conformation space by a factor of 2, resulting in an entropic penalty of ΔG_sym = RT ln 2. On the other hand, since the recursion for the partition functions, eq. (6), assumes two distinguishable molecules A and B, any asymmetric structure of a homodimer is in fact counted twice by the recursion, leading to the same correction as for symmetric structures.
Since both the initiation energy Θ_I and the symmetry correction ΔG_sym are independent of sequence length and composition, the thermodynamically correct partition functions for the three dimer species are obtained by weighting Z̃_AB, Z̃_AA, and Z̃_BB with the Boltzmann factor exp(-Θ_I/RT) (and, for the two homo-dimers, an additional factor 1/2 for the symmetry correction). From these corrected partition functions we get the free energies of the dimer species, such as F_AB = -RT ln of the corrected Z̃_AB, and the free energy of binding ΔF = F_AB - F_A - F_B. We assume that pressure and volume are constant and that the solution is sufficiently dilute so that excluded volume effects can be neglected. The many-particle partition function for this system is given in [27]; in it, a = n_A + 2n_AA + n_AB is the total number of molecules of type A put into the solution (equivalently for b); n_A, n_B, n_AA, n_BB, n_AB are the particle numbers for the five different monomer and dimer species, V is the volume, and n is the sum of the particle numbers. The system minimizes the free energy, i.e., it maximizes the many-particle partition function, by choosing the particle numbers optimally.
As in [27], the dimer concentrations are therefore determined by the mass action equilibria
[AB] = K_AB [A][B],  [AA] = K_AA [A]^2,  [BB] = K_BB [B]^2,   (14)
with equilibrium constants K_AB, K_AA, K_BB determined from the corrected dimer and monomer partition functions (15). Concentrations in eq. (14) are in mol/l.
Note, however, that the equilibrium constants in eq. (15) are computed from a different microscopic model than in [27], which in particular also includes internal base pairs within the dimers.
Together with the constraints on particle numbers, eq. (14) determines the equilibrium concentrations of all five species; the resulting system of equations is solved numerically, as sketched below.
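A minimal numerical sketch of this step (the unknowns are the free monomer concentrations; K_AA, K_BB, K_AB are the equilibrium constants of eq. (15), and the input values are illustrative):

    from scipy.optimize import fsolve

    def equilibrium_concentrations(a, b, K_AA, K_BB, K_AB):
        # Unknowns: the free monomer concentrations [A] and [B]; the dimers
        # then follow from the mass-action relations of eq. (14).
        def residual(v):
            A, B = v
            return (A + 2 * K_AA * A**2 + K_AB * A * B - a,
                    B + 2 * K_BB * B**2 + K_AB * A * B - b)
        A, B = fsolve(residual, (a, b))
        return {"A": A, "B": B, "AA": K_AA * A**2,
                "BB": K_BB * B**2, "AB": K_AB * A * B}

    print(equilibrium_concentrations(a=1e-8, b=2e-8, K_AA=1e6, K_BB=1e6, K_AB=1e9))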
Implementation and performance
The algorithm is implemented in ANSI C and is distributed as part of the Vienna RNA Package. The base pairing probabilities are represented as a dot plot in which squares with an area proportional to P_ij represent the raw pairing probabilities, see Fig. 2. The dot plot is provided as a PostScript file which is structured in such a way that the raw data can easily be recovered from it. RNAcofold also computes a table of monomer and dimer concentrations for a set of user-supplied initial conditions. This feature can readily be used to investigate the concentration dependence of RNA-RNA hybridization, see Fig. 3 for an example.
Like RNAfold, RNAcofold can be used to compute DNA dimers by replacing the RNA parameter set by a suitable set of DNA parameters. At present, the computation of DNA-RNA heterodimers is not supported. This would not only require a complete set of DNA-RNA parameters (stacking energies are available [49], but we are not aware of a complete set of loop energies) but also further complicate the evaluation of the loop energy contributions since pure RNA and pure DNA loops will have to be distinguished from mixed RNA-DNA loops.
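For orientation, a toy invocation of the program (assuming a ViennaRNA installation and the '&'-joined two-sequence input convention of current releases; any further flags are omitted here):

    import subprocess

    # The two strands of Fig. 2, joined by '&' on a single input line;
    # -p requests the partition function and base-pairing probabilities.
    out = subprocess.run(["RNAcofold", "-p"],
                         input="AUGAAGAUGA&CUGUCUGUCUUGAGACA\n",
                         text=True, capture_output=True, check=True)
    print(out.stdout)   # mfe structure and ensemble energy on stdout;
                        # the pair-probability dot plot is written to a .ps file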
Applications
Intermolecular binding of RNA molecules is important in a broad spectrum of cases, ranging from mRNA accessibility to siRNA or miRNA binding, RNA probe design, or designing RNA openers [50]. An important question that arises repeatedly is to explain differences in RNA-RNA binding between seemingly very similar or even identical binding sites. As demonstrated e.g. in [22,29,51,52], different RNA secondary structure of the target molecule can have dramatic effects on binding affinities even if the sequence of the binding site is identical.
Since the comparison of base pairing patterns is a crucial step in such investigations, we provide a tool for graphically comparing two dot plots, see Fig. 4. It is written in Perl-Tk and takes two dot plot files and, optionally, an alignment file as input. The differences between the two dot plots are displayed color-coded, the dot plot is zoomable, and the identity and probability(-difference) of a base pair are displayed when a box is clicked.
As a simple example of the applicability of RNAcofold, we re-evaluate here part of a recent study by Doench and Sharp [53]. In this work, the influence of GU base pairs on the effectiveness of translation attenuation by miRNAs is assayed by mutating binding sites and comparing attenuation effectiveness to wild-type binding sites. Introducing three GU base pairs into the mRNA/miRNA duplex, with only minor changes to the binding energy, almost completely destroyed the functionality of the binding site. While Doench and Sharp concluded that the mutated miRNA binding sites are not functional because of the GU base pairs, testing the dimer with RNAcofold shows that there is also a significant difference in the cofolding structure that might account for the activity difference without invoking sequence specificities: because of the secondary structure of the target, binding at the 5' end of the miRNA is much weaker than in the wild type, Fig. 4.
Limitations and future extensions
We have described here an algorithm to compute the partition function of the secondary structure of RNA dimers and to model in detail the thermodynamics of a mixture of two RNA species. At present, RNAcofold implements the most sophisticated method for modeling the interactions of two (large) RNAs. Because the no-pseudoknot condition is enforced to limit computational costs, our approach disregards certain interaction structures that are known to be important, including kissing hairpin complexes.
Figure 2. Dot plot (left) and mfe structure representation (right) of the cofolding structure of the two RNA molecules AUGAAGAUGA (red) and CUGUCUGUCUUGAGACA (blue). Dot plot: upper right, partition function; the area of the squares is proportional to the corresponding pair probabilities. Lower left: minimum free energy structure. The two lines forming a cross indicate the cut point; intermolecular base pairs are depicted in the green upper right (partition function) and lower left (mfe) rectangles.
The second limitation, which is of potential importance in particular in histochemical applications, is the restriction to dimeric complexes. More complex oligomers are likely to form in reality. The generalization of the present approach to trimers or tetramers is complicated by the fact that for more than two molecules the results of the calculation are not independent of the order of the concatenation any more, so that for M-mers (M -1)! permutations have to be considered separately. This also leads to bookkeeping problems since every secondary structure still has to be counted exactly once.
Figure 4. Difference dot plot of native and mutated secondary structure of a 3 GU mutation of the CXCR4 siRNA gene. The red part on the right hand side shows the base pairing probability of the 5' part of the microRNA, which is 80% higher in the native structure. This is an alternative explanation for the missing function of the mutant: because of the mutations, the stack a little to the left becomes more stable, and the probability of binding of the 5' end of the siRNA is reduced significantly. The color of the dots encodes the difference of the pair probabilities in the two molecules, such that positive (red) squares denote pairs that are more probable in the second molecule (see color bar). The area of the dots is proportional to the larger of the two pair probabilities.
Figure 3. Example of the concentration dependency for two mRNA-siRNA binding experiments. In [54], Schubert et al. designed several mRNAs with identical target sites for an siRNA si, located in different secondary structures. In variant A, the VR1 straight mRNA, the binding site is unpaired, while in the mutant mRNA VR1 HP5-11, A', only 11 bases remain unpaired. We assume an mRNA concentration of a = 10 nmol/l for both experiments. Despite the similar binding pattern, the binding energies (ΔF = F_AB - F_A - F_B) differ dramatically. In [54], the authors observed 10% expression for VR1 straight and 30% expression for the HP5-11 mutant. Our calculation shows that even if siRNA is added in excess, a large fraction of the VR1 HP5-11 mRNA remains unbound.
"Biology",
"Computer Science"
] |
Thermo-Hydrodynamics of Internally Heated Molten Salts for Innovative Nuclear Reactors
The problem of heat transfer in pipe flow has been extensively investigated in the past. Many different models have been proposed and adopted to predict the velocity profile, the eddy diffusivity, the temperature distributions, the friction factor and the heat transfer coefficient (Kays et al., 2004; Schlichting & Gersten, 2000). However, the majority of such studies describe the problem for non-internally heated fluids. Models for fluids with internal heat generation were developed more than 50 years ago (Kinney & Sparrow, 1966; Poppendiek, 1954; Siegel & Sparrow, 1959), giving in most cases a partial treatment of the problem in terms of boundary conditions and heat source distribution, and relying on a turbulent flow treatment that does not seem fully satisfactory in the light of more recent investigations (Churchill, 1997; 2002; Kays, 1994; Zagarola & Smits, 1997). Internally heated fluids are of great interest in the current development of Molten Salt Reactors (MSR) (LeBlanc, 2010), included as one of the six innovative nuclear reactors selected by the Generation IV International Forum (GIF-IV, 2002) for a more sustainable version of nuclear power. MSRs are circulating fuel reactors (Nicolino et al., 2008), which employ a non-classical (fluid-type) fuel constituted by a molten halide (fluoride or chloride) salt mixture playing the distinctive role of both heat source and coolant. By adopting classical correlations for the Nusselt number (e.g., Dittus-Boelter), the heat transfer coefficient of the MSR fuel can be overestimated by a non-negligible amount (Di Marcello et al., 2008). In the case of thermal-neutron-spectrum (graphite-moderated) MSRs (LeBlanc, 2010), this has significant consequences on the core temperature predictions and on the reactor dynamic behaviour (Luzzi et al., 2011). Such influence of the heat source within the fluid cannot be neglected, and thus requires proper investigation. The present chapter deals with this critical issue, first summarizing the main modelling efforts carried out by the authors (Di Marcello et al., 2010; Luzzi et al., 2010) to investigate the thermo-hydrodynamics of internally heated fluids, and then focusing on the heat transfer coefficient prediction that is relevant for analysing the molten salt behaviour encountered in MSRs.
Introduction
The problem of heat transfer in pipe flow has been extensively investigated in the past. Many different models have been proposed and adopted to predict the velocity profile, the eddy diffusivity, the temperature distributions, the friction factor and the heat transfer coefficient (Kays et al., 2004; Schlichting & Gersten, 2000). However, the majority of such studies describe the problem for non-internally heated fluids. Models for fluids with internal heat generation were developed more than 50 years ago (Kinney & Sparrow, 1966; Poppendiek, 1954; Siegel & Sparrow, 1959), giving in most cases a partial treatment of the problem in terms of boundary conditions and heat source distribution, and relying on a turbulent flow treatment that does not seem fully satisfactory in the light of more recent investigations (Churchill, 1997; Kays, 1994; Zagarola & Smits, 1997). Internally heated fluids are of great interest in the current development of Molten Salt Reactors (MSR) (LeBlanc, 2010), included as one of the six innovative nuclear reactors selected by the Generation IV International Forum (GIF-IV, 2002) for a more sustainable version of nuclear power. MSRs are circulating fuel reactors (Nicolino et al., 2008), which employ a non-classical (fluid-type) fuel constituted by a molten halide (fluoride or chloride) salt mixture playing the distinctive role of both heat source and coolant. By adopting classical correlations for the Nusselt number (e.g., Dittus-Boelter), the heat transfer coefficient of the MSR fuel can be overestimated by a non-negligible amount (Di Marcello et al., 2008). In the case of thermal-neutron-spectrum (graphite-moderated) MSRs (LeBlanc, 2010), this has significant consequences on the core temperature predictions and on the reactor dynamic behaviour (Luzzi et al., 2011). Such influence of the heat source within the fluid cannot be neglected, and thus requires proper investigation. The present chapter deals with this critical issue, first summarizing the main modelling efforts carried out by the authors (Di Marcello et al., 2010; Luzzi et al., 2010) to investigate the thermo-hydrodynamics of internally heated fluids, and then focusing on the heat transfer coefficient prediction that is relevant for analysing the molten salt behaviour encountered in MSRs.
The chapter is organized as follows. Section 2 provides a brief description of Molten Salt Reactors, focusing on their distinctive features, in terms of both sustainability (i.e., reduced radioactive waste generation, effective use of natural resources) and safety, with respect to the traditional configuration of nuclear reactors. Section 3 deals with the study of the molten salt heat transfer characteristics, which represent a key issue in the current development of MSRs. In particular, a "generalized approach" to evaluate the steady-state temperature distribution in a representative power channel of the reactor core is presented. This approach incorporates recent formulations of turbulent flow and convection (Churchill, 1997; 2002), and is built in order to carefully take into account the molten salt mixture specificities, the reactor core power conditions and the heat transfer in the graphite core structure. In Section 4, a preliminary correlation for the Nusselt number prediction is advanced for the case of simultaneous uniform wall heat flux and internal heat generation, on the basis of the results achieved by means of the presented generalized approach. In Section 5, the main conclusions of the present study are summarized.
Innovative nuclear reactors based on the molten salt technology
In recent years, there has been a growing interest in Molten Salt Reactors, which have been considered in the framework of the Generation IV International Forum (GIF-IV, 2002; 2009) because of their several potentialities and favourable features when compared with conventional solid-fuelled reactors (Forsberg et al., 2003; Furukawa et al., 2008; Hargraves & Moir, 2010; LeBlanc, 2010; Renault et al., 2010). Actually, MSRs meet many of the future goals of nuclear energy, in particular as concerns improved sustainability, inherent safety, and unique characteristics in terms of actinide burning and waste reduction (Nuttin et al., 2005), while benefiting from the past experience acquired at ORNL (1) with the molten salt technology.
Different from other GIF-IV projects, a specific reference configuration for the MSR has not been identified yet (GIF-IV, 2009). Current R&D activities on MSRs are devoted to this subject and many reactor configurations have been proposed until now (Luzzi et al., 2011). A molten salt reactor can be designed considering both a thermal and a fast neutron spectrum, and can operate as incinerator, breeder or converter (Forsberg, 2002), in critical or sub-critical (i.e., driven by an external neutron source) conditions. An example of the layout of a typical MSR is given in Fig. 1. The primary molten salt mixture (2) flows through the reactor core (constituted by graphite, if a thermal-neutron-spectrum reactor is under consideration) to a primary heat exchanger, where the heat is transferred to a secondary molten salt coolant. The primary salt then flows back to the reactor core. The heat is generated inside the core directly by the primary molten salt mixture, which plays the distinctive role of both fuel and coolant. The liquid fuel salt typically enters the reactor vessel at 560 °C and exits at 700 °C, with a pressure of ∼1 atmosphere. The secondary coolant loop transfers the heat to the power cycle (a multi-reheat helium Brayton cycle or a steam Rankine cycle) or to a hydrogen production facility (Forsberg et al., 2003).
MSRs are based on a liquid fuel, so that their technology is fundamentally different from the solid fuel technologies currently in use or envisaged for the other GIF-IV reactor concepts. Some of the advantages specific to MSRs (for instance, in terms of safety) originate directly from this characteristic, as pointed out in the next subsection. Furthermore, these reactors are particularly well adapted to the thorium fuel cycle (232Th-233U), which has the advantage of producing fewer transuranic isotopes than the uranium-plutonium fuel cycle (238U-239Pu) (Nuttin et al., 2005). Designs specific for the 232Th-233U cycle using fluoride salts have recently been termed Liquid Fluoride Thorium Reactors (LFTR). Among the most attractive features of the LFTR design is the higher sustainability of the back-end of the fuel cycle, in terms of waste profile (Hargraves & Moir, 2010). Adoption of thorium in a closed cycle (i.e., with full recycle of actinides) generates much less waste, of far lower radiotoxicity (LeBlanc, 2010), which requires a few hundred years of isolated storage versus the few hundred thousand years necessary for the waste generated by the conventional once-through uranium-plutonium fuel cycle adopted in current Light Water Reactors (LWR). LFTRs also feature a higher fuel cycle sustainability than current LWRs in terms of natural resource utilization, as can be appreciated by looking at the volume of material handled from the front-end phase of the fuel cycle to generate a comparable amount of electric power (Fig. 2).
(1) See www.ornl.gov/info/library or www.energyfromthorium.com/pdf/. (2) Typically, fluorides of fissile and/or fertile elements such as UF4, PuF3 and/or ThF4 are combined with carrier salts to form fluids. The most common carrier salts proposed are mixtures of enriched (>99.99%) 7LiF and BeF2, termed "flibe". A critical assessment of the potential molten salt mixtures for MSRs can be found in (Renault et al., 2010).
Besides the favourable features concerning the fuel cycle and the waste management, MSRs offer an array of other advantages in design, operation, safety and proliferation resistance over the traditional solid fuel design of nuclear reactors. A detailed review of such potentialities, as well as of the molten salt technology, is beyond the scope of the present chapter, and can be found in (Furukawa et al., 2008;LeBlanc, 2010;Luzzi et al., 2011;Renault et al., 2010). In the next subsection, the main operational and safety advantages achievable with molten salts are briefly presented, focusing the attention on the differences with respect to solid-fuelled nuclear reactors.
Operational and safety issues of MSRs
As the only liquid-fuelled reactor concept, the safety basis and characteristics of the MSR are considerably different from the other reactor concepts. This leads to different advantages, as outlined here below.
The reactor design characteristics minimize the potential for accident initiation. Unlike solid-fuelled reactors, fuel is added as needed, and consequently the reactor has almost no excess nuclear reactivity, which reduces the risk of accidental reactivity insertion. Thanks to a good neutron economy, and to the on-line fuel feeding and reprocessing (in which the fuel is cleaned up from neutron poisons such as Xe), MSRs usually feature a low fissile inventory. Fission products (except Xe and Kr) are highly soluble in the salt and are expected to remain inside the mixture under both operating and accident conditions. The fission products which are not soluble (e.g., Xe, Kr) are continuously and relatively easily removed from the molten fuel salt, and the potential for significant radioactivity release from the reactor is notably low.
A distinctive safety feature of the MSR design is that the primary system is at a low operating pressure even at high temperatures, due to the high boiling point (∼1400 °C at atmospheric pressure) of the fluoride salt mixture. This eliminates a major driving force (high pressure) for transport of radionuclides from the reactor to the environment during an accident. Moreover, the near-atmospheric pressure reduces the cost and the scale of MSR plant construction by lowering the scale of the containment requirements, because it obviates the need to contain pressure as in light water or gas cooled nuclear reactors (featured by thick-walled pressure vessels). Disruption in a transport line of the primary system would not result in an explosion, but in a leak of the molten salt, which would be captured in a catch basin, where it would passively cool and harden.
The fluid nature of the fuel means that reactor core meltdown is an irrelevant term. The liquid state of the core also enables, in most emergencies, a passive, thermally triggered draining of the fuel salt into multiple bunkered, geometrically sub-critical dump tanks, which are provided with passive decay heat cooling systems (see Fig. 1). Actually, at the bottom of the core, MSR designs have a freeze plug (a plug of salt, actively cooled by a fan to keep it at a temperature below the freezing point of the salt). If the fuel salt overheats and its temperature rises beyond a critical point, the freeze plug melts, and the liquid fuel overflows by gravity and is immediately evacuated from the core, pouring into the emergency dump tanks. This formidable safety tactic is only possible if the fuel is a liquid. Power is not needed to shut down the reactor, for example by manipulating control elements; rather, power is needed to prevent the shutdown of the reactor.
Further characteristics of fluoride salts (in both the fuel and the secondary system) are relevant from the safety and design/operational points of view. They are chemically inert, thermodynamically lacking the highly energetic reactions with environmental materials found in other reactor types (e.g., hot zirconium or sodium with water). In particular, the absence of water in the reactor core means no possible steam explosion or hydrogen production within the containment, as occurred in the Fukushima accident. In designs without graphite moderator, there is not even combustible material present. Moreover, molten fluoride salts are excellent coolants, with a 25% higher volumetric heat capacity than pressurized water and nearly 5 times that of liquid sodium. This results in more compact primary circuit components, like pumps and heat exchangers. They also have a much lower thermal conductivity than sodium, thus avoiding thermal shock issues. The high melting temperature (∼460 °C for the "flibe" mixture, LiF-BeF2 67-33 mol%) requires operational constraints on reactor temperature to avoid freezing during normal operating conditions or during maintenance operations, but from the safety point of view it means that molten salt accidentally escaping from the reactor vessel immediately freezes. Liquid fluoride salts are impervious to radiation damage, which therefore does not constitute a constraint on the fuel burn-up limit as it does for solid-fuelled cores. Actually, they are not subject to the structural stresses of solid fuel and their ionic bonds can tolerate unlimited levels of radiation damage, while eliminating the (rather high) cost of fabricating fuel elements and the (also high) cost of periodic shutdowns to replace them. In addition, a fluid fuel permits a homogeneous core composition, eliminating the complications connected to the refuelling strategy, which in conventional reactors comprises reshuffling of the fuel assemblies. MSRs can operate with different fissile materials and additives in the liquid fuel, offering the possibility to transmute and burn nuclear wastes such as plutonium, minor actinides and long-lived fission products (Forsberg, 2002).
As concerns inherent safety, MSR designs with fast spectrum (FS-MSRs) are characterized by a very strong negative void (expanded fuel is pushed out of the core) and temperature reactivity coefficients of fuel salt, which avoid the major design constraints required in solid-fuelled fast reactors and, acting instantly, permit the desirable property of automatic "load following operation". Namely, under conditions of changing electricity demand (load), the reactor tends to adjust its power. Therefore, FS-MSRs can provide a high power density, while maintaining excellent passive safety characteristics (Renault et al., 2010).
In conclusion, the available evidence about the MSR features suggests that the probability and the consequences of a large accident are much smaller than those of most solid-fuelled reactors, whereas the processing system for cleaning the fuel salt and the remote maintenance of major components indicate greater concerns associated with smaller accidents. MSRs involve more intensive manipulation of highly radioactive materials than other reactor classes, and thus small spills and contamination accidents appear to be more likely with this reactor class. The salt processing technology and, more generally, the "liquid salt chemistry" play a major role in the viability demonstration of MSR concepts and require essential R&D. Among the main issues, the following are worth mentioning: (i) the physico-chemical behaviour of coolant and fuel salts, including fission products and tritium; (ii) the compatibility of salts with structural materials for fuel and coolant circuits, as well as fuel processing materials development; (iii) the on-site fuel processing; and (iv) the maintenance, instrumentation and control of liquid salt chemistry (redox, purification, homogeneity). Further details can be found in (GIF-IV, 2009). As concerns the modelling efforts, in MSRs a strong coupling between neutronics and thermal-hydraulics exists, more evidently than in solid-fuelled reactors (Cammi et al., 2011a; Křepel et al., 2007; Nicolino et al., 2008). In particular, the following two distinctive features of molten salts (acting both as circulating fuel and coolant) are of relevance for the MSR dynamics and must be properly addressed from the modelling point of view: (i) as concerns neutronics, the concentration of delayed neutron precursors (DNP) follows an unusual pattern according to the fuel velocity field, and can significantly affect the neutron balance since a part of the DNPs can decay outside the reactor core (Cammi et al., 2011a; Křepel et al., 2007; Nicolino et al., 2008); (ii) as concerns thermal-hydraulics, the coolant is a fluid with internal heat generation whose heat transfer properties are considerably different from those of non-internally heated fluids. The following sections are dedicated to this last issue.
A generalized approach to the modelling of the MSR core channels
A typical configuration of a MSR core with a thermal-neutron spectrum is reported in Fig. 3a. It refers to the Molten Salt Breeder Reactor (MSBR) core (Robertson, 1971), usually considered as reference system for benchmark analyses and validation purposes (e.g., Křepel et al. (2007)). The core includes graphite blocks traversed by circular channels (Fig. 3b), through which the power generating molten salt flows. The present work is focused on heat transfer in a single-channel of the core (Fig. 3c), considering the most relevant features related to its physical behaviour modelling, while neglecting the details of the actual geometrical domain.
In particular, the analysed geometry consists of a smooth circular channel with constant flow section surrounded by a solid region (represented by the graphite matrix in the specific case of interest), within which the fluid flow is hydro-dynamically developed but thermally developing, as depicted in Fig. 3d. This situation is consistent with the flow characteristics encountered in the MSR core channels, both in steady-state and transient operation (Luzzi et al., 2011). Even if the graphite blocks can be square or hexagonal shaped, it is a good approximation to model them as a cylindrical shell. In this way, the adopted geometry is axial-symmetric and the use of a two-dimensional domain is made possible. (Fig. 3: (c) cylindrical shell approximation of the single-channel; (d) analysed geometry and coordinate system.)
The analysed physical situation is therefore represented by the molten salt flowing through a cylindrical channel surrounded by graphite, with both the fluid and the solid (see at the end of subsection 3.1) generating power. To properly treat the heat transfer characteristics of such a system, a "generalized approach" is undertaken. This approach treats the problem of heat transfer by forced convection of a fluid inside a circular pipe (generally known as the "Graetz problem") according to a general mathematical formulation that also considers the internal heat generation of the fluid.
In principle, the adopted model is applicable and valid for annular pipes and parallel plate channels but, in the interest of simplicity and practicality, the results are herein limited to circular pipes. The detailed derivation and numerical implementation/discussion of the solution can be found in (Di Marcello et al., 2010; Luzzi et al., 2010), hence in the next two subsections only the essential parts are reproduced.
Mathematical formulation of the generalized approach
With reference to the "Graetz problem", the following expression is adopted for the energy equation: where the time-averaged axial component of velocity (u) and the eddy diffusivity for heat (ε H ) are assumed to depend only on the radial coordinate (r). For the meaning of the other symbols, see the nomenclature in Section 7. Equation 1 is valid under the following hypotheses: (i) axial-symmetric conditions are taken into account; (ii) steady-state exists; (iii) the fluid is incompressible with no phase change, and constant physical properties; (iv) the hydrodynamic pattern is established; (v) natural convection effects are not considered; and (vi) axial conduction of heat is negligible. The last assumption has been shown by Weigand et al. (2001) to introduce a negligible error for Peclet numbers larger than 10 2 . This condition is satisfied in the case of MSRs, which are typically featured by Peclet numbers greater than 10 4 . The boundary conditions for Equation 1, at r = 0 (at the pipe centreline), must be of the second kind (see Equation 2a) because of assumption (i), while at r = R 1 (at the pipe wall) they can be taken as any combination of the boundary conditions of the first, second, and third kind, as expressed by Equations 2b, 2c and 2d, respectively: Finally, the boundary condition at the pipe entrance (z = 0) is given by Equation 3: In order to get the solution of the Equation 1 with the boundary conditions 2 and 3, it is convenient to express them in a dimensionless form (see Fig. 4), and then to adopt the so-called "splitting-up procedure". Such procedure consists in splitting-up the solution of the original problem into two parts, as given by Equation 5, and can be applied by assuming that the non-homogeneous term Φ(Z) and the term P(R, Z)=R · S(R, Z) can be expressed in terms of q-order polynomials of the axial coordinate Z as follows: According to this procedure, the following final solution for the temperature field in the fluid is achieved (details can be found in Di ): where with j = q, q − 1, q − 2, ..., 1, 0 and θ q+1 (R)=0. In Equation 5, Ψ i (R) and μ i are the eigenfunctions and the eigenvalues, respectively, of the Sturm-Liouville problem represented by the differential Equation 8 with its boundary conditions 9: Once the temperature distribution, θ(R, Z), in the fluid flow is determined, the Nusselt number can be evaluated by means of Equations 10 and 11: In the case of uniform internal heat generation and constant wall heat flux, the dependence of Nu on the axial coordinate vanishes when fully developed flow conditions occur . This fact will be employed in the derivation of the heat transfer correlation form for internally heated fluids presented in subsection 4.2.
The model described above permits evaluation, in a simple and prompt way, of fundamental quantities such as the temperature distributions and the Nusselt number. It is applicable for: (i) boundary conditions (at the pipe wall) of the first, second or third kind, with arbitrary axial distribution; (ii) arbitrary radial distribution of the inlet temperature T_in(r); and (iii) arbitrary shape of the internal heat source Q(r, z) in both the radial and axial directions. The model can be implemented for both laminar and turbulent flow. In the first case, the solution can be obtained by considering the Hagen-Poiseuille parabolic velocity profile and zero eddy diffusivity ε_H, i.e., f(R) = 2(1 - R^2) and g(R) = 1. In the second case, to obtain the velocity profile and the eddy diffusivity for heat, both needed to solve the original Equation 1, the adoption of a formulation for turbulent flow is required (see subsection 3.2 for details). In particular, the solution considered here includes the recent formulations of turbulent flow and convection of Churchill (1997) and was assessed for a large variety of fluids, showing that the generalized approach is able to reproduce with good agreement the experimental data concerning heat transfer for both fully developed and thermally developing flow conditions, in a wide range of Prandtl (10^-2 < Pr < 10^4) and Reynolds (2·10^3 < Re < 5·10^5) numbers, with and without internal heat generation.
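The laminar case lends itself to a compact numerical illustration. The sketch below is not the eigenfunction solution of the text; it is a finite-volume marching discretization of the physics of Equation 1, with f(R) = 2(1 - R^2), g(R) = 1, uniform wall heat flux and uniform internal generation. The dimensionless variables (θ scaled by the wall flux, Z by the Peclet scaling) are assumptions of this sketch.

    import numpy as np

    def developed_nusselt(S=0.0, N=80, dZ=1e-3, steps=2000):
        """S = Q*R1/q_w: source strength relative to the wall heat flux."""
        h = 1.0 / N
        Rc = (np.arange(N) + 0.5) * h            # cell centres in R = r/R1
        Rf = np.arange(N + 1) * h                # cell faces
        f = 2.0 * (1.0 - Rc**2)                  # Hagen-Poiseuille profile

        # finite-volume diffusion matrix for (1/R) d/dR (R dtheta/dR)
        K = np.zeros((N, N))
        for i in range(N):
            if i > 0:                            # inner face (zero flux at R = 0)
                K[i, i - 1] -= Rf[i] / h
                K[i, i] += Rf[i] / h
            if i < N - 1:                        # outer face of interior cells
                K[i, i + 1] -= Rf[i + 1] / h
                K[i, i] += Rf[i + 1] / h
        M = f * Rc * h                           # advective "mass" per cell
        rhs = S * Rc * h                         # uniform internal generation
        rhs[-1] += 1.0                           # prescribed wall flux at R = 1

        step = np.linalg.inv(np.diag(M / dZ) + K)   # implicit Euler in Z
        theta = np.zeros(N)                      # uniform inlet temperature
        for _ in range(steps):
            theta = step @ (M / dZ * theta + rhs)

        theta_w = theta[-1] + 0.5 * h            # wall value from the flux BC
        theta_b = np.sum(f * Rc * theta) / np.sum(f * Rc)
        return 2.0 / (theta_w - theta_b)         # Nu = q_w * D / (k * (Tw - Tb))

    print(developed_nusselt(S=0.0))   # approx. 4.36, the classical 48/11
    print(developed_nusselt(S=5.0))   # lower: internal generation penalizes Nu

The two printed values illustrate the chapter's central point: with the Nusselt number still defined through the wall flux, uniform internal generation increases the wall-to-bulk temperature difference and therefore lowers Nu with respect to the classical non-heated case.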
The above generalized approach can be easily extended to evaluate the entire temperature field in the reactor core single-channel, by taking into account the heat conduction in the graphite matrix as well as the corresponding internal heat generation due to gamma heating and neutron irradiation. The "overall solution" (fluid + solid) of such a heat transfer problem, with reference to the geometry shown in Fig. 3d, can be achieved by combining the above solution for the turbulent pipe flow of the internally heated molten salt ("Graetz problem") with the solution of the heat conduction problem in the solid region (graphite) surrounding it. The detailed derivation of the "overall solution" can be found in (Di Marcello et al., 2010; Luzzi et al., 2010). The final result is shown in Fig. 5, which refers to a single-channel representative of the average steady-state conditions of the MSBR core (Robertson, 1971), shown in Fig. 3a. As far as the boundary conditions are concerned, a constant temperature (T_in) is imposed at the channel inlet (z = 0), while a convective flux condition is prescribed at the outlet (z = H). In the lower part of the solid annulus, at z = 0, the same temperature as that of the fluid entering the channel is fixed. Adiabatic conditions are imposed on the external radius R2 and in correspondence of the outlet section (z = H). On the wall between the fluid and the solid (r = R1), continuity of temperature and wall heat flux is imposed.
As can be noticed in Fig. 5, a good agreement is found between the "overall solution" achieved by means of the generalized approach and a dedicated Computational Fluid Dynamics (CFD) simulation. The CFD calculation was performed by means of the finite volume software FLUENT (Fluent, 2005): (i) adopting the incompressible RANS (Reynolds Averaged Navier Stokes) equations for the fluid motion with Boussinesq's eddy viscosity hypothesis; (ii) considering the standard "k − ε turbulence model" and the enhanced wall treatment approach available in FLUENT; and (iii) in steady-state and hydro-dynamically developed conditions, with reference to a two-dimensional, axial-symmetric (r, z) computational domain, in accordance with the hypotheses and the boundary conditions mentioned above. Further details concerning the mesh strategy and the numerical model are given in (Luzzi et al., 2011).
Turbulent flow formulation
As pointed out, to obtain the solution of the turbulent "Graetz problem", the reinterpretation of turbulent flow and convection of Churchill (1997; 2002) is considered, so that the eddy diffusivity, and thus the velocity profile, are expressed in terms of the local turbulent shear stress. In particular, the relationship between the eddy diffusivity for momentum ε_M and the dimensionless turbulent shear stress (u'v')++ (the fraction of the local shear stress carried by the turbulent fluctuations) in hydro-dynamically developed flow is a "one-to-one correspondence" (Churchill, 1997), ε_M/ν = (u'v')++ / (1 - (u'v')++) (Equation 12). From Equations 12 to 15, the velocity profile can be obtained (Equation 16). It can be noticed that explicit expressions for the dimensionless turbulent shear stress and the friction factor are required in order to evaluate the velocity profile. For the first one, the correlation suggested by Heng et al. (1998) and based on the turbulent velocity measurements of Zagarola & Smits (1997) is adopted (Equation 17). As far as the Darcy friction factor is concerned, the recent correlation proposed by Guo & Julien (2003) is employed. Finally, in order to obtain the eddy diffusivity for heat ε_H, which is also needed in the solution of the turbulent "Graetz problem" (Equation 1), the turbulent Prandtl number (Pr_T = ε_M/ε_H) is evaluated through the correlation proposed by Kays (1994), reported in Equation 20. This expression was found to be in good agreement with most experimental and computed values of the turbulent Prandtl number (Kays, 1994). Fig. 6 shows the comparison, in terms of velocity profile, between the use of Equation 17 (in the generalized approach) and a CFD calculation performed by means of FLUENT, with reference to the MSBR case (see subsection 3.1). As a result, a generally good agreement is found. A more complete study for different Reynolds numbers and different turbulence models is available in (Di Marcello et al., 2010; Luzzi et al., 2010). It is worth pointing out that a proper evaluation of the molten salt velocity field is a relevant aspect for the dynamic behaviour of MSRs, due to the drift of DNPs and their distribution inside the fluid (Cammi et al., 2011b).
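The chain from the turbulent shear stress to both eddy diffusivities can be sketched compactly. Equation 12 follows from the definition of (u'v')++ as the turbulent fraction of the local shear stress; the constants of the turbulent Prandtl number below are assumed illustrative values, not the chapter's Equation 20.

    import numpy as np

    def eddy_diffusivities(uvpp, Pr, nu=1.0):
        # Equation 12: eps_M from the turbulent fraction of the shear stress.
        eps_M = nu * uvpp / (1.0 - uvpp)
        # Kays-type turbulent Prandtl number (constants assumed here).
        Pe_t = np.maximum(eps_M / nu * Pr, 1e-12)   # turbulent Peclet number
        Pr_T = 0.85 + 0.7 / Pe_t
        return eps_M, eps_M / Pr_T                  # eps_M and eps_H

    uvpp = np.array([0.0, 0.5, 0.9, 0.99])          # illustrative radial samples
    print(eddy_diffusivities(uvpp, Pr=10.0))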
Derivation of a heat transfer correlation for the MSR core channels
In the previous section, a detailed treatment of the heat transfer for internally heated fluids was presented. Treatments of this kind, as well as dedicated CFD codes, can be used to investigate in depth the heat transfer process in many engineering applications. Nevertheless, when dealing with complex systems, computational requirements often make the direct application of such techniques impossible. This is particularly the case for models dedicated to the transient analysis of graphite-moderated MSRs, in which the reactor core is actually composed of thousands of channels. In this context, it can be useful to rely on a simplified treatment, such as correlations able to predict the Nusselt number, and thus the heat transfer coefficient. In this section, the generalized model presented in subsection 3.1 is adopted to derive a simple correlation for the heat transfer in channels with internally heated fluids.
Overview of available correlations
Molten salts are Newtonian fluids characterized by Prandtl numbers on the order of 10. A number of correlations suitable for a wide range of Reynolds and Prandtl numbers have been proposed in the past and can also be used for molten salts. Examples of such correlations are the Dittus-Boelter (Dittus & Boelter, 1930), Colburn (Colburn, 1933), and Sieder-Tate (Sieder & Tate, 1936) correlations for turbulent flows, and the Hausen (Hausen, 1959) and Gnielinski (Gnielinski, 1976) correlations, which are valid also in the transition between laminar and turbulent flow.
More recent studies have been carried out to increase the accuracy of the mentioned correlations for the Reynolds and Prandtl numbers of interest in specific fields. As regards molten salts, the Hausen and Gnielinski correlations have recently been checked by means of a dedicated experimental facility, and a slightly modified version of the Gnielinski correlation has been proposed (Yu-ting et al., 2009). Another relevant work can be found in (Bin et al., 2009), where the Sieder-Tate and Hausen correlations are also assessed and a modified Sieder-Tate correlation is proposed.
All the correlations mentioned above can be used with a good degree of accuracy in many engineering applications, but they are not suitable for situations where the working fluid features internal heat generation, as in the case of MSRs. Recently, it has been shown that the use of such classical correlations for predicting heat transfer in MSRs can lead to an underestimate of the temperature difference between molten salt and graphite as high as 70%. Specific correlations should therefore be used for internally heated fluids. Some preliminary studies on the subject are available in the literature (Kinney & Sparrow, 1966; Poppendiek, 1954; Siegel & Sparrow, 1959), but they are in most cases partial treatments, and they do not lead to correlations usable for turbulent flow.
Analytical derivation of heat transfer correlation form for internally heated fluids
In this subsection, the problem of heat transfer in channels with internally heated fluids is treated analytically, and the general form of the Nusselt number correlation for such a situation is derived under the following assumptions: (i) smooth channel with circular cross section; (ii) fully developed turbulent flow conditions; (iii) uniform internal heat generation; and (iv) constant wall heat flux. This situation can be treated as the superposition of two simpler ones, i.e.: 1) a flow without internal heat generation, but with constant wall heat flux, which is the typical case considered by classical heat transfer correlations; and 2) a channel with adiabatic walls and internal heat generation.
The possibility of such a superposition is guaranteed by the linearity (with respect to temperature) of the energy equation (see Equation 1). Hence, it is possible to compute the difference between wall and bulk temperatures as follows (see also Fig. 4):

$$(T_w - T_b)_{Q+j_w} = (T_w - T_b)_{Q} + (T_w - T_b)_{j_w} \qquad (21)$$

In Equation 21 and in the following ones, the subscript $Q+j_w$ refers to the complete situation with both internal heat generation and wall heat flux, while the subscripts $Q$ and $j_w$ indicate that the temperature differences are computed in the simplified situations of internal heat generation alone (situation 2) and wall heat flux alone (situation 1), respectively. Introducing Equation 21 into the definition of the heat transfer coefficient leads to:

$$\frac{1}{h_{Q+j_w}} = \frac{(T_w - T_b)_{Q+j_w}}{j_w} = \frac{1}{h_{j_w}} + \frac{1}{h_{Q}}, \qquad h_{Q} \equiv \frac{j_w}{(T_w - T_b)_{Q}} \qquad (22)$$

It should be mentioned at this point that the term $h_Q$, although similar to $h_{j_w}$ in its definition, does not represent a heat transfer coefficient: its definition combines the temperatures of situation 2 (with internal heat generation and adiabatic walls) and the heat flux of situation 1 (without internal heat generation). It is possible to rewrite Equation 22 in terms of Nusselt numbers as follows:

$$\frac{1}{Nu_{Q+j_w}} = \frac{1}{Nu_{j_w}} + \frac{1}{Nu_{Q}} \qquad (23)$$

Equation 23 implies that the Nusselt number in the case of internally heated fluids and constant wall heat flux ($Nu_{Q+j_w}$) can be computed by means of classical correlations for the value of $Nu_{j_w}$ (subsection 4.1), through the introduction of a correction factor of the form:

$$\gamma = \frac{Nu_{Q+j_w}}{Nu_{j_w}} = \frac{1}{1+\delta}, \qquad \delta \equiv \frac{Nu_{j_w}}{Nu_{Q}} \qquad (24)$$

Hence, what is required is the derivation of the term $\delta$ as a function of the parameters characterizing the system. Assuming constant fluid properties, it is possible to write:

$$\delta = \delta\left(C_p, \mu, \rho, k, D, u_{avg}, Q, j_w\right) \qquad (25)$$

with

$$\delta = \frac{h_{j_w}\,(T_w - T_b)_Q}{j_w} \qquad (26)$$

where $h_{j_w}$ and $(T_w - T_b)_Q$ are independent of $j_w$. Moreover, the term $(T_w - T_b)_Q$ is directly proportional to $Q$ (see for example Poppendiek, 1954). It follows:

$$\delta = \frac{Q}{j_w}\,\psi\left(C_p, \mu, \rho, k, D, u_{avg}\right) \qquad (27)$$

in which the dependence on the two parameters $Q$ and $j_w$ has been made explicit. Use of the Π-theorem (Langhaar, 1962) in Equation 27 finally leads to:

$$\delta = \frac{QD}{j_w}\,\phi(Pr, Re) \qquad (28)$$

Summarizing, a correlation for the Nusselt number, in the case of simultaneous wall heat flux and internal heat generation, must take the following form:

$$Nu_{Q+j_w} = \frac{Nu_{j_w}}{1 + \frac{QD}{j_w}\,\phi(Pr, Re)} \qquad (29)$$

If $Nu_{j_w}$ is assumed to be known from available correlations (subsection 4.1), Equation 29 allows the heat transfer in a channel with internal heat generation to be fully characterized, simply by finding the dependency of $\phi$ upon the Reynolds and Prandtl numbers.
Derivation of a correlation for the Nusselt number in the core channels of MSRs
The results of the previous subsection are of general validity under the mentioned assumptions, and can be used to derive a correlation suitable for computing the Nusselt number in the case of channels with internally heated fluids. If the classical correlations available in the literature are adopted for $Nu_{j_w}$, what remains is the derivation of the function $\phi(Pr, Re)$. In the case of laminar flow ($Nu_{j_w} = 48/11$), the function $\phi(Pr, Re)$ can be analytically shown to be constant and equal to 3/44 (Poppendiek, 1954); in the case of turbulent flow, however, it can have a complex shape. Nevertheless, by restricting the field of application, it is reasonable to assume a simple dependence such as:

$$\phi(Pr, Re) = a_1\, Pr^{a_2}\, Re^{a_3} \qquad (30)$$

where $a_1$, $a_2$ and $a_3$ are constants.
At this point, it is possible to employ the generalized approach described in subsection 3.1 to evaluate the function $\phi$ and derive proper values for the constants $a_1$, $a_2$ and $a_3$. In particular, Equation 29 can be rearranged as:

$$\phi(Pr, Re) = \frac{j_w}{QD}\left(\frac{Nu_{j_w}}{Nu_{Q+j_w}} - 1\right) \qquad (31)$$

where the Nusselt numbers can be computed directly using Equation 10.
Focusing on the situation encountered in the core channels of graphite-moderated MSRs, Reynolds numbers typically range between 10³ and 10⁵. Consistently with the choice of a simple correlation form like Equation 30, the investigation can be restricted to conditions of fully developed turbulence (Re ≥ 10⁴). By employing Equations 31 and 10 as described above, the function φ has been evaluated for 100 different combinations of Prandtl and Reynolds numbers in the ranges 7.5 < Pr < 20 and 10⁴ < Re < 10⁵. Interpolating Equation 30 in the least-squares sense, the correlation of Equation 32 is finally achieved. The average interpolation error is equal to 4.9%, with a maximum error of 10.2%. These discrepancies can be considered acceptable for preliminary calculations. It should be mentioned that, on the basis of the same reasoning considered here, another correlation was presented in (Di Marcello et al.). Such correlation was characterized by a much more complex functional form, but was able to interpolate the data provided by the generalized model with an average error of 3.5%, in the wide ranges 3·10³ < Re < 2·10⁵ and 0.7 < Pr < 10².
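As an illustration of this interpolation step, the following sketch fits the assumed power-law form of Equation 30 to a synthetic set of (Pr, Re, φ) samples by linear least squares in log space. The sample data are placeholders, not the values produced by the generalized model, so the fitted constants are illustrative only; working in log space turns the power law into a linear model, so a single least-squares solve recovers the three constants.

```python
# Minimal sketch: fit phi(Pr, Re) = a1 * Pr**a2 * Re**a3 in the least-squares
# sense, in log space. The sampled values below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
Pr = rng.uniform(7.5, 20.0, 100)     # Prandtl numbers in the stated range
Re = rng.uniform(1e4, 1e5, 100)      # Reynolds numbers in the stated range
# Hypothetical "computed" phi values with the assumed power-law structure:
phi = 0.005 * Pr**-0.05 * Re**-0.01 * rng.lognormal(0.0, 0.02, 100)

# log(phi) = log(a1) + a2*log(Pr) + a3*log(Re) is linear in the unknowns.
A = np.column_stack([np.ones_like(Pr), np.log(Pr), np.log(Re)])
coef, *_ = np.linalg.lstsq(A, np.log(phi), rcond=None)
a1, a2, a3 = np.exp(coef[0]), coef[1], coef[2]

phi_fit = a1 * Pr**a2 * Re**a3
err = np.abs(phi_fit - phi) / phi
print(f"a1={a1:.3e}, a2={a2:.3f}, a3={a3:.3f}")
print(f"average error {err.mean():.1%}, maximum error {err.max():.1%}")
```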
Adopting Equation 32, the overall correlation for the Nusselt number, in the case of both internal heat generation and wall heat flux, can be written as in Equation 33 (i.e., Equation 29 with Equation 32 for φ). The MSBR (Fig. 3) can be selected as an example of application. The molten salt considered for such a reactor has a Prandtl number equal to 11, and the Reynolds number in the core channels is on average 2·10⁴. According to these values, the function φ(Pr, Re) computed through Equation 32 is equal to 4.49·10⁻³. For the MSBR, the ratio QD/j_w is on average equal to 123.4 and, consequently, the correction factor γ is equal to 0.644. This indicates that the direct use of a classical correlation for the Nusselt number would lead to an overestimate of the heat transfer coefficient on the order of 40%. In Fig. 7, the Nusselt number obtained with Equation 33 is compared with some of the correlations available in the literature, for a Prandtl number equal to 11 and Reynolds numbers lower than 10⁵, which is the range of interest for MSRs. For both Equation 33 and the correlation of (Di Marcello et al.), the Gnielinski correlation was used for Nu_{j_w}; the agreement between the two is clearly visible. An overestimate is instead generally observed for the classical correlations, which do not account for the internal heat generation. Such an overestimate can be notable at low Reynolds numbers, where Nusselt numbers are over-predicted by as much as a factor of four. Fig. 7 also shows the results obtained through the CFD code FLUENT (see the end of subsection 3.1 for details), which are in good accordance with the proposed correlation (Equation 33).
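The quoted MSBR figures can be checked with a few lines of code. The sketch below assumes the reconstructed form of Equation 33, Nu_{Q+j_w} = γ·Nu_{j_w} with γ = 1/(1 + (QD/j_w)·φ), and uses the standard Gnielinski correlation for Nu_{j_w}:

```python
# Numerical check of the MSBR correction factor, using the values quoted in
# the text (Pr = 11, Re = 2e4, phi = 4.49e-3, QD/j_w = 123.4).
import math

Pr, Re = 11.0, 2.0e4
phi = 4.49e-3          # phi(Pr, Re) from Equation 32 (value quoted in the text)
QD_over_jw = 123.4     # dimensionless ratio Q*D/j_w for the MSBR

# Gnielinski correlation for the Nusselt number without internal heating:
f = (0.79 * math.log(Re) - 1.64) ** -2                 # Darcy friction factor
Nu_jw = (f / 8) * (Re - 1000) * Pr / (
    1 + 12.7 * math.sqrt(f / 8) * (Pr ** (2 / 3) - 1))

gamma = 1.0 / (1.0 + QD_over_jw * phi)                 # correction factor
Nu_total = gamma * Nu_jw                               # Equation 33
print(f"Nu_jw = {Nu_jw:.1f}, gamma = {gamma:.3f}, Nu_Q+jw = {Nu_total:.1f}")
# gamma evaluates to 0.644, matching the value reported in the text.
```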
Possible experimental set-up
The analytical treatment described in the previous sections can provide reasonable results in preliminary studies, but the validity of its results can be tested only through appropriate experimental campaigns.
In particular, what is needed to assess the validity of Equation 33 is the experimental evaluation of the function φ(Pr, Re). Equations 26 and 28, together with the definition of the Nusselt number, lead to:

$$\phi(Pr, Re) = \frac{Nu_{j_w}\, k\, (T_w - T_b)_Q}{Q\, D^2} \qquad (34)$$

Assuming Nu_{j_w} as known from available correlations, what is necessary from an experimental point of view is the evaluation of the term (T_w − T_b)_Q. This requires a facility able to reproduce the condition of an internally heated, thermally and hydro-dynamically developed turbulent flow in a straight, circular and adiabatic channel (according to the assumptions pointed out at the beginning of subsection 4.2). The experimental set-up must be suitable for measuring wall and bulk temperatures, as well as for ensuring a uniform and precisely known internal heat generation Q. In addition, determining the dependence of φ upon the Prandtl and Reynolds numbers requires the capability to vary the fluid properties in a known way, as well as to vary and measure the fluid velocity.
A possible set-up of the thermal-hydraulic circuit required for the experimental analyses of interest was already adopted by Kinney & Sparrow (1966). That set-up was used to test water but, with proper modifications, can be adopted for molten salts as well. A schematic view of the possible experimental facility is shown in Fig. 8. A closed loop is used, with a heat exchanger that cools the working fluid after it has been warmed in the test section. The test section must be long enough to ensure conditions of full thermal and hydro-dynamic development. Internal heating of the fluid is obtained by forcing an electrical current to flow through it. This is possible by choosing an electrically insulating material for the channel wall in the test section, and by placing electrodes at its two ends.
In this way, the current is forced to flow longitudinally through the fluid. Adopting electricity to heat the molten salt also solves the problem of knowing the volumetric power Q, which can easily be derived by measuring the electric current in the circuit and the voltage difference at the electrodes. The velocity of the fluid can be varied by using an appropriate pump or valves, and can be measured by means of standard techniques (e.g., a Coriolis flow meter). Wall and bulk temperatures can be measured by means of thermocouples and mixing chambers. Finally, the Prandtl number can be varied by changing the fluid or the fluid temperature, or by using suitable "thickening" agents (Kedl, 1970).
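A minimal sketch of the corresponding data reduction is given below; it relies on the reconstructed Equations 26, 28 and 34, and all numerical values (geometry, fluid properties, measured temperatures, Nu_{j_w}) are illustrative placeholders:

```python
# Sketch of the data reduction for the proposed facility: the volumetric heat
# source Q follows from the measured electric current and electrode voltage,
# and phi is then estimated from the measured (T_w - T_b)_Q.
import math

I_el, V_el = 200.0, 50.0        # measured current [A] and voltage [V] (assumed)
R1, L = 0.008, 2.0              # channel radius [m] and heated length [m] (assumed)
D = 2 * R1
volume = math.pi * R1**2 * L
Q = I_el * V_el / volume        # uniform volumetric heat source [W/m^3]

k = 1.2                         # fluid thermal conductivity [W/(m K)] (assumed)
dT_Q = 3.5                      # measured wall-bulk temperature difference [K]
Nu_jw = 150.0                   # from an available correlation (assumed)

# From delta = h_jw*(T_w - T_b)_Q/j_w and delta = (Q*D/j_w)*phi it follows
# that phi = Nu_jw * k * (T_w - T_b)_Q / (Q * D**2), independent of j_w.
phi = Nu_jw * k * dT_Q / (Q * D**2)
print(f"Q = {Q:.3e} W/m^3, phi = {phi:.3e}")
```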
Conclusion
The thermo-hydrodynamics of molten salts is a key issue in the current development of MSRs, which feature favourable characteristics with respect to conventional solid-fuelled reactors owing to the peculiarity of a nuclear fuel that also serves as coolant, as discussed in Section 2. In this study, a "generalized approach" was undertaken for the turbulent "Graetz problem", with reference to fluids flowing through smooth and straight circular pipes within which internal heat generation occurs, consistently with the flow characteristics encountered in MSRs. Such a generalized model, which incorporates recent formulations of turbulent flow and convection, represents an original contribution in the field of thermo-hydrodynamics, and allows boundary conditions of the first, second and third kind to be considered, with arbitrary axial distribution, arbitrary inlet temperature radial distribution, and arbitrary variations of the internal heat source in both the radial and axial directions. The "overall solution" (molten salt + graphite) presented in Section 3 is thought to be useful in the following two respects: (i) it provides insight into the heat transfer characteristics of graphite-moderated MSR core channels, as shown for the reference case of the MSBR; and (ii) it permits the simple and prompt evaluation of some fundamental quantities (i.e., the distributions of temperature and velocity, and the Nusselt number). Moreover, the presented generalized model offers a useful validation framework for assessing CFD codes (Luzzi et al., 2011) and can be an important interpretative support for numerical solutions in steady-state conditions, in the prospect of more complex, multi-physics (thermo-hydrodynamics + neutronics) analyses of graphite-moderated MSR core channels (Cammi et al., 2011a). In Section 4, the analytical derivation of the heat transfer correlation form for internally heated fluids was discussed, and a preliminary correlation for the Nusselt number was advanced for the case of simultaneous uniform wall heat flux and internal heat generation, on the basis of the results achieved by means of the "generalized approach". Such a correlation, which covers the range of Prandtl and Reynolds numbers of interest for molten salts, provides a simple description of the heat transfer for internally heated fluids, showing that the use of classical correlations (without internal heat generation) for predicting heat transfer in MSRs can lead to an underestimate of graphite temperatures. Although obtained through a detailed analytical treatment, the proposed Nusselt number correlation needs to be verified on experimental grounds; to this purpose, the testing facility and the procedure required for its validation have been briefly discussed.
Acknowledgment
The authors express their gratitude to Dr. Valentino Di Marcello for performing some of the computations used in this study. | 10,221.4 | 2012-03-14T00:00:00.000 | [
"Physics"
] |
Highly Integrated Cladding Mode Stripper Array for Compact High-Power Industrial Fiber Laser
A design integrating the multiple cladding mode strippers used in fiber laser architectures into a single device is proposed. This approach can increase the compactness of fiber lasers, thus contributing to industrial laser processing applications. By offset-placing the most intense light-stripping parts, for instance, by inversing the laser injection directions or by displacing the beginnings of the etched sections, multiple cladding mode strippers bundled together into a single housing can have their hottest regions separated and can operate at full power simultaneously, with no evident cross-influence on each other. Two- and three-cladding-mode-stripper arrays have been implemented, and validation tests have been conducted with ~500 W of cladding power being injected into each input port. For both arrayed devices, compared to the scenario in which only a single cladding mode stripper is working, no more than a 2.1 °C temperature increment is generated when all components operate concurrently, which demonstrates the effectiveness of the integration method. In this way, space/weight reductions of one half and two thirds can be realized, respectively, for the two- and three-cladding-mode-stripper arrays, which is meaningful, since cladding mode strippers are among the bulkiest and hottest components in fiber lasers. Moreover, this integration provides a valuable reference for the miniaturization of other components and thus could contribute to the development of fiber lasers with higher power-to-volume ratios, which would be more economical for industrial applications.
Introduction
Fiber lasers have been developed and utilized in the material processing industry in recent years because of their high wall-plug efficiency, easy maintenance, small footprint, and the merits of fiber beam delivery [1-5]. For commercial industrial fiber lasers, more compact size and lighter weight are important development directions, as they can save valuable space and reduce operational costs [6-9]. Therefore, it is important for laser architectures, including the constituent fiber material and fiber components, to be small and tightly arranged. Optical fiber itself is quite flexible and can be coiled into a compact spool; fiber components, on the other hand, usually occupy a considerable amount of space for mechanical protection and thermal dissipation purposes, especially in high-power applications. Therefore, the integration and miniaturization of these devices play an important role in the development of new fiber lasers that better suit the demands of industrial applications.
The cladding mode stripper (CMS) [10][11][12][13][14][15][16][17][18][19][20] is an indispensable component in fiber laser systems, as it removes the cladding light in the fiber, i.e., mainly pump light residue and signal light leakage. In order to clean the cladding power as much as possible and to avoid severe thermal issues caused by abrupt light leakage, high power CMSs are at least many centimeters [18][19][20] or even several meters in length [16]. Additionally, due to the great amount of optical radiation, high power CMSs generally require water-cooled housing to block the light and to dissipate the optical-induced heat load [13][14][15][16][17]19]. Therefore, the CMS device including its housing is indeed one of the bulkiest components in fiber laser systems; this puts a limitation on laser footprint reduction. The typical architecture of master oscillator power amplifiers (MOPAs) contains several CMSs, e.g., one between the oscillator/amplifier stages and one before the final output. The total volume of these CMS devices is the volume of a single CMS multiplied by the number of CMSs used; the area required becomes even larger when the cold plate and engine cabinet are included, and the power-to-volume ratio of such lasers can be unsatisfactory.
In this work, a design to integrate multiple CMSs into a single device is proposed. The integrated CMS has N input ports and N output ports; thus, it is designated an N-CMS array. Using the housing of a home-made 500-W CMS, of which the length, width and height are 20 cm, 2 cm and 2 cm, respectively, two-CMS and three-CMS integrations have been implemented, and up to 1.5-kW full-power operation has been demonstrated, saving one half and two thirds, respectively, in both space and weight. With this design, the total volume of the CMSs in a fiber laser could be reduced to that of a single CMS.
Design and Development
Individual CMSs have been fabricated by applying a chemical etchant to the cladding surface of double-cladding fibers (DCF). According to previous investigations [19], in spite of the rather long length (i.e., 20 cm) applied to thoroughly deplete the cladding light, light leakage is actually not uniformly distributed along the whole etched length, but rather is concentrated in the first 2 cm, as is the heat load. Based on these findings, it is reasonable to expect that, with the hottest regions staggered, multiple CMSs could be integrated into a single device without causing significant heat accumulation. To prove this idea, a two-CMS integrated device (a two-CMS array) has been made, for which schematic drawings are shown in Figure 1a, with the red arrows indicating the propagation directions of the cladding laser within each fiber. The fabrication details are as follows. The CMS fibers were 25/400 µm DCF, and lengths of 18 cm were coating-stripped. Taking the coating edge closer to the laser injection end as the origin position, both fibers were etched from the 4th cm to the 17th cm of the coating-free part. After that, the fibers were bundled together and inserted through a 20-cm long, 1.2/3.0-mm inner/outer-diameter fused-silica tube, with the respective input and output fiber ends placed in opposite configurations. In this way, the hottest regions of the two CMSs should be around 8 cm from each other in principle, as Figure 1b indicates. The two-CMS array can be applied to a MOPA system consisting of one amplifier stage. For more general MOPA laser types, and narrow-linewidth fiber lasers in particular, multiple amplifier stages are employed; therefore, more than two CMSs have to be used. In order to increase the number of CMSs integrated in a single device, apart from inversing the laser injection directions, it is also proposed to offset the beginnings of the etched sections, as shown in the sketch of a three-CMS array in Figure 2a. To make the three-CMS integrated device, 25/250 µm DCF were used, and the lengths of the stripped windows were 18 cm. Two fibers were etched from 2.5 to 17 cm, the third one was etched from 7.5 to 17 cm, and the three CMS fibers were bundled with the bare parts sealed inside a 20-cm long fused-silica tube. It is noteworthy that the first and third CMS fibers were placed so as to have the same laser direction, while the second CMS fiber was positioned in the opposite way. Therefore, the hottest regions of the three CMSs should be separated from each other by a distance of about 5.5 cm, as Figure 2b shows.
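The staggering logic can be summarized in a few lines of code. The helper below is a hypothetical illustration (not the authors' design tool): it ignores the margins between the fiber coating edges and the tube ends, so the computed separations are only indicative of the design intent:

```python
# Toy layout check of the staggering idea: given each CMS fiber's etch-start
# offset (from its own injection-side coating edge) and its orientation in
# the shared tube, estimate where each hottest region (the first few cm of
# etching, where most light leaks) sits along the housing.
from itertools import combinations

def hot_spots(tube_len_cm, cms_list):
    """cms_list holds (etch_start_cm, forward) pairs; forward=True means the
    laser enters at the tube's z=0 end, False means the opposite end."""
    return [s if fwd else tube_len_cm - s for s, fwd in cms_list]

# Three-CMS array: two fibers etched from 2.5 cm (one of them reversed in
# direction), the third etched from 7.5 cm.
spots = hot_spots(20, [(2.5, True), (2.5, False), (7.5, True)])
print(spots)
print(min(abs(a - b) for a, b in combinations(spots, 2)))  # minimum spacing
```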
Experimental Results and Discussion
In order to work safely at high power, the CMS arrays were further packaged in water-cooled aluminum housings, of which the length × width × height were 20 cm × 2 cm × 2 cm. The silica tubes were fully enclosed in the housings so that the stripped light would be completely blocked and absorbed. Then, the performance of the arrayed CMS devices was investigated by building a measurement system, as shown in Figure 3. The input port of each CMS was injected with about 500 W of cladding power, provided by one set (four pieces) of 915-nm laser diodes (LDs). For the two-CMS array, two cascaded 7 × 1 fiber combiners were used to connect the LDs and the CMS, while for the three-CMS array, only the first 7 × 1 fiber combiner was employed. The terminating fibers of the two combiners were 220/242 µm multimode fiber and 25/400 µm GDF, with numerical apertures (NA) of 0.22 and 0.46, respectively. In this way, the fiber splices between the combined LDs and the CMS devices were symmetric and easy to implement. It should be pointed out that three sets of LDs were adopted in the experiments: one with 95% of the power confined within an NA of 0.16 (type I), and the other two with 95% of the power confined within an NA of 0.14 (type II). The two-CMS array was tested using one set of type I LDs and one set of type II LDs, while the three-CMS array was tested using one set of type I LDs and two sets of type II LDs. It is also worth noting that, during the experiments, the flux and temperature of the water coolant provided to the CMS housing were about 9.4 L/min and 22.5 °C, respectively. The experimental results of the thermal image and temperature behavior of the two-CMS array are summarized in Figure 4. As shown, the hottest regions of the integrated CMSs were located at different positions, about 8 cm from one another, which is consistent with the intended values. As such, barely any cross thermal influence was observed. This is further evidenced by the temperature-power curves: the solid lines correspond to the case when only one CMS was operating, while the marked lines are the maximum temperatures at the respective hottest regions when both CMSs were working. It is clear that the solid lines and the marked lines coincide well. For instance, the maximum temperature on the housing was 51.1 °C when CMS#1 was receiving ~500 W of power and 38.0 °C when CMS#2 was receiving ~500 W of power. When the two-CMS array was fully operational (CMS#1 and CMS#2 simultaneously receiving ~500 W of power and the housing dissipating ~1000 W), the maximum temperatures at the hottest regions were 51.7 °C and 38.0 °C, showing little variation compared to when a single CMS was active. We may therefore conclude that integrating two CMSs using the proposed method does not significantly increase the heat density, allowing both CMSs to operate safely while providing the advantage of a 50% reduction in both volume and weight. Here, it is worth noting that the temperature difference between CMS#1 and CMS#2 was due to the difference in LD brightness; the brighter the LD, the cooler the CMS. Apart from the thermal behavior, the power attenuation was also measured; the values were 99.79% and 99.60% for CMS#1 and CMS#2, respectively.
As shown in Figure 5, the three-CMS array behaved similarly. The hottest regions of the three CMSs were separated by ~6 cm, with a low degree of influence on each other. In the case of the three CMSs working separately, the maximum temperatures on the housing were 62.0 °C, 43.7 °C, and 44.7 °C, respectively, at ~500 W of stripped power. Meanwhile, with the three CMSs operating together (i.e., with ~1500 W of total power being dissipated in the housing), the maximum temperatures at the hottest regions were 62.5 °C, 43.8 °C, and 46.8 °C. On this basis, it was observed that CMS#1 and CMS#2 behaved in almost the same way, while CMS#3 (the one located in the center) exhibited a 2.1 °C temperature increment due to double heat accumulation from the two adjacent CMSs. Again, this experiment validated the effectiveness of integrating three CMSs using the proposed method: the heat density barely increased, so that all three CMSs could operate safely with the volume reduced to 1/3 of the original value. The temperature of CMS#1 was evidently higher than those of the other two because LDs of inferior NA were used. The power attenuations of CMS#1, CMS#2, and CMS#3 were 99.18%, 99.23%, and 99.07%, respectively. CMS#3 exhibited lower stripping efficiency because its etched section was 5 cm shorter. By comparing Figures 4 and 5, one can also see that the three-CMS array exhibited higher temperatures than the two-CMS array; this is because the fiber used in the three-CMS array had a smaller cladding diameter (250 µm versus 400 µm). With the same amount of cladding power, a fiber with a smaller diameter has a higher power density on the cladding surface, which, in turn, generates a greater stripping rate and thermal density at the beginning of the etched section. Additionally, it was clear that the attenuations of the three-CMS array were lower than those of the two-CMS array; the main reason lies in the difference in NA of the combiner delivery fibers. It is known that lower-NA light is less sensitive to the CMS stripping structure [10]; therefore, the three-CMS array performed less well. Furthermore, it can be observed from the thermal image in Figure 5 that a section of the housing did not become bright, which means that more CMSs could be added. For instance, two more CMSs could be integrated, with one radiating in the interval between hot regions #1 and #3 and the other heating the interval between regions #2 and #3. This implementation would be very useful for narrow-linewidth fiber lasers, since their architecture typically contains three or more amplifier stages, so that at least five CMSs are needed.
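The diameter argument can be made quantitative with a back-of-the-envelope estimate: for the same injected power, the surface power density over the first, most intense stripping length scales inversely with the cladding diameter. The numbers below are illustrative:

```python
# Rough estimate of the cladding-surface power density for the two cladding
# diameters, assuming the ~500 W of injected power leaks mostly over the
# first ~2 cm of the etched section (a simplifying assumption).
import math

P = 500.0                    # injected cladding power per CMS [W]
L = 0.02                     # first ~2 cm of etching, where leakage peaks [m]
for d_um in (400.0, 250.0):
    d = d_um * 1e-6          # cladding diameter [m]
    density = P / (math.pi * d * L)      # surface power density [W/m^2]
    print(f"{d_um:.0f} um cladding: ~{density / 1e4:.0f} W/cm^2")
```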
Conclusions
A method to integrate multiple CMSs in a single device is proposed. By inversing and/or displacing the etched sections of the CMSs, the hottest regions can be staggered, and the heat density is almost the same as when only one CMS is active. Using this design, two- and three-CMS arrays have been fabricated, respectively demonstrating full-power stripping abilities of ~1000 W and ~1500 W. Both the size and weight have been effectively reduced compared to traditional designs, and no evident heat accumulation or heat cross-influence was observed. With such a design, the MOPA structure of fiber lasers can be realized in a more compact way, since it has been shown that the bulkiest components can be organized to form a single unit. Further, by offsetting more CMSs, N-CMS arrays with larger N can also be implemented. This report provides a valuable reference for fiber laser engineering and industrial laser upgrades. | 3,203.2 | 2022-12-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
The Optimization of the Location and Capacity of Reactive Power Generation Units, Using a Hybrid Genetic Algorithm Incorporated by the Bus Impedance Power-Flow Calculation Method
Dynamic and static reactive power resources have become an important means of maintaining the stability and reliability of power system networks. For example, if reactive power is not appropriately compensated for in transmission and distribution systems, the receiving-end voltage may fall dramatically, or the load voltage may increase to a level that trips protection devices. However, none of the previous optimal power-flow studies for reactive power generation (RPG) units have optimized the location and capacity of RPG units using the bus impedance matrix power-flow calculation method. Thus, this study proposes a genetic algorithm that optimizes the location and capacity of RPG units, implemented in MATLAB. In addition, this study enhances the algorithm by incorporating the bus impedance power-flow calculation method into it. The proposed hybrid algorithm is shown to be valid when applied to well-known IEEE test systems.
Introduction
Reactive power plays an important role in maintaining the stability and reliability of transmission and distribution power systems. As a consequence, various dynamic (synchronous generators, synchronous condensers, and solid-state devices) and static reactive power sources (capacitive and inductive compensators, as well as inverter-based distributed generators) have been deployed over the past few decades [1]. In particular, reactive power generation (RPG) units have been deployed at the optimal location for voltage control in transmission and distribution systems. If reactive power is not appropriately compensated, receiving ends may experience voltage variations outside ±5% of the rated voltage, possibly leading to automatic tripping of protection devices and low power factors.
As renewable energy deployments increase, the optimal allocation of RPG units should take photovoltaic (PV) systems [23], wind turbine generators (WTGs) [24,25], and microgrids [26] into account; for example, the effect of optimally allocated distributed generation (DG) units on energy losses has been studied. Since the case studies included buses, lines, loads, generators, shunt capacitors, tap-changing transformers, and P-Q, P-V, and slack buses, the results show that the proposed power-flow calculation method can be used to analyze many different power system configurations. This study integrates the bus impedance power-flow calculation into a GA. As a result, the proposed hybrid GA can also be used for operating, planning, or upgrading transmission systems by optimally allocating RPG units. In particular, PV systems and WTGs able to control reactive power, or with Volt/Var control management capability, can be optimally allocated by the proposed hybrid GA.
Structure of This Paper
This paper is organized as follows: Section 2 discusses the bus impedance power flow method, Section 3 contains the proposed GA, Section 4 presents case studies, and Section 5 summarizes the paper's major conclusions.
Bus Impedance Power Flow Method
The Newton-Raphson and fast-decoupled power-flow calculation methods that use the admittance matrix require the inverse of the Jacobian matrix. Computing this inverse can take a long time for a system with thousands of nodes or more. Thus, this study proposes a power-flow calculation method that does not require the inverse of the Jacobian matrix, which is the main benefit of the proposed method. The proposed power-flow method evaluates the fitness of the population members of the GA when optimally allocating RPG units; the detailed implementation of the GA is presented in the next section.
Bus Impedance Matrix
Figure 1 shows a power system network with n nodes. The n × n impedance matrix (Z_bus) can represent the system through driving-point impedances (diagonal elements) and transfer impedances (off-diagonal elements). This study uses the well-known four rules that build the Z_bus matrix [39].
Iterative Current Injection Method
The iterative bus impedance power-flow calculation method was originally presented in [40]. The Z_bus matrix power-flow calculation method uses the matrix form of Ohm's law, which relates the current flowing into each node and the voltage induced at each node (Equation (1), $V = Z_{bus} I$). In Figure 2, the currents drawn by constant power loads are calculated by $I_i = (S_i/V_i)^*$; the currents drawn by constant current loads keep their rated magnitude; and the currents drawn by constant impedance loads are calculated by $I_i = V_i Y_i$. The currents that flow to the ground through parallel elements are calculated in the same way from the shunt admittances, and the currents flowing to loads and to the ground are added to give the total injected current at each node (Equation (6)). These currents are used iteratively in Equation (1). However, the currents in Equation (6) are initially estimated from the nominal voltage; that is, they do not take into account the voltage variation caused by loads, generators, shunt capacitors, and transformers. Therefore, Equations (1) and (6) are repeated until convergence, i.e., until the largest voltage change between consecutive iterations falls below a tolerance (Equation (7)).
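A minimal sketch of this iteration, restricted to constant-power loads, is given below; the function layout and sign conventions are assumptions for illustration, not the authors' MATLAB implementation:

```python
# Iterative current-injection power flow on the bus impedance matrix:
# voltages follow from V = Zbus @ I, and the injected currents are
# recomputed from the load model at the latest voltages until the
# voltage change falls below a tolerance.
import numpy as np

def zbus_power_flow(Zbus, S_load, V_slack=1.0 + 0j, tol=1e-8, max_iter=100):
    """Zbus: n x n bus impedance matrix (complex, slack as reference);
    S_load: n-vector of constant-power loads (complex VA, positive = consumption)."""
    n = len(S_load)
    V = np.full(n, V_slack, dtype=complex)      # start from nominal voltage
    for _ in range(max_iter):
        I = -np.conj(S_load / V)                # constant-power load currents
        V_new = V_slack + Zbus @ I              # matrix form of Ohm's law
        if np.max(np.abs(V_new - V)) < tol:     # convergence check
            return V_new
        V = V_new
    raise RuntimeError("power flow did not converge")
```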
Tap-Changing Transformer Model
The bus impedance power-flow calculation method, originally presented in [40], is usually not used in power-flow studies, or it is used to analyze systems without transformers, because the method cannot handle tap-changing transformers. However, most transmission and distribution systems include tap-changing transformers to change the secondary-side voltage. Therefore, such transformers should not be ignored in a power-flow calculation algorithm.
If a tap-changer with a turns ratio of a is on the low-voltage (or secondary) side in Figure 3, the admittance matrix, Y_bus, takes the standard two-port form

$$Y_{bus} = \begin{bmatrix} Y + Y_{m0} & -Y/a \\ -Y/a & Y/a^2 + Y_{n0} \end{bmatrix}$$

where Y is the series admittance of the transformer and Y_m0 and Y_n0 are the line capacitances beside the transformer. To model the tap-changing transformer depicted in Figure 3 in the proposed method, this study proposes decomposing the transformer model into two parts: the series and parallel elements in Figure 4.
The proposed method builds the Z_bus matrix of the series elements in Figure 4; the matrix also includes the lines without transformers. The final Z_bus matrix is used in Equation (1) during iterations. Additionally, the proposed method builds the Y_bus matrix of the parallel elements in Figure 4, calculates I_m0 and I_n0 by Equation (5), and adds them to Equation (6). As a result of this compensation (I_m0 and I_n0), the proposed method solves the problem of applying the conventional bus impedance matrix power-flow calculation method to tap-changing transformers.
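As an illustration, the sketch below builds the two-port admittance block of a tapped transformer and its equivalent series-plus-shunt decomposition; the tap convention (tap a on the secondary side) is an assumption that must be matched to Figure 3:

```python
# Two-port admittance of an off-nominal transformer and its split into one
# series branch plus two shunt (parallel) elements. The decomposition is
# algebraically consistent with the 2x2 matrix it returns.
import numpy as np

def transformer_pi(Y, a, Ym0=0j, Yn0=0j):
    """Y: series admittance of the transformer; a: turns ratio;
    Ym0, Yn0: line capacitances beside the transformer."""
    # Two-port admittance matrix of the tapped transformer:
    Ybus = np.array([[Y + Ym0,       -Y / a],
                     [-Y / a, Y / a**2 + Yn0]], dtype=complex)
    # Equivalent decomposition: series element plus two parallel elements.
    y_series = Y / a
    y_shunt_m = Y * (1 - 1 / a) + Ym0          # shunt at the primary bus
    y_shunt_n = (Y / a) * (1 / a - 1) + Yn0    # shunt at the secondary bus
    return Ybus, y_series, y_shunt_m, y_shunt_n
```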
Genetic Algorithm
Optimal allocation of the capacity and location of RPG units can be treated as an optimization problem. Thus, this study presents a GA that includes the proposed power-flow calculation method. The GA finds one or more RPG units and their capacities to minimize the following objective function.
Objective Function
The proposed GA defines, as its objective function, the minimization of the weighted sum of three terms: the deviation of bus voltages from a set value (e.g., 1.0 p.u.), the installation cost of RPG units, and total losses, subject to constraints on bus voltage magnitudes and on active and reactive power (V_min ≤ V_i ≤ V_max, P_min ≤ P_i ≤ P_max, Q_min ≤ Q_i ≤ Q_max).
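A hedged sketch of the corresponding fitness evaluation is shown below; the weights, the cost model, and the power-flow routine (run_power_flow) are placeholders, not the exact expressions of the paper:

```python
# Fitness of one offspring member: weighted sum of voltage deviation, RPG
# installation cost, and losses, accumulated over an hourly load profile.
import numpy as np

def fitness(offspring, run_power_flow, cost_per_mva, weights=(1/3, 1/3, 1/3)):
    """offspring: list of (bus, capacity_mva) RPG placements;
    run_power_flow(offspring, hour) -> (bus_voltages, total_loss_mw)."""
    w1, w2, w3 = weights
    volt_dev, losses = 0.0, 0.0
    for hour in range(24):                            # hourly load profile
        V, loss = run_power_flow(offspring, hour)
        volt_dev += np.sum(np.abs(np.abs(V) - 1.0))   # deviation from 1.0 p.u.
        losses += loss
    install_cost = cost_per_mva * sum(cap for _, cap in offspring)
    return w1 * volt_dev + w2 * install_cost + w3 * losses
```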
Optimization Variables
To optimally allocate the capacity and location of RPG units, the following optimization variables are defined.
(1) Capacity: the capacity of an RPG unit is optimally determined subject to a maximum-capacity constraint (0 ≤ S_i ≤ S_max).
(2) Location: RPG units can be connected to any bus, except the slack bus.
(3) Demand: the optimization of RPG units should take continuously varying demand into account during the optimization period. Thus, this study collected the typical load profile data in Figure 5 from [41]. The data show a peak demand of 1.0 p.u. at 15:00 and a load factor of 0.68, and are used as input for the GA.
Genetic Algorithm
(1) Initialization: the GA initializes the offspring members of the first generation with uniform random numbers. An offspring member, O, encodes the locations and capacities of the RPG units, where the candidate locations are A = { a_i | a_i is a bus, excluding the slack bus } and S_max represents the maximum capacity. (2) Fitness and reproduction: the objective function (14) calculates a fitness score for each offspring member. A normalized geometric ranking selection scheme is used [42]; a lower geometric rank (R_i) means a lower objective function value. Each slot size of a scaled roulette wheel is calculated by

$$P_i = \frac{p\,(1-p)^{R_i - 1}}{1 - (1-p)^{N}}$$

where p is the probability of selecting the fittest offspring member, N is the population size, and P_i is the slot size (selection probability) on the scaled roulette wheel.
Subsequently, the GA distributes random numbers to the scaled roulette wheel's slots, according to slot size (probability), and reproduces offspring members according to the number of random numbers that belong to each slot. This means that offspring members with better fitness (lower objective function values) have a higher selection probability.
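The following sketch illustrates this selection scheme; the normalization constant follows the usual form of normalized geometric ranking and is an assumption about the exact expression used here:

```python
# Normalized geometric ranking selection with roulette reproduction.
import numpy as np

def reproduce(population, scores, p=0.08, rng=np.random.default_rng()):
    """Lower score = better. p: probability of selecting the fittest member."""
    order = np.argsort(np.argsort(scores))          # rank 0 for the best score
    q = p / (1 - (1 - p) ** len(population))        # normalization constant
    P = q * (1 - p) ** order                        # slot size per member
    idx = rng.choice(len(population), size=len(population), p=P / P.sum())
    return [population[i] for i in idx]
```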
(3) Crossover and mutation: an arithmetic crossover operation that combines two offspring members (O_i and O_j) is performed as in Figure 6, producing new offspring members O_i' and O_j'. To avoid convergence to a local minimum, a new offspring member, O_k', is also generated by single-position uniform mutation, as in Figure 7.
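A compact sketch of the two variation operators follows; the real-valued gene encoding and the operator details are assumptions for illustration:

```python
# Arithmetic crossover blends two parents; single-position uniform mutation
# resets one gene to a uniform random value in [0, S_max].
import numpy as np

def arithmetic_crossover(Oi, Oj, rng=np.random.default_rng()):
    lam = rng.uniform()                       # blending coefficient
    Oi_new = lam * Oi + (1 - lam) * Oj
    Oj_new = (1 - lam) * Oi + lam * Oj
    return Oi_new, Oj_new

def uniform_mutation(Ok, S_max, rng=np.random.default_rng()):
    Ok_new = Ok.copy()                        # Ok: real-valued gene vector
    pos = rng.integers(len(Ok))               # single mutated position
    Ok_new[pos] = rng.uniform(0, S_max)       # uniform random replacement
    return Ok_new
```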
Figure 8 shows the detailed workflow of the proposed algorithms. The proposed GA calculates the probability of each slot of the scaled roulette wheel, generates random numbers on the wheel, counts the number of random numbers that belong to each slot, and reproduces offspring members. The detailed parameters of the proposed GA are presented in the following case studies.
(Figure 8: workflow flowchart of the proposed hybrid GA. Build the Z_bus matrix of the test system; read the data related to the objective function (weighting factors, RPG unit installation costs, and worst-case costs), the constraints (V_min, V_max, P_min, P_max, Q_min, and Q_max), the GA parameters (p, P_c, P_m, and the numbers of population members and generations), and the load profile data; initialize the first-generation offspring members with uniform random numbers encoding the location and capacity of RPG units; calculate the power flow of the system with each offspring member over the total simulation period (e.g., one day in hourly intervals); calculate the fitness of the offspring members; reproduce and cross over the offspring members; repeat until all offspring members converge to a single offspring member.)
Validation of Power-Flow Calculation Method
To verify the proposed power-flow calculation algorithm, this case study calculates the power flow of the IEEE 14-bus system in Figure 9 [43,44]. The system includes 14 buses, 1 slack bus (1.0∠0°), 4 P-V buses (with a voltage magnitude of 1.0 p.u.), 11 loads (in total, P + jQ = 259 + j73.5 MVA), 1 shunt capacitor (j0.19 p.u.), 1 generator (40 MW), and 3 tap-changing transformers (buses 4-7, 4-9, and 5-6). The other detailed system data are available in [43,44]. The results produced by the proposed bus impedance, Newton-Raphson, Gauss-Seidel, and decoupled power-flow calculation methods implemented in MATLAB show consistency with one another; the detailed voltage profile data are presented in Appendix A. For example, a tap-changing transformer between buses 4 and 7 (with a transformer turns ratio of 0.978) modeled by the proposed power-flow method increases the primary-side voltage of 0.95433∠−11.645° p.u. to the secondary-side voltage of 0.98343∠−15.117° p.u. Figure 10 presents a convergence curve of the proposed method. Figure 11 shows the IEEE 30-bus test system [44,45]. The system includes a slack bus, five P-V buses, two shunt capacitors, twenty-one loads, and four tap-changing transformers; the detailed system data are available in [44,45]. Tables A1 and A2, presented in Appendix A, compare the power-flow calculation results determined by the proposed method to those produced by the Newton-Raphson, Gauss-Seidel, and decoupled power-flow calculation methods implemented in MATLAB; the results show good consistency with each other. As the third validation, Figure 12 compares the power-flow calculation results of the IEEE 57-bus test system, determined by the proposed method, to those produced by the Newton-Raphson and Gauss-Seidel methods. The detailed system data are available in [44,46]. The results are consistent with each other.
Since the test systems contain typical transmission system elements (i.e., loads, slack, P-V, and P-Q buses, shunt capacitors, generators, and tap-changing transformers), the proposed power-flow method can be integrated into the proposed GA.
Validation of the Genetic Algorithm
To find the optimal location and capacity of RPG units, this case study was run with the following assumptions: (1) the maximum capacity of an RPG unit is 100% of the base MVA of the system (i.e., 100 MVA); (2) RPG units can be connected to all buses except the slack bus; (3) the weighting factors in the objective function are equal; (4) the nominal voltage of the slack and P-V buses is set to 1∠0° p.u. Table 1 presents the parameters of the GA, which were determined by trial-and-error optimization [34,47-49]. To validate the proposed GA implemented in MATLAB, this case study was configured to optimally allocate RPG units in the IEEE 14- and 30-bus systems [43-46].
IEEE 30-Bus System
As the first case study, this study optimally allocates RPG units in the IEEE 30-bus system in Figure 11.
Figure 13 depicts the standard deviation of the objective function for offspring members over multiple generations. Since the variation converges to zero, the proposed GA determines the fittest single offspring member (i.e., a solution to the optimization problem). Figures 14 and 15 examine the effect of the optimally allocated RPG units in the IEEE 30-bus system on voltage magnitude (Figure 14) and losses (Figure 15). The RPG units optimally allocated by the proposed hybrid GA provide less variation in voltage than the non-optimized case, and reduce losses.
IEEE 14-Bus System
This study optimizes the capacity and location of RPG units in the IEEE 14-bus system in Figure 9. For the IEEE 14-bus system, the proposed GA optimally allocates six RPG units, with a total capacity of 90 MVA (53 MVA in bus 3, 11 MVA in bus 7, 8 MVA in bus 10, 2 MVA in bus 12, 7 MVA in bus 13, and 9 MVA in bus 14), in order to minimize the variation in voltage, the installation cost of RPG units, and the losses. Figure 16 depicts the objective function's standard deviation for offspring members over multiple generations, where the proposed GA finds a solution with an objective function value of 0.07735. Since the variation converges to zero, the proposed GA determines the fittest single offspring member (i.e., a solution to the optimization problem). Figures 17 and 18 present the effect of the optimally allocated RPG units in the test system on voltage magnitude and losses. The RPG units optimally allocated by the proposed hybrid GA provide less variation in voltage than the non-optimized case, and reduce losses.
Conclusions
The objective of this study was to propose a hybrid algorithm that can model tap-changing transformers and optimize the location and capacity of RPG units for systems having these transformers. To achieve this objective, the study proposed a hybrid GA that incorporates bus impedance power-flow calculation. The proposed hybrid algorithm successfully calculated power flow in the well-known IEEE test systems (i.e., IEEE 14-, 30-, and 57-bus systems), and optimized the location and capacity of RPG units in the IEEE 14-and 30-bus systems.
Since the IEEE test systems include various power system elements (e.g., loads, slack, P-V, P-Q buses, shunt capacitors, generators, and tap-changing transformers), the proposed power-flow method can calculate the power flow of a variety of system configurations. The proposed hybrid algorithm can also be used for operating, planning, or upgrading transmission systems by optimally adding RPG units. PV and WTGs able to control reactive power can be optimally allocated by the proposed hybrid GA. However, the proposed algorithm is based on per-phase analysis, because transmission systems are usually assumed to be balanced. The algorithm could be extended to three-phase systems in future work.
Nomenclature
Q^(k)_RPG,min,i and Q^(k)_RPG,max,i: minimum and maximum outputs of reactive power generator i at iteration k
p: probability that produces the fittest offspring
P_i: probability of the slot size of the scaled roulette wheel
R: the number of reactive power generators
R_i: geometric rank of offspring member i from 1 to M
S_i: nameplate capacity of reactive power generation unit i
S_min,i: minimum nameplate capacity of reactive power generation unit i
S_max,i: maximum nameplate capacity of reactive power generation unit i
S_Loss,i,h: losses of transmission line (or branch) i at period h
S_load: complex power of loads connected to each node: P + jQ = |S|∠δ_s
T: the number of tap-changing transformers
Tap_i: tap position of transformer i
Tap_min and Tap_max: minimum and maximum tap positions
V^(k): voltages induced in each node at iteration k
V^(k)_i,h: voltage (magnitude) of bus i at period h and iteration k
V_m and V_n: voltages of buses m and n
V_nom: magnitude of the nominal (or rated) voltage
V_set: set voltage magnitude of a reactive power generation unit
x_i: location to which a reactive power generation unit can be connected
W_Loss, W_RPG, and W_V: weighting factors for losses, reactive power generator installation cost, and voltage variation, respectively
Y_bus: bus admittance matrix
Y_eq: series admittance of a tap-changing transformer
Y_ex: excitation admittance of a tap-changing transformer
y_i: capacity of a reactive power generation unit
Y_m0 and Y_n0: admittances of buses m and n connected to the ground
Y_parallel: admittance matrix of parallel elements connected to the ground
Z_bus: bus impedance matrix
Appendix A
Table A1 compares power-flow calculation results determined by the proposed Z_bus method to those produced by the Newton-Raphson, Gauss-Seidel, and decoupled power-flow calculation methods implemented in MATLAB. | 7,533.4 | 2020-02-04T00:00:00.000 | [
"Engineering"
] |
An Algorithm for Network-Based Gene Prioritization That Encodes Knowledge Both in Nodes and in Links
Background Candidate gene prioritization aims to identify promising new genes associated with a disease or a biological process from a larger set of candidate genes. In recent years, network-based methods – which utilize a knowledge network derived from biological knowledge – have been utilized for gene prioritization. Biological knowledge can be encoded either through the network's links or nodes. Current network-based methods can only encode knowledge through links. This paper describes a new network-based method that can encode knowledge in links as well as in nodes. Results We developed a new network inference algorithm called the Knowledge Network Gene Prioritization (KNGP) algorithm which can incorporate both link and node knowledge. The performance of the KNGP algorithm was evaluated on both synthetic networks and on networks incorporating biological knowledge. The results showed that the combination of link knowledge and node knowledge provided a significant benefit across 19 experimental diseases over using link knowledge alone or node knowledge alone. Conclusions The KNGP algorithm provides an advance over current network-based algorithms, because the algorithm can encode both link and node knowledge. We hope the algorithm will aid researchers with gene prioritization.
Introduction
Understanding the genetic and biological mechanisms of diseases is an ongoing challenge. Common diseases such as rheumatoid arthritis and breast cancer that occur relatively frequently in the population are likely to have complex and multifactorial underlying mechanisms. Moreover, common diseases likely arise from both genetic and environmental factors as well as from interactions among such factors. In recent years, several high-throughput techniques that survey a large number of genes have been developed for elucidating the genetic factors of common diseases. Such techniques include gene expression profiling, genotyping of single nucleotide polymorphisms, and whole genome sequencing to name just a few. One challenge with such techniques is that they typically produce hundreds of candidate genes associated with the disease of interest. To address this challenge, computational approaches have been developed for prioritizing candidate genes to reduce the number of promising genes that need to be examined in detail by the biomedical researcher.
Candidate gene prioritization
Candidate gene prioritization is the process of identifying and ranking new genes as potential candidates of being associated with a disease or phenotype. Most candidate gene prioritization methods rely on a set of genes that are already known to be associated with the disease to rank the other genes. Genes that rank higher are more likely to be associated with the disease and more worthy of further biological investigation compared to those genes that rank lower. Developing excellent methods for candidate gene prioritization is important, because such methods can save biomedical researchers a significant amount of time, effort and resources by allowing them to focus on a relatively small set of promising genes to be studied in depth. Thus, candidate gene prioritization has enormous potential for accelerating progress in translational bioinformatics and in the development of new therapies.
The gene prioritization methods described in the literature can be broadly classified into two groups: similarity-based and network-based methods. Similarity-based methods attempt to identify those candidate genes whose features are most similar to genes that are already known to be associated with a particular disease. Examples of such features include expression patterns [1,2], sequence features [3] and functional annotations [4]. More recently, network-based approaches have been developed and applied to candidate gene prioritization. In the next section, we describe in greater detail network-based methods, since the algorithm that we describe and evaluate in this paper is an example of a network-based method.
Network-based methods
In the network-based approach to gene prioritization [5,6,7,8,9,10,11,12], biological knowledge about genes is represented as a network. A network consists of nodes and links between pairs of nodes where nodes represent entities and links represent a variety of pair-wise relations that can exist among the entities. For example, in a protein-protein interaction network (PPIN), nodes represent proteins, and the links represent pair-wise interactions among the proteins. In a co-expression network, nodes represent genes whose expression levels are measured in a microarray experiment, and the links may represent correlations between expression levels of pairs of genes. We term a network, such as a PPIN, that incorporates knowledge as a knowledge network.
In network-based gene prioritization, an inference algorithm is applied to the knowledge network to rank genes (or proteins) relative to a root set of genes; members of the root set are genes that are known to be associated with a disease of interest. The premise underlying this approach is that genes in the network that are in close proximity to genes in the root set are more likely to be associated with the disease than those that are further away. Proximity between genes in a network can be defined and computed using a variety of inference methods, including methods that were developed for social- and Web-network analysis such as PageRank [13] and Hyperlink-Induced Topic Search (HITS) [14].
Several investigators have examined network-based methods for gene prioritization. One of the earliest applications of network-based gene prioritization was to rank each protein in the Online Predicted Human Interaction Database (OPHID) according to the protein's association with Alzheimer's disease [7]. Any gene which directly interacted with a known gene on the PPIN was considered to be a candidate gene; this is known as a "nearest neighbor" approach. Even such a simple gene prioritization approach was shown to be effective. For example, beta-catenin, which had not previously been implicated in the disease, was predicted to be associated with Alzheimer's disease. Since then, more sophisticated network algorithms have been applied. Kohler et al. [9] applied random walk and diffusion kernel network algorithms and Chen et al. [5] applied Web and social network algorithms to PPINs to prioritize candidate genes. Madi et al. developed a novel measure of node importance and used it to investigate antigen dependency networks computed from matrices of antigen-antigen correlations [15]. Furthermore, Madi et al. developed methods for identifying network components and their most informative interactions and applied them to networks of autoantibody reactivities in healthy mothers and their newborn babies [16].
Investigators have also integrated multiple knowledge sources to improve network-based gene prioritization. Frank et al. [17] constructed a classifier to predict interactions from a number of different data sources and used the classifier's output in the network. Chen et al. [18] combined different data sources including protein-protein interactions, gene expression data, and pathway data and showed that networks that used multiple data sources performed better than networks that used a single data source. A recent review provides a comprehensive overview of algorithms and tools including network-based methods used in gene prioritization [19]. Another recent review describes the application of network theory for the analysis and understanding of multi-level complex systems and discusses challenges for network-based science [20].
One limitation of current network-based inference algorithms is that they utilize link weights but not node weights. However, knowledge about entities can also be represented as node weights in a knowledge network. We conjectured that an inference algorithm that utilized both link and node weights would perform better than an algorithm that only utilized link weights. Since there are no existing network-based inference algorithms that can utilize node knowledge, we developed a new network-based method called the Knowledge Network Gene Prioritization (KNGP) algorithm that utilizes link and node knowledge. As an illustrative example, consider the small knowledge network shown in Figure 1 where a link is annotated with a number that represents the link weight and a node is annotated with a number that represents the node weight. A typical network algorithm like PageRank when applied to this network to rank nodes A, B and C with respect to node D will rank A, B and C in that order because A's link to D has a higher link weight than B's link to D and C is only indirectly connected to D through B. A network algorithm that also considers the node weights may rank the nodes as B, A and C in that order because B's combination of node and link weights may be superior to A's combination of node and link weights.
Knowledge Network Gene Prioritization (KNGP) algorithm
This section describes the KNGP algorithm in detail. KNGP creates a knowledge network from biological knowledge related to genes (or proteins). The biological knowledge is represented in two ways: 1) knowledge related to a gene is represented as a weight associated with the corresponding node (e.g., the number of gene ontology terms associated with a gene), and 2) knowledge related to a pair of genes is represented as a weight associated with the link that connects the corresponding nodes (e.g., whether the products of a pair of genes interact). For brevity, we call these node and link weights respectively. The algorithm outputs a ranking for the nodes relative to a set of genes already known to be associated with a disease of interest which is called the root node set. More specifically, the algorithm computes the posterior node importance for each gene in a set of genes called the candidate node set. The posterior node importance of a node is a measure of how likely the corresponding gene is to be associated with the disease of interest. The KNGP algorithm was motivated by the PageRank and the PageRank with Priors algorithms that are commonly used to rank nodes in a network.
PageRank and the PageRank with Priors algorithms were originally developed for networks with directed links, but have recently been applied to undirected networks. For application to an undirected network such as a PPIN, the network is converted into a directed network where an undirected link between two nodes is represented as two directed links. When PageRank is applied to an undirected network, the posterior node importance of a node is simply proportional to its degree (the number of neighboring nodes to which it is linked where the links are unweighted or the sum of the weights on the links where the links are weighted) [21,22]. However, in PageRank with Priors or in personalized PageRank, the posterior node importance is not simply proportional to its degree and is computed using an iterative algorithm [21]. Figure 2 shows the components, inputs, and output of the KNGP algorithm and Figure 3 provides the pseudocode for the algorithm. The functions of the four components are to 1) create the knowledge network, 2) compute the prior node importance, 3) search for the optimal value of the parameter f, and 4) perform inference. The inputs include link weights, node weights, the set of root nodes R and the set of candidate nodes C. The output is the posterior node importance for each candidate node. We now describe the components of the KNGP algorithm in detail.
Create the knowledge network
Two matrices are associated with the knowledge network: the link knowledge matrix and the transition probability matrix. The link knowledge matrix is an n×n matrix, where n is the number of nodes in the knowledge network, and an entry in it represents the link weight between the nodes specified by the row number and column number. The transition probability matrix is an n×n matrix and is derived from the link knowledge matrix. An entry in this matrix denotes the transition probability of going to one node (represented by the row number) from another node (represented by the column number) in the network. The transition probability of going to node v from node u is given by

p(v | u) = lw(u, v) / Σ_{t ∈ neighbors(u)} lw(u, t),   (1)

where lw(u, v) is the link weight between nodes u and v obtained from the link knowledge matrix, and neighbors(u) is the set of neighboring nodes to which node u has a weighted link. If node u has no neighbors, then p(v | u) is set to 0, and by symmetry p(u | v) is also 0. This transition probability term encodes link knowledge.
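A minimal sketch of constructing the transition probability matrix from the link knowledge matrix, assuming the undirected link weights are held in a symmetric NumPy array:

```python
import numpy as np

def transition_matrix(link_w):
    """Build Q per equation (1): entry (v, u) holds
    p(v | u) = lw(u, v) / sum_t lw(u, t), where t ranges over u's
    neighbors. Columns of isolated nodes stay all zeros, so
    p(v | u) = 0 when u has no neighbors, matching the text."""
    link_w = np.asarray(link_w, dtype=float)
    col_sums = link_w.sum(axis=0)          # total link weight per node
    q = np.zeros_like(link_w)
    nz = col_sums > 0
    q[:, nz] = link_w[:, nz] / col_sums[nz]
    return q
```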
Compute the prior node importance

The prior node importance represents how likely, a priori, a given gene is to be associated with the disease of interest. The prior node importance is defined by two vectors: the node knowledge vector and the prior probability vector. The node knowledge vector is an n-dimensional vector, where n is the number of nodes in the knowledge network, and an entry in it represents the node weight associated with the corresponding node. The prior probability vector Pr is derived by normalizing the node knowledge vector. The prior probability Pr_v of node v is defined as

Pr_v = f·w_v / Z if v ∈ R, and Pr_v = w_v / Z otherwise,   (2)

where Z = f·Σ_{u ∈ R} w_u + Σ_{u ∉ R} w_u is the normalizing constant, R is the set of root nodes, w_v is the weight associated with node v that is obtained from the node knowledge vector, and f is a parameter that takes a value between 0 and positive infinity. The parameter f scales the node weights for members of the root set compared to the non-root set. The next section describes how the optimal value of f is obtained. In summary, the prior probability term encodes both node knowledge and root node knowledge.
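A sketch of the prior computation per equation (2) as reconstructed above; the root-set scaling by f and the normalization follow the text's description:

```python
import numpy as np

def prior_vector(node_w, root_idx, f):
    """Prior probabilities per equation (2): node weights of root-set
    members are scaled by f, then the vector is normalized to sum to 1.
    With f = 0 root priors vanish; as f grows the candidate priors are
    squeezed towards 0, matching the synthetic-network discussion."""
    w = np.asarray(node_w, dtype=float).copy()
    w[list(root_idx)] *= f
    s = w.sum()
    return w / s if s > 0 else w
```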
Search for the optimal value of the parameter f

For a specific value of f, the KNGP algorithm performs inference to evaluate how highly the root nodes are ranked using leave-one-out cross-validation (described in the Methods section). Specifically, the performance associated with a value of f is measured using the area under the ROC curve (AUC).
The pseudocode for the search is given in the find_best_f procedure in Figure 3. The find_best_f procedure has three inputs: a network with link and node weights, R, which is the set of root nodes, and F, which is a set of f values defined by the user in the range 0 to positive infinity. As shown in the pseudocode, the outer loop iterates through f values in F, and the inner loop performs leave-one-out cross-validation to compute the AUC. The output of the find_best_f procedure is the optimal value of f in F, which is defined as the value that maximizes the AUC. The optimal f value depends on the relative distribution of the link and node weights between the root node and candidate node sets; hence, for a given knowledge network and disease of interest, the optimal f value can change.
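The search reduces to a one-line maximization once the leave-one-out evaluation is available as a callable; the sketch below assumes loo_auc is such a stand-in for the inner cross-validation loop:

```python
def find_best_f(f_values, loo_auc):
    """Sketch of find_best_f: loo_auc is a callable (a stand-in for the
    leave-one-out protocol of the Methods section) that maps a candidate
    f to its cross-validated AUC; the f maximizing the AUC is returned."""
    return max(f_values, key=loo_auc)
```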
Perform inference
Given a network with a transition probability matrix Q that encodes link knowledge, and a prior probability vector Pr that encodes node knowledge together with root node knowledge, inference on the network produces a posterior probability vector Po, which is an n-dimensional vector where n is the number of nodes in the network. KNGP's inference is based on a random walk model where a walker's probability of jumping from one node to another is proportional to the weight of the link that connects the two nodes. In addition, the probability of jumping from one node to another is modified by a "back probability" which determines how often the walker jumps back to the set of root nodes. The sequence of nodes visited during a random walk is represented by a Markov chain model. The relative number of visits to a node is obtained by computing the stationary probability of the Markov chain. The stationary probability distribution denotes the fraction of time that the walker spends at any one node during a random walk and is interpreted as the importance of the node relative to the other nodes in the network. The stationary probability distribution represents the posterior probability vector and is computed using the following iterative equation:

Po^(i+1) = β·Pr + (1 - β)·Q·Po^(i),   (3)

where Pr is the prior probability vector, Q is the transition probability matrix, Po is the posterior probability vector, and β is the back probability, taking a value between 0 and 1 inclusive. Po is initialized to a vector of 0s at the start of inference. At iteration i+1, Po is updated by multiplying Po at iteration i with the matrix Q and mixing in the prior with weight β. The stationary distribution is reached when the difference between the elements of Po at iteration i+1 and Po at iteration i falls below a small constant delta.
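A sketch of the fixed-point iteration of equation (3); the back-probability default of 0.3 is illustrative, since the paper does not state the value used:

```python
import numpy as np

def posterior(q, prior, beta=0.3, delta=1e-9, max_iter=100000):
    """Iterate equation (3), Po <- beta*Pr + (1 - beta)*Q @ Po, from the
    zero vector until the largest elementwise change falls below delta.
    beta = 0.3 is an assumed back probability, not the paper's value."""
    prior = np.asarray(prior, dtype=float)
    po = np.zeros_like(prior)
    for _ in range(max_iter):
        nxt = beta * prior + (1.0 - beta) * (q @ po)
        if np.max(np.abs(nxt - po)) < delta:
            return nxt
        po = nxt
    return po
```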
The posterior probability vector includes a probability for every node in the network. After the stationary posterior probability vector is obtained, the KNGP algorithm ranks the candidate nodes and outputs them along with their posterior probabilities. Often, the candidate nodes will consist of all nodes in the network that are not in the root node set.
PageRank with Priors algorithm
The network-based algorithm which is most similar to KNGP is PageRank with Priors (PRP). PRP takes as input a network and a root node set. The algorithm then computes a relative importance score for each of the remaining nodes in the network. The PRP algorithm was originally applied to assign importance to webpages on the World Wide Web in relation to a specified set of webpages [23]. In PRP, the prior probability Pr_v of node v is defined as

Pr_v = 1/|R| if v ∈ R, and Pr_v = 0 otherwise,   (4)

where |R| is the number of nodes in the root set. It is important to note that (4) does not have a term to introduce node knowledge. Rather, only root node knowledge is incorporated into the prior probabilities. Similar to the KNGP algorithm, PRP uses equation (3) to perform inference. The main difference between the two algorithms lies in the prior probabilities. Chen et al. [5] applied PRP to candidate gene prioritization and showed that network-based methods, which previously had been used to study primarily social and web networks, are also applicable to gene prioritization. As described earlier, PageRank with Priors is applied to an undirected network by converting each undirected link to two directed links.
Methods
This section provides details of the datasets and the experimental setup to evaluate the KNGP algorithm.
Synthetic networks
We created several synthetic networks with the goal of investigating how the node weights interacted with the link weights to influence the AUC at different f values in the KNGP algorithm.
The synthetic datasets were created as follows. Each dataset contained 1000 nodes, of which nodes 1 to 100 are designated as root nodes and the remaining nodes are designated as candidate nodes (or non-root nodes). To assign node weights and link weights, the 1000 nodes were partitioned into the following 5 groups:
- Group 1 consisted of root nodes 1 through 50
- Group 2 consisted of root nodes 51 through 100
- Group 3 consisted of candidate nodes 101 through 150
- Group 4 consisted of candidate nodes 151 through 200
- Group 5 consisted of candidate nodes 201 through 1000
Four datasets were generated in the following manner:
- In dataset 1, each of the 1000 nodes was assigned a random node weight between 0 and 1. Thus, root nodes and candidate nodes had similar node weights. The links among the root nodes (i.e., node groups 1 and 2) were assigned a random weight between 0.5 and 1, and the links among the candidate nodes and among the root nodes and the candidate nodes were assigned a random weight between 0 and 0.5. Thus, links among root nodes had higher weights than other links.
- In dataset 2, the root nodes (i.e., groups 1 and 2) were assigned a random node weight between 0.5 and 1, and the candidate nodes (i.e., groups 3, 4 and 5) were assigned a random node weight between 0 and 0.5. Thus, root nodes had higher node weights than all of the candidate nodes. All links were assigned a random link weight between 0 and 1. Thus, links among root nodes, links among candidate nodes and links among root nodes and candidate nodes had similar weights.
- In dataset 3, the root nodes were assigned a random weight between 0.9 and 1.0, and the candidate nodes were assigned a random weight between 0.5 and 1.0. Thus, the root nodes, on average, had higher node weights than the candidate nodes, but some of the candidate nodes could have had greater node weights. The link weights between the root nodes were assigned a value between 0.55 and 1.0, and the link weights between the candidate nodes were assigned a value between 0.5 and 1.0. Thus, the link weights between the root nodes were, on average, higher than the link weights between the candidate nodes, but some of the candidate node link weights could have been higher.
- In dataset 4, the root nodes were assigned a random node weight between 0.95 and 1.0, and the candidate nodes were assigned a random node weight between 0 and 1.0. Thus, the root nodes, on average, had higher node weights than the candidate nodes, but some of the candidate nodes could have had greater node weights. The link weights between the root nodes were assigned a value between 0.1 and 1.0, and the link weights between the candidate nodes were assigned a value between 0 and 1.0. Thus, the link weights between the root nodes were, on average, higher than the link weights between the candidate nodes, but some of the candidate node link weights could have been higher. A sketch of this construction for dataset 1 is given after this list.
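The sketch below generates node and link weights for dataset 1 as described in the first list item; a fully connected topology is assumed, since the paper does not specify one:

```python
import random

def make_dataset1(n=1000, n_root=100, seed=0):
    """Generate weights as described for dataset 1: all node weights
    uniform in [0, 1]; links among root nodes weighted in [0.5, 1];
    all remaining links in [0, 0.5]."""
    rng = random.Random(seed)
    node_w = [rng.random() for _ in range(n)]
    link_w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            lo, hi = (0.5, 1.0) if (i < n_root and j < n_root) else (0.0, 0.5)
            link_w[i][j] = link_w[j][i] = rng.uniform(lo, hi)
    return node_w, link_w
```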
The KNGP algorithm was run on each of the synthetic networks using the evaluation protocol (described in the Methods section) for a range of f parameter values that included the following: 0, 1, 15, 100, 10,000 and 10^10. At one extreme, f = 0, the prior probabilities of the root nodes became 0, and the prior probabilities of the candidate nodes were proportional to the node weights. At the other extreme, f = 10^10, the prior probabilities of the root nodes dominated the distribution, and the prior probabilities of the candidate nodes approached 0 due to normalization. Table 1 shows the link weights that were used for each dataset. Table 2 and Table 3 show the link weights that were used for each individual group and between the groups, respectively. In creating synthetic networks, we did not include complex topologies that may arise from grouping nodes into groups or modules. Such grouping may be useful in the analysis of PPINs where groups of proteins may represent metabolic pathways or functional modules.
Biological networks
We created several networks from biological knowledge. One set of networks encoded link knowledge derived from protein-protein interactions and from the Gene Ontology (GO) annotations. Another network encoded node knowledge that was derived from the GO annotations. And a final network encoded both link knowledge and node knowledge. All the networks had the same set of links, where the presence of a link indicated a protein-protein interaction. Our goal was to evaluate the additional benefit of encoding node knowledge for gene prioritization using the KNGP algorithm.
Protein-Protein Interaction+GO link weight networks. The nodes in these networks represented genes (proteins), and a link was present between two genes if there was a protein-protein interaction (PPI) between the corresponding proteins. We obtained PPIs from the Interologous Interaction Database (IID) [24,25] and the human protein-protein interaction (HPPI) database [26,27]. In total, the networks contained 126,668 interactions between 11,259 proteins. The weight for a link was obtained from the Gene Ontology (GO) and is described next.
The GO [28] is a set of controlled vocabularies which describes the functions of proteins within the cell. The GO is divided into three separate ontologies that describe molecular function, biological process, and cellular component. Given a specific GO ontology such as GO molecular function, we calculated the similarity between a pair of genes using the algorithm described in Wang et al. [29]. This algorithm measures the functional similarity of two genes based on the semantic similarities among the GO terms annotating these genes. It encodes a GO term's semantics into a numeric value by aggregating the semantic contributions of their ancestor terms in the GO graph and uses this numeric value to measure the semantic similarity of two GO terms.
We created three networks with link weights corresponding to the three GO ontologies that are labeled as the PPI+GOM (with weights derived from the GO molecular function ontology), the PPI+GOB (with weights derived from the GO biological process ontology) and PPI+GOC (with weights derived from the GO cellular component ontology).
GO node weight network. This network was obtained by augmenting the PPI network with node weights. The PPI network was constructed as described above for the link weight networks. A node's weight represented the number of GO terms associated with the corresponding gene. To obtain the node weight for a gene, we totaled the number of terms obtained from all three gene ontologies (cellular component, molecular function, and biological process).
Combined link and node weight network. This network was obtained by combining the PPI+GOC link weight network with the GO node weight network. The link weights in this network were the same as those used in the PPI+GOC network, and the node weights were the same as those used in the GO network.
Evaluation
We used a leave-one-out cross-validation scheme where each root node was "left out", in turn, from the root node set. The KNGP algorithm was then applied to the network to determine how highly the left-out root node was ranked. The higher the left-out root node was ranked, the better the performance of the KNGP algorithm.
The leave-one-out evaluation protocol is shown in Figure 4. The protocol generates a total of m×10 (where m is the size of the root node set) rank-ordered lists of 100 nodes, each containing a left-out root node embedded among 99 non-root nodes. A threshold rank (for example, the 5th rank) for such a list separates those nodes that are ranked above it from those that are ranked below it. For a given threshold rank, sensitivity is defined as the percentage of lists where the left-out node was ranked above the threshold, and specificity as the percentage of lists where the left-out node was ranked below the threshold. Varying the threshold rank produced a series of sensitivity and specificity values from which a ROC curve was constructed, and the corresponding AUC was calculated.
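The AUC computation below is one plausible reading of this protocol: for each list, the fraction of the 99 non-root nodes ranked below the left-out root equals the probability that the left-out node outranks a random non-root node, and averaging over lists yields the AUC. The paper's exact sensitivity/specificity construction may differ in detail.

```python
def loo_auc(leftout_ranks, list_len=100):
    """Rank-based AUC sketch: leftout_ranks holds the rank (1 = top) of
    each left-out root node within its list of list_len nodes. For each
    list, (list_len - r) of the other nodes sit below the left-out node;
    dividing by list_len - 1 and averaging gives the AUC."""
    n_other = list_len - 1
    return sum((list_len - r) / n_other for r in leftout_ranks) / len(leftout_ranks)
```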
We applied the KNGP algorithm to each of the synthetic networks and the biological networks using the evaluation protocol.
Results
This section provides the results that we obtained for the synthetic and biological networks.

Synthetic networks

As Table 5 shows, the optimal f value (i.e., the f value that achieved the highest AUC) depends on the degree to which the link and node weights are biased towards the root nodes versus the non-root nodes. In this context, the bias indicates how much greater the node or link weights were for the root nodes versus the non-root nodes. If the link weights were considerably more biased towards the root nodes than the non-root nodes (as in dataset 1), then the highest AUC was obtained at the largest f value. Conversely, if the node weights were considerably more biased towards the root nodes than the non-root nodes (as in dataset 2), then the highest AUC was obtained at the smallest f value. When the bias towards the root nodes was more balanced between the node weights and link weights (as in datasets 3 and 4), the highest AUC was obtained at an f value between the two extremes.
These results provide some intuition for the f parameter in the KNGP algorithm. The f parameter represents the tradeoff in importance between the link weights and the node weights in determining the ranking of the nodes. If the optimal f value is high, then it implies that the link weights dominate over the node weights in determining the ranking. Conversely, if the optimal f value is low, then it implies that the node weights dominate over the link weights in determining the ranking. These results imply that when the optimal f value occurs between the two extremes, both node and link weights are used to determine the ranking equally. Conversely, at the extremes, either the node weights or the link weights are used almost exclusively.

Biological networks

Table 6 provides the AUC values for each of the 19 diseases obtained by applying KNGP to the three link weight networks. Of the three GO link weight networks, PPI+GOC performed the best, and we used this network for creating the combined link and node weight network. Table 7 provides the AUC values for each of the 19 diseases obtained by applying KNGP to the PPI+GOC link weight network, the GO node weight network and a network that combines PPI+GOC link weights with GO node weights. The last row in the last column of Table 7 gives the p-values obtained from the two-tailed Wilcoxon paired-samples signed-rank test comparing the combined network with the link weight network and the node weight network. The combined network has significantly better performance at the 0.05 significance level than either the link weight network or the node weight network.
Application to Asthma

Table 8 gives the top 5 ranked candidate genes for asthma that were obtained by applying the KNGP algorithm to the combined PPI+GOC and GO network. The two proteins, IL9R and IL12B, that are shown in bold font in Table 8 were ranked far lower by the other two networks. We obtained evidence from the literature that both these proteins have an association with asthma. Kauppi et al. [31] genotyped several alleles from the IL9R gene and compared results between a large cohort of patients with asthma and healthy-control samples. The results were studied using linkage analysis, transmission disequilibrium, and homozygosity analyses. The authors showed that an IL9R allele, sDF2*10, was more likely to be transmitted among patients with asthma and was found homozygous among asthma patients more often than expected. Furthermore, a specific X chromosome haplotype was found to be more strongly associated with asthma. To test the hypothesis that the IL12B gene contains polymorphisms associated with asthma, Randolph et al. [32] performed a genotype analysis for polymorphisms in the IL12B gene between patients with asthma and their parents. The authors showed that one of the alleles of the IL12B gene was undertransmitted to children with asthma. Furthermore, the authors showed that a polymorphism of the IL12B gene may be significantly associated with asthma severity in whites.

Appendix S2 provides the top 10 ranked candidate proteins for each of the 19 experimental diseases obtained by applying the KNGP algorithm to the combined PPI+GOC and GO network.
Discussion
Developing effective computational methods for candidate gene prioritization is an important problem in bioinformatics. In this paper, we presented and evaluated a new network-based method called the KNGP algorithm. The advantage of the KNGP algorithm is that it can encode node knowledge in addition to link knowledge into the network-based gene prioritization process and thus represents an advance over current network-based gene prioritization algorithms. On 19 diseases, we showed that the incorporation of link and node knowledge can add a significant benefit to the network-based gene prioritization process. We applied the new network-based method that we have introduced to PPINs; however, we anticipate that it is applicable to a range of other molecular and biological networks such as gene networks, metabolic networks and neural networks. Beyond biological networks, this algorithm will likely be useful in the analysis of Web and citation networks and other social and financial networks.
A main limitation of the current paper is that the KNGP algorithm searches over only a limited number of fixed values for the f parameter. We restricted the search to a few values to decrease the running time of the algorithm. A more advanced searching algorithm may lead to more optimal performance, but our experience indicated that the difference would not be too significant since the search space is highly convex.
In this paper, we explored only protein-protein interactions and GO annotations for link weights and GO annotations for node weights as the knowledge sources. Exploring alternative types of knowledge sources for the node and link weights may lead to better performance and is a possible extension for further research. Another extension is to combine the rankings from various networks derived from different knowledge sources. In the future, we plan on exploring these different research avenues.
Conclusions
We presented a new network-based algorithm that is able to incorporate different types of biological knowledge in nodes and in links called the KNGP algorithm. Our results indicate that encoding both node and link knowledge can improve performance over using only link knowledge in network-based gene prioritization. We hope that researchers will find our new network-based approach useful for candidate gene prioritization and that future extensions will yield additional improvements.
Supporting Information
Appendix S1 Uniprot identifiers of known proteins associated with each disease, obtained from the Genetic Association Database. | 7,578.2 | 2013-11-19T00:00:00.000 | [
"Computer Science",
"Biology"
] |
MicroRNA-610 suppresses osteosarcoma oncogenicity via targeting TWIST1 expression
Osteosarcoma is the most frequent primary bone tumor affecting adolescents and young adults. MicroRNAs (miRNAs) are short, endogenous, non-coding RNAs that play important roles in the initiation and progression of tumors. In this study, we explored the biological function and expression of miR-610 in osteosarcoma. We showed that miR-610 expression was downregulated in osteosarcoma tissues and cell lines. Elevated expression of miR-610 suppressed osteosarcoma cell proliferation, cell cycle progression, invasion and the EMT program. Moreover, overexpression of miR-610 increased the sensitivity of MG-63 and U2OS cells to cisplatin. Twist1 was identified as a direct target gene of miR-610 in osteosarcoma cells. Furthermore, we demonstrated that Twist1 was upregulated in osteosarcoma tissues and cell lines, and its expression was negatively associated with miR-610 expression in osteosarcoma tissues. Ectopic expression of Twist1 inhibited the sensitivity of miR-610-overexpressing MG-63 cells to cisplatin. We also showed that overexpression of Twist1 increased the proliferation and invasion of miR-610-overexpressing MG-63 cells. These data indicate that ectopic expression of miR-610 suppressed osteosarcoma cell proliferation, cell cycle progression and invasion, and increased the sensitivity of osteosarcoma cells to cisplatin by targeting Twist1 expression.
INTRODUCTION
Osteosarcoma is the most frequent primary bone tumor affecting adolescents and young adults; it is prone to early metastasis and frequently occurs in the long bones [1][2][3][4][5]. Despite recent advancements, including adjuvant chemotherapy, radiotherapy and wide tumor excision, the prognosis and 5-year survival rate of these patients remain poor [6][7][8][9]. Chemotherapeutic drugs such as cisplatin and doxorubicin are widely used in osteosarcoma, and the 5-year survival rate has increased from 20% to 70% [10,11]. However, the molecular mechanisms underlying acquired chemoresistance are still unknown [12][13][14]. There is an urgent need to elucidate these molecular mechanisms in order to develop therapeutic strategies.
In this study, we explored the biological function and expression of miR-610 in osteosarcoma. We showed that miR-610 expression was downregulated in osteosarcoma tissues and cell lines. Elevated expression of miR-610 suppressed osteosarcoma cell proliferation, cell cycle progression, invasion and the EMT program. Moreover, overexpression of miR-610 increased the sensitivity of MG-63 and U2OS cells to cisplatin.
miR-610 increased the sensitivity of osteosarcoma cells to cisplatin
The miR-610 expression level was downregulated in the osteosarcoma cell lines (HOS, SAOS-2, MG-63 and U2OS) compared to hFOB cells (Figure 1A). qRT-PCR analysis suggested that the miR-610 mimic significantly enhanced miR-610 expression in both MG-63 and U2OS cells (Figure 1B and 1C). The response of MG-63 and U2OS cells to cisplatin was enhanced after treatment with the miR-610 mimic compared to scramble-transfected cells (Figure 1D and 1E).
The expression of miR-610 was downregulated in osteosarcoma tissues
We next determined the miR-610 expression level in osteosarcoma tissues. We showed that miR-610 was downregulated in 21 osteosarcoma cases compared to the adjacent non-tumor tissues (Figure 2A). Overall, miR-610 expression was lower in the osteosarcoma tissues than in the adjacent non-tumor tissues (Figure 2B).
Overexpression of miR-610 suppressed the epithelial-mesenchymal transition (EMT) program
Ectopic expression of miR-610 increased the protein expression of the epithelial marker E-cadherin and decreased the protein expression of the mesenchymal markers N-cadherin, Vimentin and Snail (Figure 3A). MiR-610 overexpression promoted E-cadherin mRNA expression and suppressed N-cadherin, Vimentin and Snail mRNA expression (Figure 3B).
miR-610 overexpression inhibited the osteosarcoma cell cycle and invasion
Ectopic expression of miR-610 suppressed cell cycle progression in MG-63 and U2OS cells (Figure 5A and 5B). Elevated expression of miR-610 decreased cell invasion in both MG-63 and U2OS cells (Figure 5C and 5D).
Twist1 was a direct target gene of miR-610
To search for the molecular mechanism underlying the function of miR-610 in osteosarcoma cells, we used the TargetScan database to identify potential target genes of miR-610; the predicted binding site in the Twist1 3ʹUTR is shown in Figure 6A. Luciferase assays demonstrated that miR-610 overexpression decreased the luciferase activity of the wild-type (WT) Twist1 3ʹUTR vector, but not of the mutated (Mut) Twist1 3ʹUTR construct, in both MG-63 and U2OS cells (Figure 6B and 6C). Overexpression of miR-610 suppressed Twist1 protein expression in both MG-63 and U2OS cells (Figure 6D and 6E).
Twist1 was upregulated in osteosarcoma tissues
The Twist1 expression level was upregulated in the osteosarcoma cell lines (HOS, SAOS-2, MG-63 and U2OS) compared to hFOB cells (Figure 7A). We further determined the Twist1 expression level in osteosarcoma tissues. We showed that Twist1 was upregulated in 25 osteosarcoma cases compared to the adjacent non-tumor tissues (Figure 7B). Twist1 expression was higher in the osteosarcoma tissues than in the adjacent non-tumor tissues (Figure 7C). The expression of Twist1 was negatively associated with miR-610 expression in the osteosarcoma tissues (Figure 7D).
miR-610 increased the sensitivity of osteosarcoma cells to cisplatin and decreased osteosarcoma cell proliferation and invasion by downregulating Twist1
Twist1 mRNA and protein expression was upregulated in MG-63 cells after transfection with the Twist1 vector (Figure 8A and 8B). The response of MG-63 and U2OS cells to cisplatin was reduced after treatment with the Twist1 vector compared to control-transfected cells (Figure 8C and 8E). Moreover, the responses of miR-610-overexpressing MG-63 and U2OS cells to cisplatin were decreased after transfection with the Twist1 vector compared with the control vector (Figure 8D and 8F). Overexpression of Twist1 enhanced MG-63 cell proliferation (Figure 8G) and invasion (Figure 8I). Moreover, ectopic expression of Twist1 promoted the proliferation (Figure 8H) and invasion (Figure 8J) of miR-610-overexpressing MG-63 cells.
DISCUSSION
Osteosarcoma is the most frequent bone tumor occurring in adolescence and childhood, with high mortality [14,34,35]. However, the molecular mechanisms of osteosarcoma progression remain elusive. Deregulated miRNA expression is thought to be involved in the progression of human tumors [36][37][38][39]. In this study, we explored the biological function and expression of miR-610 in osteosarcoma. We showed that miR-610 expression was downregulated in osteosarcoma tissues and cell lines. Elevated expression of miR-610 suppressed osteosarcoma cell proliferation, cell cycle progression, invasion and the EMT program. Moreover, overexpression of miR-610 increased the sensitivity of MG-63 and U2OS cells to cisplatin. Twist1 was identified as a direct target gene of miR-610 in osteosarcoma cells. Furthermore, we demonstrated that Twist1 was upregulated in osteosarcoma tissues and cell lines. The expression of Twist1 was negatively associated with miR-610 expression in the osteosarcoma tissues. Ectopic expression of Twist1 inhibited the sensitivity of miR-610-overexpressing MG-63 cells to cisplatin. We also showed that overexpression of Twist1 increased the proliferation and invasion of miR-610-overexpressing MG-63 cells. These data indicate that miR-610 plays a tumor suppressor role in osteosarcoma progression and is associated with the sensitivity of osteosarcoma cells to cisplatin.
Previous data demonstrated that miR-610 is involved in tumor initiation and progression [40][41][42]. For example, Wang et al. [40] suggested that miR-610 suppressed gastric cancer cell migration and invasion by inhibiting vasodilator-stimulated phosphoprotein (VASP) expression. Zeng et al. [43] confirmed that miR-610 was decreased in hepatocellular carcinoma tissues and cell lines; ectopic expression of miR-610 reduced hepatocellular carcinoma cell proliferation and tumorigenicity by regulating transducin beta-like 1 X-linked (TBL1X) and lipoprotein receptor-related protein 6 (LRP6) expression. Mo et al. [42] showed that the expression level of miR-610 was downregulated in glioblastoma cells and tissues, and that overexpression of miR-610 suppressed glioblastoma cell proliferation by regulating CCND2 and AKT3 expression. Sun et al. [41] found that the miR-610 expression level was downregulated in colorectal cancer tissues, and that overexpression of miR-610 decreased colorectal cancer cell proliferation, invasion and migration by regulating hepatoma-derived growth factor (HDGF) expression. Yan et al. [44] also showed that miR-610 expression was decreased in glioma samples, and that overexpression of miR-610 suppressed glioma cell proliferation, invasion and migration by targeting MDM2 expression. In line with these data, we also demonstrated that miR-610 expression was downregulated in osteosarcoma tissues and cell lines, and that elevated expression of miR-610 suppressed osteosarcoma cell proliferation, cell cycle progression, invasion and the EMT program.
To study the potential mechanism by which miR-610 regulates the proliferation and invasion of osteosarcoma cells, we used an open-target prediction program (the TargetScan database) to predict target genes of miR-610, and we focused on Twist1. Twist1 is a conserved basic helix-loop-helix transcription factor that regulates cell migration, embryonic morphogenesis and the differentiation of myoblasts, mesodermal cells and osteoblasts [45][46][47][48]. Recently, several studies showed that Twist1 acts as an oncogene that increases tumor cell proliferation, migration and invasion, and induces EMT progression, angiogenesis and anti-apoptosis [49][50][51]. Twist1 has been found to be upregulated in many tumors, such as gastric cancer, ovarian cancer, breast cancer, bladder cancer and also osteosarcoma [52][53][54][55][56]. Increasing numbers of studies have suggested that Twist1 is involved in drug resistance in cancer [55,57,58]. In our study, we first used the TargetScan database to show that there is a putative binding site for miR-610 in Twist1. Luciferase assays demonstrated that miR-610 overexpression decreased the luciferase activity of the WT Twist1 3ʹUTR vector, but not of the Mut Twist1 3ʹUTR construct, in both MG-63 and U2OS cells. Overexpression of miR-610 suppressed Twist1 protein expression in both MG-63 and U2OS cells. Moreover, we also showed that the Twist1 expression level was upregulated in osteosarcoma tissues and cell lines. The expression of Twist1 was negatively associated with miR-610 expression in the osteosarcoma tissues. Furthermore, we showed that Twist1 decreased the sensitivity of osteosarcoma cells to cisplatin, and that ectopic expression of Twist1 promoted osteosarcoma cell proliferation and invasion. We demonstrated that miR-610 overexpression increased the sensitivity of osteosarcoma cells to cisplatin and decreased osteosarcoma cell proliferation and invasion by downregulating Twist1 expression.
In summary, our study suggested that the miR-610 expression level is downregulated in osteosarcoma samples and cell lines. Ectopic expression of miR-610 suppressed osteosarcoma cell proliferation, cell cycle progression and invasion, and increased the sensitivity of osteosarcoma cells to cisplatin by targeting Twist1 expression.
Tissues, cell lines, culture and transfection
Osteosarcoma and matched adjacent non-tumor tissues were obtained from surgical resection in our hospital. Tissues were immediately frozen in liquid nitrogen and stored until use. This project was approved by the Institutional Ethics Committee of Central Hospital of Cangzhou City, and written informed consent was obtained from each patient. The osteosarcoma cell lines (MG63, HOS, U2OS, and Saos-2) and the normal human osteoblastic cell line hFOB were cultured in DMEM (Dulbecco's modified Eagle's medium) supplemented with FBS, penicillin and streptomycin. The miR-610 mimic and scrambled oligonucleotide were obtained from GenePharma (Shanghai, China) and transfected into MG-63 cells using Lipofectamine 2000 (Invitrogen, CA, USA) according to the manufacturer's protocol. The characteristics of the patients are described in Supplementary Table 1.
Quantitative RT-PCR
Total RNA was prepared from cells or tissues with TRIzol (Life Technologies, NY). Quantification of miR-610 and TWIST1 was performed by qRT-PCR according to the manufacturer's instructions. U6 expression was used as the internal control for miR-610; GAPDH expression was used as the internal control for TWIST1. TWIST1, forward primer: 5ʹ-ACGAGCTGGACTCCAAGATG-3ʹ and reverse primer: 5ʹ-CACGCCCTGTTTCTTTGAAT-3ʹ; GAPDH, forward primer: 5ʹ-GACTCATGACCACAGTCCATGC-3ʹ and reverse primer: 5ʹ-AGAGGCAGGGATGATGTTCTG-3ʹ.
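The paper does not state its quantification formula; the snippet below sketches the widely used 2^-ΔΔCt method as one plausible choice, with U6 or GAPDH as the internal control and the adjacent non-tumor (or untreated) sample as the calibrator:

```python
def relative_expression(ct_gene, ct_ref, ct_gene_cal, ct_ref_cal):
    """2^-ddCt sketch (an assumption, not stated in the paper): ct_ref is
    the internal control (U6 for miR-610, GAPDH for TWIST1), and the
    *_cal arguments are the calibrator sample's Ct values."""
    d_ct = ct_gene - ct_ref            # normalize to internal control
    d_ct_cal = ct_gene_cal - ct_ref_cal
    return 2.0 ** -(d_ct - d_ct_cal)   # fold change relative to calibrator
```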
Cell proliferation and colony formation
Cells were cultured in 96-well plates, and cell proliferation was determined by MTT assay (Promega, Madison, USA) according to the manufacturer's instructions. Cell proliferation was measured after 24 and 48 hours. Absorbance was measured at 450 nm with a microtiter plate reader (Molecular Devices, CA, USA).
Cell cycle and invasion
For cell cycle analysis, cells were fixed with 70% ethanol and re-suspended in PBS containing RNase, Triton X-100 and propidium iodide (Sigma). Cells were then analyzed on a FACSArray flow cytometer (Becton Dickinson, San Jose, CA, USA). For cell invasion, transwell chambers were used. Cells were seeded in the upper chamber with a non-coated membrane (Millipore) in serum-free media. The lower chamber, containing media with FBS, served as a chemoattractant. Cells that migrated to the lower chamber were counted under a microscope.
Luciferase report assay
Cells were cultured in 48-well plates to approximately 60% confluence. Cells were co-transfected with miR-610 mimic or scramble, pGL3-TWIST1-3'UTR, and pRL-TK using Lipofectamine 2000 according to the manufacturer's instructions. Renilla and firefly luciferase activities were determined using the dual-luciferase reporter assay (Promega, USA).
Western blot analysis
Proteins were isolated using RIPA (radioimmunoprecipitation assay) lysis buffer. Total protein was separated by 12% SDS-PAGE and then transferred to polyvinylidene fluoride (PVDF) membranes (Millipore, USA). After blocking with nonfat milk, the membranes were incubated with antibodies against TWIST1 (1:1000, Santa Cruz, USA) and GAPDH (1:5000, Santa Cruz, USA). Bands were visualized with ECL reagent (Applygen, Beijing).
Statistical analysis
Data are shown as mean ± SD. Comparisons among more than two groups were performed with ANOVA, and differences between two groups were assessed using Student's t-test. P < 0.05 was considered statistically significant.
CONFLICTS OF INTEREST
None. | 3,028.2 | 2017-04-11T00:00:00.000 | [
"Biology"
] |
Fast Depth Intra Mode Decision Based on Mode Analysis in 3D Video Coding
Multiview video plus depth (MVD), which consists of a texture image and its associated depth map, has been introduced as a 3D video format, and 3D video coding, such as 3D-HEVC, was developed to efficiently compress this MVD data. However, this requires high encoding complexity because of the additional depth coding. In particular, intra coding using various prediction modes is very complicated. To reduce the complexity, we propose a fast depth intra mode decision method based on mode analysis. The proposed method adaptively reduces the number of original candidate modes in a mode decision process. Experimental results show that the proposed method achieves high performance in terms of the complexity reduction.
Introduction
To efficiently transmit high quality video contents over a limited bandwidth, HEVC [1] was developed by the Joint Collaborative Team on Video Coding (JCT-VC), which was established by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). Since it includes lots of advanced techniques, such as advanced motion vector prediction modes for inter coding and various angular prediction modes for intra coding, very high coding efficiency is obtained. In particular, a rate-distortion (RD) optimization process [2] for the various intra prediction modes provides significantly high coding performance. However, to determine the optimum intra prediction mode, the RD optimization process needs all encoding processes, including transform, quantization, inverse quantization, inverse transform, and entropy coding for each mode. After comparison of the encoding results, the optimum mode maximizing the performance is selected among all possible candidate modes. As a result, the mode decision process with RD optimization places a very high computational burden on HEVC encoders.
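The mode decision loop can be sketched compactly; the snippet below uses the standard HEVC RD cost J = D + λ·R, with distortion and rate as callables standing in for the full per-mode encoding pipeline:

```python
def rd_best_mode(candidate_modes, distortion, rate, lam):
    """RD optimization sketch: each candidate intra mode is fully encoded
    (distortion and rate stand in for the transform / quantization /
    entropy coding chain), and the mode minimizing J = D + lam * R wins."""
    return min(candidate_modes, key=lambda m: distortion(m) + lam * rate(m))
```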
3D video coding uses a multiview video plus depth (MVD) format, which consists of a texture image and its corresponding depth map, to reduce 3D data size. Figure 1 shows an example of the MVD format in a Newspaper test sequence. A texture image represents the brightness of an object, whereas a depth map indicates the distance between an object and a camera as a grey scale image. In general, the depth map is used to generate virtual texture views at arbitrary viewpoints, based on a depth image-based rendering (DIBR) technique [3]. Thanks to the high coding performance of the HEVC standard, 3D-HEVC [4] was developed by the Joint Collaborative Team on 3D Video Coding Extension Development (JCT-3V) to efficiently compress this MVD format. High coding performance is provided by using a high correlation between the texture image and the depth map in MVD, but this requires drastically high encoding complexity because of the additional depth coding. In particular, the intra prediction mode decision processes with the RD optimization in both texture and depth coding are very complicated. In addition, 3D-HEVC adopted several advanced prediction methods for the efficient depth intra coding, such as a depth modeling mode (DMM) [5], generic segment-wise DC coding (SDC) [6], and a depth intra skip mode (DIS) [7], which also cause some complexity.
Many fast encoding algorithms were developed to reduce the complexity of the intra coding [8-15] and inter coding [16-18] in 3D-HEVC. In particular, there are two research categories to reduce the encoding complexity of the mode decision process in the depth intra coding [8-15]. The first category is to optimize the advanced depth prediction methods and adaptively skip them. Most of the fast algorithms in this category were developed to simplify DMM, because it requires much more complicated operations than SDC and DIS. For example, the optimum DMM wedgelet is determined through an exhaustive search process [5]. Fast algorithms proposed in [8,9] adaptively skip this full DMM search in flat and smooth regions. A fast algorithm proposed in [10] simplifies wedgelet candidates, based on the corresponding texture information. In [11], some wedgelet partitions are skipped, based on the information of rough mode decision (RMD). A fast algorithm proposed in [12] reduces the encoding complexity by employing a simplified edge detector. The gradient-based mode filter in [13] is applied to borders of encoded blocks and determines the best positions to reduce the DMM-related mode decision process. A fast algorithm proposed in [14] selectively skips unnecessary DMM processes, based on a simple edge classification.
The second category is to reduce the number of original candidate modes in the original mode decision process, which include the planar, DC, and 33 angular prediction modes. Unlike the texture image, the depth map mainly contains homogenous regions and sharp edges at object boundaries. In general, the homogenous areas are compressed with the DC and planar modes. The DC mode uses an average value of adjacent pixels in the prediction, whereas the planar mode employs a weighted average. The sharp edges are usually compressed with the horizontal and vertical modes, which do not need interpolation filtering. Based on this observation, a fast conventional HEVC intra mode decision and adaptive DMM search method (FHEVCI+ADMMS), which was recently developed for the fast intra mode decision [15], only uses the planar, DC, horizontal, and vertical modes in the mode decision process, instead of using the 35 different modes. Also, when the optimum mode among these four modes is the planar or DC mode, DMM is skipped. Even though this method is very simple, it significantly reduces the encoding complexity by reducing the number of candidate modes in the mode decision process, with negligible coding loss. However, based on our analysis, it was observed that there is still room for improvement in simplifying the original mode decision process.
In addition, since the advanced depth prediction methods, such as DMM, SDC, and DIS, have their own disabling flags, they can be turned off in real-time applications. On the contrary, there is no flag that can adaptively enable or disable some of the original intra prediction modes. Hence, the research on the second category is very important. In this paper, we performed some useful mode analysis on depth coding, and then generated a mode pattern table based on the analysis. The proposed fast intra mode decision method adaptively reduces the number of candidate modes in the original mode decision process by employing the mode pattern table. Experimental results show that the proposed method outperforms the FHEVCI+ADMMS method, in terms of complexity reduction.
This paper is organized as follows. Section 2 explains the original intra mode decision method in 3D-HEVC and the FHEVCI+ADMMS method in detail. Section 3 shows results of the mode analysis and proposes our fast depth intra mode decision method. Section 4 discusses experimental results including the coding performance and the encoding complexity. Section 5 summarizes this study.
Original Depth Intra Mode Decision in 3D-HEVC
The original intra prediction mode decision process determines the optimum mode among the planar, DC, and 33 angular modes [19]. To determine the optimum prediction mode in 3D-HEVC depth coding, the RMD process first calculates the sum of the absolute transformed difference (SATD) of each mode. Based on this SATD cost, a small number of prediction modes are inserted into an RD list. The number of modes selected in RMD depends on the block size. For instance, if the width and height of the block are greater than or equal to sixteen, three modes are added to the RD list; if the width and height are less than sixteen, eight modes are added. Second, the three most probable modes (MPMs) are added to the list. In general, the MPMs include the prediction modes of the left and above blocks around the current block, and one special mode which is determined according to a predefined rule. For example, if the left and above blocks are compressed with two different modes, the MPMs are set as the two neighboring block modes, with the planar mode as the special mode. If one of the two neighboring modes is the planar mode, the special mode is set as the DC mode; if the two neighboring modes are the planar and DC modes, the vertical mode is used instead. Next, in order to efficiently predict sharp edges, DMM is added to the list when the minimum SATD cost mode is not the planar mode. Finally, all the candidate modes in the list are compared to each other during the RD optimization, and the optimum mode becomes the mode having the minimum RD cost. Figure 2 shows a flowchart of the original depth intra mode decision method in 3D-HEVC.
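The following is a minimal sketch of the RD-list construction just described, assuming Python and hypothetical satd_cost/rd_cost callbacks in place of the encoder's internal cost evaluations; it is an illustration of the decision flow, not 3D-HTM code (mode indices follow the usual HEVC numbering).

```python
# Sketch of the original 3D-HEVC depth intra RD-list construction described
# above. satd_cost(mode) and rd_cost(mode) are assumed stand-ins for the
# encoder's cost functions; "DMM" marks the depth modeling mode candidate.
PLANAR, DC, HOR, VER = 0, 1, 10, 26

def build_rd_list(block_w, block_h, satd_cost, mpm_modes):
    all_modes = list(range(35))                    # planar, DC, 33 angular
    costs = {m: satd_cost(m) for m in all_modes}   # rough mode decision (RMD)
    n = 3 if min(block_w, block_h) >= 16 else 8    # RMD survivors by block size
    rd_list = sorted(all_modes, key=costs.get)[:n]
    for m in mpm_modes:                            # add the three MPMs
        if m not in rd_list:
            rd_list.append(m)
    if min(costs, key=costs.get) != PLANAR:        # DMM only if min-SATD != planar
        rd_list.append("DMM")
    return rd_list

def decide_mode(rd_list, rd_cost):
    return min(rd_list, key=rd_cost)               # full RD optimization

# Demo with a fabricated cost function (purely illustrative):
fake_satd = lambda m: abs(m - VER) + 1             # pretend vertical fits best
print(build_rd_list(16, 16, fake_satd, mpm_modes=[PLANAR, DC, VER]))
```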
State-Of-The-Art Algorithm for Fast Depth Intra Mode Decision
Since a depth map is much simpler than a texture image, as shown in Figure 1, most regions in the depth map are compressed using the planar, DC, horizontal, and vertical prediction modes, which are less complicated than the other modes. Considering this characteristic of the depth map, the FHEVCI+ADMMS method was proposed to reduce the number of original candidate modes and simplify DMM [15]. First, the FHEVCI+ADMMS method calculates the RD costs of the planar, DC, horizontal, and vertical modes, and then finds the suboptimum mode having the minimum RD cost. Because it does not consider the other 31 angular prediction modes, the computational complexity of their RD cost calculation is not required. In addition, it completely ignores RMD and MPM, so the complexity of the SATD comparison and the MPM list construction is avoided. Second, it skips DMM when the suboptimum mode is the planar or DC mode. DMM consists of two different submodes, the explicit wedgelet signalization and intercomponent prediction modes [5]. The explicit wedgelet signalization mode searches the optimum wedgelet partition and then transmits the partition information. The intercomponent prediction mode predicts a contour partition, based on the texture information. When the suboptimum mode is the horizontal or vertical mode, the RD cost of DMM is calculated with a simplified wedgelet search. Finally, the RD costs of DMM and the suboptimum mode are compared with each other, and the optimum mode is determined to be the minimum RD cost mode. Since the FHEVCI+ADMMS method only employs the four modes in the original mode decision process and adaptively skips the DMM-related mode decision process, the encoding complexity is significantly reduced. A flowchart of the FHEVCI+ADMMS method is shown in Figure 3.
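A compact sketch of this decision flow is given below; rd_cost and rd_cost_dmm_simplified are hypothetical stand-ins for the encoder's RD evaluations rather than actual 3D-HTM functions.

```python
# Hedged sketch of the FHEVCI+ADMMS flow summarized above: only planar, DC,
# horizontal and vertical are RD-tested, and DMM (with a simplified wedgelet
# search) is tried only when the suboptimum mode is horizontal or vertical.
PLANAR, DC, HOR, VER = 0, 1, 10, 26

def fhevci_admms(rd_cost, rd_cost_dmm_simplified):
    candidates = [PLANAR, DC, HOR, VER]
    subopt = min(candidates, key=rd_cost)          # RMD and MPM are skipped
    if subopt in (PLANAR, DC):
        return subopt                              # DMM skipped entirely
    # Suboptimum is horizontal or vertical: compare with simplified DMM.
    return subopt if rd_cost(subopt) <= rd_cost_dmm_simplified() else "DMM"

# Demo with fabricated costs (illustrative only):
costs = {PLANAR: 12.0, DC: 10.0, HOR: 9.0, VER: 11.0}
print(fhevci_admms(costs.get, lambda: 9.5))        # horizontal beats DMM here
```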
Mode Analysis
In order to perform mode analysis in 3D-HEVC, we investigated the computational complexity of both the original and DMM-related mode decision processes. As mentioned in Section 2.1, the original mode decision process is performed in the order of RMD, the MPM list construction, and RD optimization. As mentioned in Section 2.2, the DMM-related mode decision process performs the wedgelet partition search and contour partition prediction modes and then compares them with the RD optimization. Figure 4 shows the mode decision runtime of each process. In this experiment, we used Poznan_Hall2 and Kendo test sequences with four different quantization parameters (QPs) of 34, 39, 42, and 45. The sequence information and coding options will be discussed in Section 4. It can be observed that the encoding complexity of the original mode decision process is higher than that of the DMM-related mode decision process. In particular, the complexity portion of the original mode decision process is significantly high at high QP settings. For example, in the Poznan_Hall2 test sequence with a QP of 45, the DMM-related encoding time takes about 20% of the overall mode decision process, whereas 80% of the encoding time is required by the original mode decision process. Since the original mode decision process imposes a high computational burden on the 3D-HEVC encoder, we focus on the second research category, which reduces the number of original candidate modes.
Figure 5 represents the probability that the optimum prediction mode with the RD optimization is in accordance with (1) the mode having the minimum SATD cost, (2) one of the two modes having the first and second minimum costs, and (3) one of the three modes having the first, second, and third minimum costs. Since most areas in the depth map are compressed with the planar, DC, horizontal, and vertical modes, we only considered these four modes. In this experiment, we used Poznan_Hall2, Undo_Dancer, and Shark test sequences with QPs of 39 and 42. As shown in Figure 5, the probability that the minimum SATD cost mode and the optimum mode are the same is relatively low in all the test sequences. For example, the probability is about 80% at a QP of 39 and less than 90% at a QP of 42. However, the probability that the two modes having the first and second minimum costs contain the optimum mode is greater than or equal to 90%, which indicates relatively high accuracy. This means that two candidate modes are enough to determine the optimum mode in the mode decision process with the RD optimization. The other test sequences also showed similar results. Therefore, after the SATD cost calculation of the four modes, the proposed method inserts the two modes having the minimum SATD costs into the RD list, and then checks them as the candidate modes. It should be noted that the FHEVCI+ADMMS method directly calculates the RD costs of the planar, DC, horizontal, and vertical modes without the SATD cost calculation. On the other hand, the proposed method compares the RD costs of the two modes, which are the first and second minimum SATD cost modes. Since the proposed method should calculate the SATD costs of the four modes, further efforts are required to reduce the encoding complexity.
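Since the analysis above ranks modes by SATD, it may help to recall what SATD computes: the sum of absolute values of the Hadamard-transformed prediction residual. A minimal 4x4 illustration follows (real encoders typically use 8x8 transforms, apply a normalization factor, and add a rate term for signalling the mode).

```python
# SATD of a 4x4 block: sum of absolute values of the Hadamard-transformed
# prediction residual. Simplified illustration only, not encoder code.
import numpy as np

H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])   # 4x4 Sylvester-type Hadamard matrix

def satd4x4(original: np.ndarray, prediction: np.ndarray) -> int:
    residual = original.astype(int) - prediction.astype(int)
    return int(np.abs(H4 @ residual @ H4.T).sum())

# Illustrative call with random data (not encoder output):
rng = np.random.default_rng(0)
orig = rng.integers(0, 256, size=(4, 4))
pred = np.full((4, 4), int(orig.mean()))           # crude DC-style prediction
print(satd4x4(orig, pred))
```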
Fast Depth Intra Mode Decision
The proposed method only calculates the SATD costs of the planar, DC, horizontal, and vertical prediction modes, and then adds the first and second minimum cost modes to the RD list. In order to further reduce the encoding complexity, it adaptively reduces the number of candidate modes from two to one. Figure 6 illustrates the correlation between the optimum mode and the minimum SATD cost mode. We used a Poznan_Hall2 sequence with QPs of 39 and 42 in this experiment. When the planar mode is the minimum SATD cost mode (blue), as displayed in Figure 6, the optimum mode is likely to be the planar mode at both QPs. For example, the probability that the planar mode becomes the optimum mode is about 95% and 96% for QPs of 39 and 42, respectively. On the other hand, the probability that the optimum mode is determined to be the DC, horizontal, or vertical mode is drastically low. When the DC mode is the minimum cost mode (green), the probability that the planar and DC modes become the optimum mode is about 53% and 43% with a QP of 39, and about 49% and 46% for a QP of 42, respectively. This suggests that the optimum mode is likely to be the planar or DC mode. When the horizontal mode is the minimum cost mode (orange), the probability that the vertical mode becomes the optimum mode is very low. Similarly, in the case that the vertical mode is the minimum cost mode (yellow), the horizontal mode is most likely not the optimum mode. The other sequences also showed similar results.
Table 1 shows a mode pattern table, which was generated by taking into account the correlation between the optimum mode and the minimum SATD cost mode in Figure 6. In the proposed method, the mode pattern table is used to eliminate candidate modes from the list. For example, the proposed method adds the first and second minimum cost modes to the RD list after the SATD cost calculation of the planar, DC, horizontal, and vertical modes. When the first minimum cost mode is the planar mode, the second minimum cost mode is always eliminated from the list, according to the mode pattern table in Table 1, because the optimum mode is likely to be the planar mode, as displayed in Figure 6 (blue). When the DC mode is the first minimum cost mode, the second minimum cost mode is eliminated if it is the horizontal or vertical mode, because the probability that the horizontal and vertical modes become the optimum mode is very low, as shown in Figure 6 (green); however, if it is the planar mode, it remains in the list. Similarly, when the horizontal mode is the first minimum cost mode, the vertical mode is eliminated if it is the second minimum cost mode, and vice versa. Hence, through the adaptive elimination of the second minimum SATD cost mode, the proposed method can reduce the number of candidate modes from two to one. Finally, similar to the mode decision process in 3D-HEVC, the proposed method does not add DMM to the list when the first minimum cost mode is the planar mode. Otherwise, DMM, including the wedgelet signalization and intercomponent prediction modes, is added to the list and compared with the original candidate modes through the RD optimization. Table 2 shows the nine combinations of possible candidate modes in the RD list, based on the mode pattern table. If the first minimum SATD cost mode is planar (case 1), the second minimum cost mode is always eliminated and DMM is not added to the list. Otherwise (cases 2 to 9), the second minimum cost mode can be eliminated or remain, but DMM is always added to the list. As a result, the total number of candidate modes for the RD optimization in the proposed method may be one, two, or three, and is strongly dependent on the mode pattern table. Figure 7 presents a flowchart of the proposed depth intra mode decision method using the mode pattern table.
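As a sketch of how this table-driven pruning might look in code, the keep/drop entries below are inferred from the prose description of Table 1 (the table body itself is not reproduced here), and satd_cost is a hypothetical callback.

```python
# Sketch of the proposed pruning: rank the four modes by SATD, keep the
# second-ranked mode only for the (first, second) pairs the pattern table
# allows, and add DMM unless the first-ranked mode is planar.
PLANAR, DC, HOR, VER = 0, 1, 10, 26

# (first minimum, second minimum) pairs for which the second mode survives,
# inferred from the description of Table 1 above.
KEEP_SECOND = {(DC, PLANAR), (HOR, PLANAR), (HOR, DC), (VER, PLANAR), (VER, DC)}

def proposed_rd_list(satd_cost):
    first, second = sorted([PLANAR, DC, HOR, VER], key=satd_cost)[:2]
    rd_list = [first]
    if (first, second) in KEEP_SECOND:
        rd_list.append(second)
    if first != PLANAR:
        rd_list.append("DMM")          # wedgelet + intercomponent submodes
    return rd_list                     # 1, 2, or 3 candidates in total

# Demo: DC is ranked first, horizontal second, so the second mode is dropped
# and DMM is appended -> [DC, "DMM"].
satd = {PLANAR: 8.0, DC: 5.0, HOR: 6.0, VER: 9.0}.get
print(proposed_rd_list(satd))
```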
Results
The proposed method was implemented on top of the reference software 3D-HTM 14.0. We used eight JCT-3V test sequences with resolutions of 1024 × 768 and 1920 × 1088. Table 3 shows the sequence information. The three view numbers represent the indexes of the left, center, and right views, and the MVD data for these views is input to 3D-HEVC, as displayed in Figure 8. 3D-HEVC compresses these three views with a P-I-P prediction structure, as shown in Figure 9. For example, the center view is encoded as the I view, which is called the base view. This base view can be decoded with HEVC because it does not use the inter-view prediction. On the other hand, both the left and right views are encoded as P views using the inter-view prediction. Hence, they are able to use the already encoded views as references in the inter-view prediction. The arrows in Figure 9 show the prediction direction from the reference view to the target view to be compressed. The view synthesis generates six synthesized views by using the three decoded texture images and depth maps, based on the three-view configuration in the 3D video coding. All coding parameters followed the all-intra setting in the common test conditions (CTC) of JCT-3V [20]. The coding performance was measured according to the Bjontegaard delta bitrate (BDBR) and PSNR (BDPSNR) [21] in percentage and dB, respectively, and the complexity reduction (CR) was measured with the encoding time as follows:

CR = (ET(reference) − ET(proposed)) / ET(reference) × 100 (%), (1)

where ET(reference) and ET(proposed) represent the encoding times of the reference software and the proposed software, respectively.
Table 4 shows the overall performance of the FHEVCI+ADMMS method [15] and the proposed method. BDBR(D) denotes the overall performance in terms of the average PSNR of the three decoded views over the total coding bitrate of the texture images and the depth maps, and BDBR(S) denotes the overall performance in terms of the average PSNR of the six synthesized views over the total bitrate [20]. CR(O) was computed with the overall encoding time of the texture images and depth maps in Equation (1), whereas CR(D) was calculated with the depth encoding time only. Avg. indicates the average performance over all the test sequences. Both the FHEVCI+ADMMS method and the proposed method only increase the bitrates by about 0.1% and 0.6%, in terms of the decoded and synthesized PSNRs on average, respectively, which is a very small coding loss. In terms of the complexity reduction, the proposed method reduces the encoding time by about 10% more than the FHEVCI+ADMMS method, on average. For instance, the proposed method saves the encoding time by 34.42% and 39.27% on average, in terms of the overall and depth encoding times, respectively, whereas the FHEVCI+ADMMS method only reduces the encoding times by 23.95% and 27.38% on average. In addition, the proposed method achieves better results than the FHEVCI+ADMMS method in all the test sequences. The FHEVCI+ADMMS method always investigates the four prediction modes (planar, DC, horizontal, and vertical) in the original mode decision process with the RD optimization, whereas the proposed method only tests one or two modes, based on the mode pattern table. Hence, a higher encoding time reduction can be achieved.
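As a quick sanity check on Equation (1), a few lines of Python reproduce the reported reduction figures; the timings below are made-up inputs chosen only to match the 39.27% average depth-coding reduction.

```python
# Complexity reduction per Equation (1): CR = (ET_ref - ET_prop) / ET_ref * 100.
def complexity_reduction(et_reference: float, et_proposed: float) -> float:
    return (et_reference - et_proposed) / et_reference * 100.0

# Illustrative timings only: a depth encoding that drops from 1000 s to
# 607.3 s yields CR = 39.27%, matching the average cited above.
print(f"CR = {complexity_reduction(1000.0, 607.3):.2f}%")
```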
Table 5 shows the detailed information of both methods. ∆Bits and ∆PSNR were measured with the total coding bitrate and PSNR of the six synthesized views. Avg. indicates the BDBR and BDPSNR performance in each test sequence. As shown in Table 5, even though both methods use a small number of candidate modes in the mode decision, the coding degradation is very small at all the QPs. This means that the planar, DC, horizontal, and vertical modes are enough to determine the optimum mode for depth coding. It also demonstrates that the proposed method further reduces the number of candidate modes among these four modes without significant coding loss. In addition, Table 6 shows the mode decision accuracy of the proposed method for QPs of 39 and 42 in percentage. This indicates the degree to which the optimum mode determined by the proposed method is the same as that determined by the original method, among the four modes. As shown in Table 6, the accuracy is very high for all the test sequences. This indicates that the number of candidate modes can be efficiently reduced based on the mode pattern table in Table 1.
Conclusions
This paper proposed a fast intra mode decision method for 3D-HEVC depth coding. Based on the mode analysis, the proposed method generated a mode pattern table, which can adaptively reduce the number of candidate modes in the original intra mode decision process. The experimental results demonstrated that the proposed method is more efficient than the state-of-the-art method in terms of complexity reduction.
However, the proposed method is only applied to the depth intra coding, so its impact is limited in real-time applications of 3D-HEVC. Therefore, in future work, we will extend the proposed method to the depth inter coding.
Figure 1. Example of the multiview video plus depth (MVD) format including a texture image (a) and a depth map (b).
Figure 2. Flowchart of the original depth intra mode decision method in 3D-HEVC.
Figure 3. Flowchart of the fast conventional HEVC intra mode decision and adaptive depth modeling mode (DMM) search method (FHEVCI+ADMMS) for the fast depth intra mode decision.
Figure 4. Mode decision runtime of the original and DMM-related mode decision processes in (a) Poznan_Hall2 and (b) Kendo test sequences with quantization parameters (QPs) of 34, 39, 42, and 45.
Figure 5. The probability that the optimum mode belongs to several minimum SATD (sum of the absolute transformed difference) cost modes for QPs of (a) 39 and (b) 42.
Figure 6. Correlation between the optimum mode and the minimum SATD cost mode for QPs of (a) 39 and (b) 42.
Figure 7. Flowchart of the proposed depth intra mode decision using the mode pattern table.
Figure 8. Three-view configuration including the left, center, right views in the 3D video coding.
Figure 9. Inter-view prediction structure (P-I-P) in the three-view configuration.
Table 1. Mode pattern table to eliminate the candidate modes in the rate-distortion (RD) list.
Table 2. Possible candidate modes in the RD list based on the mode pattern table.
Table 3. Sequence information.
Table 4. Overall performance of (a) the FHEVCI+ADMMS method and (b) the proposed method.
Table 5. Detailed information of (a) the FHEVCI+ADMMS method and (b) the proposed method.
Table 6. Mode decision accuracy of the proposed method.
Fiscal expenditures, revenues and labour productivity in South Africa
Abstract The COVID-19 pandemic emerged at a time when the South African economy was already battling to recover from the aftermath of the global financial crisis of 2007–09, which led the country to experience a decade-long slowdown in labour productivity. Our study investigates the role which government plays in influencing labour productivity by estimating a log-linearized growth model augmented with a fiscal sector, using the autoregressive distributed lag model applied to annual data for 1990–2020. We further disaggregate the composition of government size into seven expenditure items and six revenue items, and find (i) education, health, recreation and public safety to be the expenditure items most beneficial to short-run and long-run labour productivity, and (ii) income taxes and VAT to be the revenue items most beneficial to long-run productivity, although most taxes have adverse short-run effects. The policy implications of the study are discussed.
Introduction
This study examines the effect of government expenditure and revenues on labour productivity in South Africa over the last three decades.
ABOUT THE AUTHORS
Andrew Phiri, the corresponding author of the manuscript, is an associate professor in the Department of Economics at Nelson Mandela University, South Africa. He has a wide range of publications, with research interests mainly in macroeconomics, applied econometrics and financial economics. Chuma Mbaleki is a postgraduate student in the Department of Economics at Nelson Mandela University and is the first author of the article. His research interests are in public economics and applied econometrics.
PUBLIC INTEREST STATEMENT
Labour productivity is the amount of output which can be produced by each labourer and is considered an encompassing measure of welfare by economists. For instance, businesses are interested in increasing labour productivity as it has the potential to lower costs and increase profits. On the other hand, improved productivity could translate into higher wages and improved working conditions for labourers, whilst governments consider labour productivity as key to long-term job creation. Notably, South Africa has had poor labour productivity performance since her democratic transition in 1994, and this has worsened since the 2007-2008 global financial crisis. Our paper investigates the extent to which fiscal policy instruments such as taxes and expenditures can play a role in improving labour productivity in South Africa and, by taking a disaggregated approach in our empirical analysis, we are able to identify the individual tax and revenue items which either distort or improve labour productivity.
We consider this study important since conventional economic theory predicts labour productivity to be key in raising a country's long-term living standards and wellbeing, and yet labour productivity growth has been on a decline since the great recession period of 2008-2010, which coincides with an era of deteriorating economic growth and welfare (Bloom et al., 2020; Fernald, 2014). The more recent coronavirus pandemic has further depressed labour productivity, and international bodies such as the International Labour Organization (ILO), the International Monetary Fund (IMF) and the World Bank have placed emphasis on government intervention as a prescription for inducing labour productivity recovery and closing the growing "productivity gap" between industrialized and non-industrialized economies (ILO, 2020; IMF, 2021; World Bank, 2021).
Considering the stark differences in economic and fiscal structures globally, it is likely that the influence of the public sector on productivity varies across different countries, and this provides a motivation for country-specific empirical investigation. Moreover, it is unlikely that different/disaggregated classifications of public expenditures and taxation affect labour productivity the same way. We point this out because both conventional economic theory and empirical evidence commonly aggregate the effects of fiscal size on labour productivity (Cassou & Lansing, 1999; Mabugu et al., 2013). Furthermore, the "one-rule-fits-all" policy recommendations from the IMF and ILO do not provide a precise prescription of the fiscal mixture and adjustment needed to improve domestic labour productivity levels.
In our paper, we investigate the impact of government expenditures and taxes on labour productivity in South Africa between 1990 and 2020 using seven disaggregated measures of government expenditure items (education; health; housing; defence; social protection; public safety and order; as well as recreation, culture and religion) and six measures of fiscal revenue collections (Value Added Tax; personal income tax; property tax; fuel levies and SACU payments). Whilst we acknowledge the existence of previous international studies on the subject matter, we observe that the existing literature either investigates the impact of government expenditure on labour productivity (Ali, 1985; Aschauer, 1989; Auci et al., 2021; Fedotenkov et al., 2021; Hansson & Henrekson, 1994; Knight & Sabot, 1987; Najarzadeh et al., 2014; Wei et al., 2018) or the impact of taxation on labour productivity (Thomas, 1998; Vartìa, 2008; Ordonez, 2014; Salotti & Trecroci, 2016; McPhail et al., 2018; Davanzati & Giangrande, 2020; Peng et al., 2021). Our study bridges these two strands of empirical literature from a disaggregated perspective and therefore provides a more "encompassing" outlook on the subject matter.
Figure 1 presents a time series plot of the disaggregated classes of government expenditure items and labour productivity in South Africa for the period 1990-2020. As can be observed, labour productivity has been on a gradual increase since 1994. Furthermore, it can also be observed that educational expenditure has taken a greater portion of overall government spending since the dawn of the democratic dispensation in 1994. This increase became more pronounced from 2006, in the era of ASGISA, a government policy strategy which focused primarily on improving skills and innovation to accelerate growth and productivity. Other notable increasing expenditure items have been social protection and health expenditures. Social protection is particularly important for South Africa, whose economy is characterized by high unemployment and wide income gaps, most of which exist as a legacy of apartheid. In order to mitigate these effects, the government offers a wide range of social grants to the South African population. Spending on health is important for alleviating chronic diseases and other life-threatening illnesses by increasing access of the poor to quality health services. Moreover, health and social protection expenditure items have increased drastically as government mitigates the health, social and economic effects of the COVID-19 pandemic (Bhorat et al., 2021).
Figure 2 presents a time series plot of the disaggregated classes of government revenue items and labour productivity in South Africa for the period 1990-2020. Tax revenue has been notably increasing in South Africa following the establishment of the South African Revenue Service (SARS) in 1998, and tax collections have been dominated by personal income tax and value-added tax (VAT). A notable shortfall in revenue collection was experienced in most tax items from 2008 to 2010 as a consequence of the global financial crisis and recession periods. This shortfall was more notable because it followed a period (i.e. 2004-2006) when the South African economy recorded its first fiscal surplus in the new democratic era. Nevertheless, growth in revenue collection picked up in the post-recession period, although there has been an increased dependence and concentration on income taxes and VAT. In recent years, there has been debate on whether SARS should include a wealth tax as an additional source of fiscal revenue (Arendse & Stock, 2018), and this debate has intensified during the COVID period.
Our study's main contribution to the literature is that it sheds light on key policy questions pertaining to the sound mix of fiscal policies required to boost labour productivity. Firstly, "do all public expenditures and taxes affect labour productivity the same way, or do their effects differ across disaggregated fiscal items?" Secondly, "which fiscal expenditure or revenue components are most conducive to labour productivity, and are there any expenditure or taxation items that are redundant or unproductive?" Against the backdrop of a lack of academic literature addressing these policy questions for the case of South Africa, our study fills this empirical "hiatus" and contributes to the literature by estimating log-linearized growth regressions using the autoregressive distributed lag (ARDL) model of Pesaran et al. (2001) to investigate the short-run and long-run cointegration relationship between disaggregated fiscal expenditure/revenue items and labour productivity.
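As a rough illustration of this estimation strategy (not the authors' actual specification), the sketch below sets up an ARDL bounds-testing exercise with the ARDL/UECM tools shipped in statsmodels 0.13 and later; the CSV file, variable names, lag limits and deterministic case are all assumptions made for the example.

```python
# Minimal ARDL bounds-testing sketch in the spirit of Pesaran et al. (2001).
# File name, column names and settings are hypothetical placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import ardl_select_order, UECM

data = pd.read_csv("fiscal_sa_1990_2020.csv", index_col="year")  # hypothetical file
y = np.log(data["labour_productivity"])
X = np.log(data[["education_exp", "health_exp", "income_tax", "vat"]])

# Select lag orders by AIC, re-cast the chosen ARDL as an unrestricted ECM,
# and run the bounds test for a long-run levels relationship (case 3 =
# unrestricted constant, no trend).
sel = ardl_select_order(y, maxlag=2, exog=X, maxorder=2, ic="aic", trend="c")
uecm_res = UECM.from_ardl(sel.model).fit()
print(uecm_res.bounds_test(case=3))   # F-statistic vs. Pesaran critical bounds
```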
The rest of the study is organized as follows. Section 2 presents the literature review. Section 3 outlines the empirical framework and the methods used for estimation purposes. Section 4 presents the data and main empirical analysis. Section 5 concludes the study in the form of policy implications and avenues for future research.
Literature review
A country's ability to improve its standard of living depends on its ability to raise its output per worker and the literature identifies numerous channels through which this occurs. Firstly, higher productivity results in higher profits and growth for companies (Bloom et al., 2015). Secondly, higher productivity leads to higher employment and wages (Lazear, 2019;Stansbury & Summers, 2017). Thirdly, higher productivity leads to higher economic growth (Eichengreen et al., 2011). Fourthly, higher productivity leads to lower costs for consumers (Byerlee et al., 2005). Lastly, an increase in labour productivity within economic sectors is a main driver of economic growth, particularly in the industry and services sectors (Holman et al., 2008).
Labour productivity has grown to be an important tool used to measure economic performance in growth accounting policy models (Solow, 1956). Dynamic growth theory attributes the growth in labour productivity to improvements in labour, human capital, technology and government size. According to new growth models, technological advancement through research and development and high quality of human capital play a leading role in enhancing the output capabilities of labourers (Mankiw et al., 1992;Romer, 1990). Improving human capital through education, skills and research are viewed as essential components to advancing and learning new technologies required for dynamic growth (Romer, 1986) and new growth theory further hypothesizes that improvements in labour productivity can be fostered through an efficient government spending and collection structure (Barro, 1990;Barro & Redlick, 2011;Bleaney et al., 2001;Hansson & Henrekson, 1994).
In the next two subsections we review the two strands of empirical literature which examine the impact of government structure on labour productivity. Firstly, we review the literature which looks at the impact of government expenditure items on labour productivity. Secondly, we review the literature which investigates the impact of taxation items on labour productivity.
Government expenditure and labour productivity
Whilst traditional theory highlights the importance of government spending in stimulating the economy through multiplier effects, more recent dynamic growth theory argues that the effects of increasing fiscal size on output productivity are ambiguous. On one hand, government spending addresses market failures, increases utilization, reduces social inequality and is a natural portion of GDP accounting, hence exerting a positive effect on productivity. On the other hand, government size may crowd out private investment and production and encourage rent-seeking opportunities, which would exert a negative effect on output productivity (Hansson & Henrekson, 1994). Further ambiguity can be placed on traditional theory and empirical evidence on account of aggregating government expenditures, which masks any possible heterogeneous effects of individual spending items on growth and output productivity (Easterly & Rebelo, 1993; Engen & Skinner, 1992; Fölster & Henrekson, 1999; Ram, 1986; Romer & Romer, 2010).
Interestingly, there exists a handful of international empirical studies which have examined the impact of disaggregated fiscal expenditure items on labour productivity. For instance, an earlier study by Ali (1985) uses traditional OLS estimates based on 1978 data on a sample of 65 developing and 15 industrialized economies and finds that an increase in adult literacy improves labour productivity. The author concludes that education is an investment in human capital that raises labour productivity through skills provision, facilitating innovation and enhancing labour mobility. Knight and Sabot (1987) take Tanzania (lower educational quality) and Kenya (higher educational quality) as a natural experiment to examine the impact of educational policy on labour productivity within a growth output accounting framework in the 1980s. The authors find that if the Tanzanian government had implemented educational investment policies similar to Kenya's, this would have increased the quality and quantity of Tanzania's secondary education, which in turn would have increased productivity and average earnings. Aschauer (1989) examines the impact of disaggregated public expenditure items for the US economy between 1949 and 1985 using ordinary least squares (OLS) and two-staged least squares (2SLS) estimates and finds that (i) non-military expenditure is more productive than military expenditure, and (ii) among the non-military components, it is infrastructure spending which exerts the most positive influence on productivity. Hansson and Henrekson (1994) also use OLS to investigate the impact of government transfers, government consumption, government investment and education spending on labour productivity for 14 OECD countries between 1970 and 1987. The authors find that whilst government transfers, consumption and outlays have negative effects on labour productivity, education expenditure has a positive effect, whereas government investment has no effect on productivity.
More recent studies include that of Najarzadeh et al. (2014), who estimated Panel OLS (POLS) and Generalized Method of Moments (GMM) regressions to examine the impact of ICT (internet usage) and public expenditure on education on labour productivity for a sample of 108 countries between 1995 and 2010, and found that both variables have a positive and significant impact on labour productivity. Wei et al. (2018) conducted a study on China's prefecture-level cities between 2007 and 2013 and, using two-way fixed effects (FE) estimators, found that health expenditure improves labour productivity in agricultural and non-agricultural sectors by improving people's cognitive abilities, although this relationship turns negative in regions with poor infrastructure. Applying true random effects (TRE) estimators to the stochastic production frontier technique, Auci et al. (2021) investigate the impact of disaggregated public expenditures (public services; economic affairs; social protection; recreation, culture and religion; public order and safety; education; health; housing; environmental protection and defence) for 15 European countries between 1996 and 2014, and find that whilst education and health expenditures exert a positive effect on productivity, the remaining classes of expenditure exert a negative effect. Fedotenkov et al. (2021) investigated the impact of 10 disaggregated classes of public expenditure (defence; economic affairs; education; environmental protection; health; housing; public order and safety; public services; recreation; social protection) on services labour productivity in 21 EU countries between 1996 and 2017 using two-way fixed effects and GMM estimators. For the business services sector, the authors find a negative impact of defence spending on productivity and a positive impact of public order and safety spending, whilst the rest of the expenditure items exert insignificant effects. For the industry sector, environmental protection exerts a negative effect on labour productivity whilst social protection produces a positive effect, and the remaining expenditure items produce insignificant estimates.
Taxation and labour productivity
Taxation is an integral part of any country's fiscal policy simply because a major part of government expenditure is financed through tax revenue. More specifically, taxation directly affects labour productivity by distorting prices and the allocation of factors of production; it also alters the rate of return expected from entrepreneurship, influences private investment decisions and reduces the incentive to supply labour (Stephan, 1975). For instance, income taxes reduce labour earnings and affect how much people are willing to work, whilst taxes on goods and services, as well as on businesses, can distort efficient production decisions. Moreover, since taxes change the decisions and behaviour of economic agents, they can further impact private sector productivity by reducing savings, investment, the supply of labour, entrepreneurship, and innovation (Rao et al., 2008).
Notably, a number of empirical studies have examined the impact of different tax categories on labour productivity. Thomas (1998) investigates the impact of labour taxes on the Swedish labour market between 1970 and 1996 using OLS and GMM estimates applied to a labour market model and finds that an increase in payroll taxes raises the cost of labour, lowers total labour hours and thus diminishes labour productivity. Vartìa (2008) studies the cross-sectional impact of taxes on productivity at industry level for manufacturing and business sectors in 16 OECD countries between 1983 and 2001 using OLS estimators. The author finds that whilst corporate and top income taxes adversely affect labour productivity, tax incentives for research and development (R&D) improve productivity, particularly in more profitable industries with more entrepreneurial and R&D activity. Ordonez (2014) calibrates a dynamic general equilibrium (DGE) model of occupational choice and capital accumulation for Mexico and finds that in the presence of limited tax enforcement, capital-labour ratios are lowered and unproductive entrepreneurs enter the market, causing a misallocation of resources to low-productivity firms, which then lowers output productivity. However, in the presence of complete tax enforcement these inefficiencies are removed and productivity is improved. Salotti and Trecroci (2016) examine the impact of fiscal deficits, expenditures, income tax, social tax, property tax and goods tax on labour productivity in 20 advanced and 80 emerging economies between 1970 and 2009 using FE and GMM estimators and find a negative impact of expenditure and all tax classifications on labour productivity. McPhail et al. (2018) investigate the impact of marginal tax rates on labour productivity across 48 US states between 1981 and 2015 by estimating a neoclassical growth model with OLS. The authors find that whilst the marginal property tax, sales tax and marginal tax on capital returns all exert negative effects on labour productivity, the effect of the labour tax on productivity is neutral. Davanzati and Giangrande (2020) recently studied the impact of labour market deregulation and taxation on labour productivity in Italy using an analytical model derived from a Marx-Kaldor framework and found that Italy's liberalization agenda has increased income inequality and lowered economic growth, and consequently labour productivity. The authors confirm that higher taxes, caused by increased debt, adversely affect labour productivity through a deterioration of the quality of the labour force. More recently, Peng et al. (2021) study the impact of the replacement of business tax with VAT in the Chinese services sector on firm-level productivity in the manufacturing sector using difference-in-differences estimators and find that the VAT tax reforms have a positive effect on productivity levels. The authors find that the positive effect operates through firm specialization and is more prominent in non-state-owned and labour-intensive firms.
Empirical framework
According to dynamic growth theory, the output of an economy, Y, can be modelled as a function of factor inputs, in particular capital (K) and labour (L), via a production function of the general form:

Y = A F(K, L) (1)

which, in per capita terms, can be represented as:

Y/L = A f(K/L) (2)

where Y/L is labour productivity and K/L is the capital-labour ratio. According to Hansson and Henrekson (1994), government activities, G, can affect output or labour productivity through the level of total factor productivity, A, such that:

A = A(G) (3)

Substituting equation (3) into (2) and log-linearizing the outcome, we can specify the following time series empirical function:

lg(Y/L)_t = α + β_1 lg(K/L)_t + β_2 lgG_t + β_3 lgπ_t + β_4 lgER_t + e_t (4)

where α and the β's are the intercept and regression coefficients, respectively, e_t is a well-behaved error term, and π_t and ER_t form the vector of control variables X_t, inclusive of inflation (π_t) and the exchange rate (ER_t), which are included in order to capture the effects of monetary policy (the South African Reserve Bank (SARB) currently practices inflation targeting as its policy mandate) and openness (the exchange rate plays a key role in international trade activity in financial and goods markets), respectively. To estimate the empirical regression (4), we make use of the Autoregressive Distributed Lag (ARDL) model of Pesaran et al. (2001) to model the short-run and long-run cointegration relationships between the time series. The ARDL framework is preferred due to its well-known empirical advantages, namely, i) its flexibility in accommodating a mixture of stationary and non-stationary series, ii) its suitability for small sample sizes, and iii) its ability to produce unbiased estimates of the long-run coefficients even if some of the regressors are endogenous (Pesaran et al., 2001). We specify our baseline ARDL model as:

∆lg(Y/L)_t = c_0 + c_1 lg(Y/L)_{t-1} + c_2 lg(K/L)_{t-1} + c_3 lgG_{t-1} + c_4 lgπ_{t-1} + c_5 lgER_{t-1} + Σ_i ψ_1i ∆lg(Y/L)_{t-i} + Σ_i ψ_2i ∆lg(K/L)_{t-i} + Σ_i ψ_3i ∆lgG_{t-i} + Σ_i ψ_4i ∆lgπ_{t-i} + Σ_i ψ_5i ∆lgER_{t-i} + ε_t (5)

where ∆ denotes the first difference, c_0 denotes a drift component, c_1 … c_5 and ψ_1 … ψ_5 are the regression coefficients, and ε_t denotes a white noise residual. Our modelling process consists of four interrelated steps. In the first step of the modelling process, we test for cointegration effects using the bounds testing procedure of Pesaran et al. (2001), which involves testing the null hypothesis of no long-run cointegration effects, i.e.
H_0: c_1 = c_2 = c_3 = c_4 = c_5 = 0

The test is based on an F-statistic whose critical values are non-standard; hence Pesaran et al. (2001) formulate lower I(0) bound and upper I(1) bound critical values, with ARDL cointegration effects being validated if the computed F-statistic exceeds the upper bound. In the second step of the modelling process, we use the coefficient estimates obtained from the ARDL regression (5) to compute the long-run parameters of regression (4) as β_1 = c_2/c_1, β_2 = c_3/c_1, β_3 = c_4/c_1 and β_4 = c_5/c_1, with the intercept α computed via backward substitution. In the third step of the modelling process, we model the short-run dynamics and error correction mechanism by using the residual from the estimated long-run regression to create the error correction term, i.e.

ECT_t = lg(Y/L)_t − α − β_1 lg(K/L)_t − β_2 lgG_t − β_3 lgπ_t − β_4 lgER_t

and derive the following specification:

∆lg(Y/L)_t = c_0 + γ ECT_{t-1} + Σ_i ψ_1i ∆lg(Y/L)_{t-i} + Σ_i ψ_2i ∆lg(K/L)_{t-i} + Σ_i ψ_3i ∆lgG_{t-i} + Σ_i ψ_4i ∆lgπ_{t-i} + Σ_i ψ_5i ∆lgER_{t-i} + ε_t (6)

where the coefficient γ measures the speed of adjustment back to equilibrium after a "shock" to the system and is expected to be negative. Moreover, Pesaran et al. (2001) treat the t-statistic on the coefficient estimate of γ as an alternative test for cointegration within the ARDL model, with a significant (insignificant) coefficient indicating the presence (absence) of cointegration. In the final step of the modelling process we perform conventional diagnostic tests on the regression error terms, namely tests for residual normality, serial correlation, heteroscedasticity, correct functional form and regression stability.
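The paper does not state which software was used for estimation. As a rough illustration only, the lag selection and bounds-testing steps could be reproduced along the following lines with Python's statsmodels (version 0.13 or later); the column names (lg_prod, lg_kl, lg_g, lg_pi, lg_er) are hypothetical placeholders, and case 3 (unrestricted constant, no trend) is an assumption about the deterministic specification.

```python
# Minimal sketch of the ARDL bounds-testing workflow, assuming a DataFrame
# `df` with hypothetical log-level columns for the five series in equation (4).
from statsmodels.tsa.ardl import UECM, ardl_select_order

y = df["lg_prod"]                               # lg(Y/L)
X = df[["lg_kl", "lg_g", "lg_pi", "lg_er"]]     # lg(K/L), lgG, lg(pi), lgER

# Lag selection by information criterion (the paper settles on ARDL(1, 0, 0, 0, 0)).
sel = ardl_select_order(y, maxlag=2, exog=X, maxorder=2, trend="c", ic="aic")
print(sel.model.ardl_order)

# Bounds test on the unrestricted error-correction model of equation (5);
# order=1 keeps one difference of each regressor in the UECM.
uecm_res = UECM(y, lags=1, exog=X, order=1, trend="c").fit()
print(uecm_res.bounds_test(case=3))   # F-statistic vs. the I(0)/I(1) bounds
print(uecm_res.summary())             # level terms imply the long-run coefficients
```

The long-run ratios β_i = c_{i+1}/c_1 and the speed-of-adjustment term γ can then be read off, or recomputed, from the reported level and error-correction coefficients.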
Data description, descriptive statistics and pairwise correlations
For our empirical analysis, the study uses annual time series data sourced from the South African Reserve Bank (SARB) over the period 1990 to 2020. The main dependent variable is labour productivity. The main independent variables are government expenditure on education, defence, health, housing, social protection, public safety and order, and recreation, culture and religion, as well as the government revenue components, namely net tax, VAT, property tax, tax on international trade, fuel levies and income tax. The other control variables included are the capital-labour ratio, consumer price inflation and the real effective exchange rate (REER). A summary of the variables is provided in Table 1. Table 2 presents the summary statistics of the time series (Panel A) as well as the correlation matrix of the variables (Panel B). From the summary statistics we observe some stylized facts on public expenditure and revenue items, such as education, social protection and health having the highest averages among the expenditure items, whilst income tax and VAT are the leading revenue items. Moreover, the correlation matrix provides some evidence on the co-movements between the series and shows a negative correlation between labour productivity and i) defence and ii) education expenditures, whilst the remaining expenditure items and all revenue items are found to have a positive correlation with labour productivity. Nonetheless, we treat these findings as preliminary to our main empirical analysis and hence do not draw any inferences from them.
Unit root tests
Before estimating the ARDL model, we test for stationarity amongst the variables to establish the order of integration. Even though ARDL is flexible enough to deal with a combination of I(0) and I(1) variables, it is important to ensure that none of the time series is integrated of order I(2) or higher. The findings from the conventional ADF and PP unit root tests reported in Table 3 indicate that whilst most series contain a unit root in their levels, all variables are stationary in their first differences. These results allow us to proceed and carry out the ARDL modelling procedure.
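As a brief illustration of this pre-testing step (our own sketch; the paper does not name its software), ADF tests in levels and first differences could be run in Python as shown below. The column names are hypothetical, and a Phillips-Perron counterpart is not in statsmodels itself but is available in the separate arch package.

```python
# Sketch of unit root pre-testing: ADF in levels vs. first differences.
# H0 of the ADF test: the series contains a unit root.
from statsmodels.tsa.stattools import adfuller

def adf_report(series, name):
    for label, s in (("level", series.dropna()),
                     ("1st diff", series.diff().dropna())):
        stat, pval, *_ = adfuller(s, autolag="AIC")
        print(f"{name:8s} {label:9s} ADF={stat:7.3f}  p={pval:.3f}")

for col in ["lg_prod", "lg_kl", "lg_g", "lg_pi", "lg_er"]:  # hypothetical names
    adf_report(df[col], col)

# A Phillips-Perron test is available as arch.unitroot.PhillipsPerron.
```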
Co-integration test
As a first step in the modelling process we perform bounds tests on the selected ARDL regressions. Since we have seven expenditure items and six revenue items, we have a total of 13 regressions to work with. The modified AIC and SC information criteria both advocate an ARDL(1, 0, 0, 0, 0) specification as the optimal lag selection for all regressions. The F-statistics obtained from the bounds test for cointegration for the 13 regressions are reported in Table 4. Note that all reported statistics are above the upper 5% critical bound, which implies that the null hypothesis of no ARDL cointegration is rejected. This allows us to proceed to provide long-run and short-run estimates for the 13 regressions.
Government expenditure and labour productivity estimates
Having validated ARDL cointegration effects, we proceed to estimate the ARDL(1, 0, 0, 0, 0) regressions for the 7 expenditure items, with the results reported in columns (1)-(7) of Table 5. From the long-run estimates presented in Panel A of Table 5, we observe, in order of magnitude (strongest to weakest), that the education, health, recreation and public safety expenditure items all produce positive and statistically significant coefficient estimates, whilst those for defence, housing and social protection are statistically insignificant. From the short-run estimates presented in Panel B of Table 5, only the education, health and recreation spending items are positively and significantly related to productivity, whilst the remaining expenditure items (i.e. defence, housing, social protection and public safety) produce insignificant coefficient estimates.
Our findings are comparable to those in the previous literature. For instance, Hansson and Henrekson (1994), Auci et al. (2021) and Fedotenkov et al. (2021) similarly find positive effects of health and education expenditures on labour productivity, whilst finding insignificant effects for the remaining expenditure items. Moreover, Aschauer (1989) also found similarly insignificant effects of defence or military spending on productivity levels. Further note that over both the long-run and the short-run the coefficient estimates on the capital-labour ratio and real effective exchange rate are generally positive whilst those for inflation are negative, and the signs on these variables concur with those predicted by growth theory and empirical evidence (Barro, 1990; Schmoller & Spitzer, 2021). Moreover, the negative and statistically significant error correction terms provide further confirmation of significant cointegration effects and give additional information on the speed of reversion back to equilibrium after a shock, with safety and order having the highest reversion speed, followed by health and education.
Government revenues and labour productivity estimates
We now present the estimated ARDL(1, 0, 0, 0, 0) regressions for the 6 revenue items and report the results in columns (1)-(6) of Table 6. From the long-run estimates reported in Panel A of Table 6, we observe positive and statistically significant estimates on income tax and VAT, whilst the remaining revenue items produce insignificant coefficient estimates. These results are comparable to those obtained by Ordonez (2014) and McPhail et al. (2018), who similarly find a positive impact of labour tax on labour productivity, whilst Peng et al. (2021) also find a positive effect of VAT on productivity. Conversely, the short-run estimates reported in Panel B of Table 6 inform us that VAT, SACU revenues, income tax and property tax all have negative and statistically significant effects on labour productivity, whilst the short-run coefficients for international trade taxes and fuel levies remain statistically insignificant. These latter results are more similar to those found in previous literature, such as Vartìa (2008), Salotti and Trecroci (2016), McPhail et al. (2018), and Davanzati and Giangrande (2020), who find a negative impact of income tax, property tax and capital gains tax on labour productivity.
The signs on the coefficient estimates for the control variables are similar to those obtained for the "expenditure items" regressions reported in the previous section of the paper. In this regard, Panels A and B of Table 6 generally present positive long-run and short-run estimates on the capital-labour ratio and real effective exchange rate, whilst a negative coefficient dominates for the inflation variable. Moreover, the error correction terms reported in Panel B of Table 6 produce negative and statistically significant estimates and further reveal that income and property taxes exert the quickest equilibrium correction behaviour, whilst the remaining revenue items display slow reversion rates.
Diagnostic tests
As a final step in our modelling process, we perform residual diagnostic tests and tests for correct functional form and regression stability on each of the estimated regressions. Firstly, we test for normality in the residuals using the Jarque-Bera (J-B) test. Secondly, we check for serial correlation in the residuals by performing the Breusch Godfrey (B-G) LM test. Thirdly, we test for heteroscedasticity effects by performing ARCH tests. Fourthly, we test for correct functional form by performing Ramsey's RESET test. Lastly, we check for reliability and stability of the model using CUSUM and CUSUM of squares.
Panel A of Table 7 reports the p-values of the diagnostic tests on the 7 "expenditure" regressions whilst Panel B of Table 7 reports those for the "revenue items" regressions. Judging by the reported p-values, all estimated regressions fail to reject the null hypotheses of normality, no serial correlation, homoscedasticity and correct functional form. Moreover, the CUSUM and CUSUM of squares plots indicate regression stability in all estimated functions. All in all, the findings from our diagnostic tests allow us to interpret our empirical results with confidence, as they do not violate the classical regression assumptions.
Conclusions
This paper investigates the impact of fiscal expenditures and revenue collections on labour productivity for the South African economy between 1990 and 2020. To this end, we estimated a log-linearized production function augmented with a government sector using an ARDL model and further disaggregated our measures of public spending and revenues into separate categories. For expenditure items, we focus on health, education, defence, housing, social protection, public safety and order, and recreation, culture and religion, whilst for revenue components, we focus on income tax, VAT, tax on trade, fuel levies, SACU income receipts and property taxes.
Our empirical findings can be summarized in two points. Firstly, for expenditure items, education, health and recreation are the only items which have positive and significant effects on labour productivity over both the long-run and the short-run (with public safety significant over the long-run only), whereas the remaining expenditure items have no significant effects on productivity. Secondly, for revenue items, most items, such as VAT, SACU revenues, income tax and property tax, exert an adverse effect on productivity over the short-run, whilst over the long-run income taxes and VAT exert positive effects on labour productivity and the remaining revenue items have no significant impact.
Overall, our findings have important implications for the policy debate. For instance, on the expenditure side, there have been proposals for a national health insurance (NHI) as a means of increasing access of the poor to quality health services. Our results show that the increased health expenditures embedded in such policies could have positive effects on labour productivity. On the other hand, our results further imply that the decreases in education and public safety expenditures which have occurred since 2016 are likely to adversely impact output productivity, whereas the observed decreases in the military budget are unlikely to affect labour productivity. On the revenue side, our findings suggest that only income taxes and VAT increase productivity over the long-run, whilst other wealth-related taxes, such as taxes on property, do not have a significant impact on productivity. We treat this as evidence in favour of the recently proposed wealth tax as a supplement to income taxes to improve the efficiency of fiscal revenue collection. Our results indicate that if wealth taxes are designed as an extension of income taxes, this would have positive spillover effects on labour productivity over the long-run.
Notwithstanding the contribution of our study to the literature, one aspect of the "government size-labour productivity" debate which our study has not addressed is the possibility of nonlinear relationships between the expenditure and revenue items and productivity. As theoretically postulated by the BARS curve, it is possible that the relationship between government size and output productivity is not linear but either concave or convex, with the turning points being considered optimal points at which fiscal policymakers should strive to keep fiscal size. Future studies could focus on identifying possible optimal points for expenditure and revenue items using more specialized nonlinear econometric techniques.
Effect of contact incubation on stress, behavior and body composition in the precocial Red jungle fowl
Birds use contact incubation to warm their eggs above ambient temperature to the level required for embryonic development. In contrast, eggs in the poultry industry, as well as in many breeding programs and scientific studies, are incubated in conventional incubators that warm eggs via circulating warm air. This means that contact incubated eggs have different thermal properties than eggs incubated in a conventional incubator. In light of previous studies showing that small differences in incubation temperature can affect chicks' post-hatch phenotype, we investigated the consequences of incubating Red jungle fowl eggs at the same temperature (37 °C) either via contact incubation or warm air incubation. We found that contact incubated chicks had a more robust body composition, were more explorative and had a higher temperature preference early in life, as well as a sex-dependent difference in corticosterone levels pre-hatch (measured in down feathers) and post-hatch (measured in plasma), compared to chicks incubated in a conventional warm air incubator. While previous studies have demonstrated that embryonic development and post-hatch phenotype are sensitive to small variations in temperature, our study demonstrates for the first time that the way heat is delivered to the egg has an effect of similar magnitude on post-hatch phenotype, and highlights the sensitivity of the incubation period in shaping birds' post-hatch phenotype.
Introduction
Variation in the pre-hatch environment can have profound effects on birds' post-hatch phenotype and thereby overall fitness (for reviews see Henriksen et al., 2011 and DuRant et al., 2013). The pre-hatch environment can be subdivided into two distinct components. Firstly, the composition of the egg determines the amount and quality of nutrition available during pre-hatch growth (Williams, 1994), and secondly, the conditions under which the egg is incubated determine if and how fast pre-hatch development will proceed (Deeming and Ferguson, 1991). The majority of research on long-term effects of the pre-hatch environment has focussed on effects of alterations in the composition of the egg (Willems et al., 2016; for reviews see Henriksen et al., 2011, Groothuis et al., 2005 and Dixon et al., 2016), and only more recently has it become evident that the incubation conditions under which the embryo develops are not only important for hatching success but also influence the bird's post-hatch phenotype (DuRant et al., 2013). This line of research is, however, still scarce.
To develop properly, bird embryos must maintain a high body temperature during pre-hatch growth. They do not generate sufficient heat themselves to manage this and must rely on heat from one of the parents delivered through a specialized patch of skin on the parent's breast known as the brood patch (Deeming, 2002), which the incubating parent bird presses up against the egg. In the poultry industry and other breeding programs birds do not incubate eggs themselves. Instead, eggs are placed in forced draft (FD) incubators that maintain a high ambient air temperature (for chickens, 37 °C) throughout incubation. Eggs placed in conventional FD incubators have a uniform temperature, while eggs warmed by a brood patch have a substantial internal temperature gradient, from the warm patch through the egg (Turner, 1994a, 1994b). This means that an egg has very different thermal properties depending on whether it is being warmed in a conventional incubator or by a brood patch. The common notion that the embryo is a mere passive recipient of heat from the parent (or incubator), and at most contributes heat as it grows, is too simplified. The embryo has striking physiological capabilities for managing the flow of heat into its egg, most notably through the developing embryonic circulation of blood (Turner, 1997), and during the first week of incubation the chicken embryo is able to perform thermo-regulative behavior by moving within the egg to more optimal temperature locations (Li et al., 2014). This means that an embryo is, to some degree, able to redistribute its blood flow, or move itself, to optimize its temperature exposure during incubation. However, these capabilities emerge only when there is a thermal gradient within the egg, which occurs during contact incubation but not when an egg is incubated in an FD incubator (Turner, 1997; Li et al., 2014). During incubation the heat production of the embryo increases daily, thereby increasing the temperature of the egg. In an FD incubator, where the surrounding temperature is high, excess heat is not always lost to the environment and embryos are therefore at risk of overheating during development, which can lead to reduced post-hatch condition (Wineland et al., 2000a, 2000b). To what extent conventional FD incubators influence embryonic development and post-hatch phenotype in a way that differs from natural contact incubation has yet to be investigated. Research on incubation temperature in both precocial and altricial birds has demonstrated that differences in incubation temperature of only 1-2 °C can influence embryonic development, leading to alterations in early post-hatch body composition, stress sensitivity and mobility (Hepp et al., 2006; Olsen et al., 2008; DuRant et al., 2010; Nord and Nilsson, 2011). Given the very different temperature profiles of an egg warmed uniformly by surrounding warm air in an FD incubator and a contact incubated egg, it is likely that similar differences in embryonic growth and post-hatch phenotypic traits will be evident between these two types of incubation condition.
In this study, we investigate whether contact incubation affects the pre-hatch development, and thereby the post-hatch phenotype, of precocial birds differently than conventional FD incubation, using the Red jungle fowl as a model species. The Red jungle fowl (RJF) is the wild progenitor of the domesticated chicken and in the wild incubates its eggs in a nest on the ground (Collias and Collias, 1967). A female will lay an egg every day until she has a clutch of around 6-10 eggs, and once the last egg is laid she will incubate continuously until the chicks hatch. During the incubation period, the female only leaves the nest for 0.5-1.0 h every 1 or 2 days (Sherry et al., 1980). The eggs hatch (asynchronously) after 19-20 days of incubation, over a period of 7-33 h (Meijer and Siemers, 1993). The Red jungle fowl has similar requirements during incubation to the domesticated chicken regarding temperature and humidity, but has not been incubated in FD incubators for as many generations as domestic chickens, thereby limiting potential adaptation to this type of incubation. In turkeys and chickens, incubation temperature has been reported to influence thermoregulation, post-hatch growth and hatchling morphology (Hulet et al., 2007; Nichelmann and Tzschentke, 2002), while in wild birds, incubation temperature has been reported to influence HPA-axis sensitivity and thermoregulation (DuRant et al., 2013). We therefore chose to focus on these traits, since they all affect young birds' ability to survive and cope with their environment. Additionally, we also measured the birds' fearfulness and cognitive ability, as well as their general behavior in an undisturbed environment, to gain insight into variation in coping style. Finally, to assess any difference in pre-hatch stress levels between the two incubation environments, we measured corticosterone (CORT) in the down feathers of the newly hatched chicks. CORT has previously been measured in bird feathers and used as an indicator of stressful conditions in the post-hatch environment (Harms et al., 2010), but to our knowledge this is the first study to measure hormones in down feathers.
Ethical note
This study was approved by Linköping Council for Ethical Licensing of Animal Experiments, ethical permit no 122-10.
Animals and housing
We used one-year-old Red jungle fowl females (n = 11) from a captive, pedigree-bred population kept at Linköping University, Sweden. This population was kept in the facility for the purpose of a breeding program as part of ongoing research in behavior genetics and has not been bred or used for commercial purposes; its behavior is therefore similar to that observed in wild Red jungle fowl. Full details of animal housing and husbandry systems are given elsewhere (Campler et al., 2009).
Incubators and design
Using a newly developed incubator (Brinsea Contaq Z6, https://www.incubators.org/brinsea-contaq-z6-incubator.html), specifically designed to mimic contact incubation by pressing artificial skin inflated by warm air down on top of the egg (mimicking a bird's brood patch, see Supplementary Fig. 1), we investigated the consequences of incubating Red jungle fowl eggs in warm air (FD incubator, Masalles Mod. 25-l HLC, http://www.masalles.com) versus via contact incubation (Brinsea Contaq Z6, see Supplementary Fig. 2). Using a split-brood design, we allocated sibling eggs collected within the same week from the same RJF hens (n = 11) to either the FD incubator or the contact incubator.
Egg measurements during incubation
Eggs were stored at 13 °C for up to 1 week until they were all placed in one of the preheated incubators. The FD incubator's temperature was 37 °C and the humidity was held at 58%. The contact incubator's contact zone was 37 °C and the humidity was 45%. The eggs were turned automatically every 6 h in both incubators. After 18 days of incubation, all eggs were placed into a new FD hatcher with cameras and separated into individual glass containers to record the exact hatching time of each chick, and the humidity was raised to 80% in the hatcher. All eggs were weighed before (day 0) and during incubation (day 7 and day 14). The exact hatching time of all chicks was recorded using video cameras installed in the incubators.
Hatchlings handling and down feather sampling
The hatchlings were removed from the incubator and wing-tagged as soon as their feathers were completely dry, within approximately 18 h of hatching. Between 8 and 15 mg of down feathers were sampled from each chick as soon as it was removed from the incubator. This was done by cutting off a small section of down feathers from the back of the neck. The cut was made ½ cm above the skin, leaving behind the lower part of each down feather. The down feathers were stored in plastic bags for up to 6 months in a −80 °C freezer. The offspring were raised in pens measuring 70 × 77 cm, in groups of 11-12. All pens were equipped with fresh water and food ad libitum. The ambient temperature in the room was kept at 21 °C and during the first 2 weeks of life the chicks had access to heating lamps.
Growth
Chicks were weighed to the nearest 0.01 g on day 0 (hatching day), day 5, day 11 and day 19, and at 4, 5 and 6 weeks of age. Their tarsus length, from the hock of the bent leg to the joint of the back toe, was measured with a slide caliper to acquire information about the birds' structural size when the birds were 0 and 5 days old and at 6 weeks of age. In order to explore whether there was an overall difference in body composition between the two groups, we estimated body condition from the mass and tarsus length data for each individual using Peig and Green's (2009) scaled mass index (SMI). This index accounts for the covariation between body size and body mass by standardizing body mass at a fixed value of a linear body measurement (tarsus length), based on the scaling relationship between mass and length. The index is calculated as SMI_i = M_i × (L0 / L_i)^bSMA, where M_i and L_i are the body mass and the linear body measurement of individual i, respectively, bSMA is the scaling exponent estimated by the standardized major axis (SMA) regression of M on L, and L0 is the arithmetic mean tarsus length for the study population.
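For readers who wish to reproduce the index, a minimal sketch of the SMI computation is given below (our own illustration, not the study's code). It uses the standard shortcut that the SMA slope equals the OLS slope divided by the Pearson correlation, applied on the log-log scale.

```python
# Sketch of Peig & Green's (2009) scaled mass index: SMI_i = M_i * (L0/L_i)**b_SMA.
import numpy as np
from scipy import stats

def scaled_mass_index(mass, tarsus):
    mass, tarsus = np.asarray(mass, float), np.asarray(tarsus, float)
    ols = stats.linregress(np.log(tarsus), np.log(mass))
    b_sma = ols.slope / ols.rvalue      # SMA slope = OLS slope / Pearson r
    L0 = tarsus.mean()                  # population mean tarsus length
    return mass * (L0 / tarsus) ** b_sma
```

Each chick's mass is thereby rescaled to the mass it would be expected to have at the mean tarsus length, so a higher SMI indicates relatively more mass for a given structural size.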
Temperature preference
Early in life chickens have not yet developed the mechanisms necessary to maintain a constant body temperature, and their main source of heat is the mother. Their main thermoregulatory mechanism is to seek heat (the mother) when they experience a drop in temperature. The chicks' temperature preference was measured at 3 days of age by placing them in an 80 cm × 20 cm arena with a temperature gradient that gradually increased from 25 °C at one end of the arena to 40 °C at the other end (see Fig. 3 in Supplementary material). The birds were placed individually at the 25 °C end of the arena, after which their position was noted every 15 s for 6 min, to determine whether there was any difference in temperature preference between FD and contact incubated chicks.
Undisturbed explorative behavior
Explorative behavior consists of a range of behavioral acts, but combined they are all concerned with gathering information about the environment. At 8-9 days of age the chicks' explorative behavior was measured in a novel, undisturbed arena (see Fig. 4 in Supplementary material). The chicks were placed in pairs (from the same treatment group) in an 80 × 80 cm arena with access to food, water, shade and different levels of elevation. The chicks were tested in pairs to minimize anxiety in the novel environment, and the observer was blind to the chicks' treatment and not visible to the chicks during testing. During the testing period, it was noted how long (seconds) it took the birds to leave the 'start zone', where they were placed at the start of the testing period. Additionally, their location (zones, see Fig. 4 in Supplementary material) and whether they were active (moving or standing) or lying down were recorded by scan sampling every 15 s.
Fear test
The birds' fear level was assessed at 4 weeks of age using an emergence test (Jones et al., 1991). Emergence from a dark box into a lighted compartment or arena has been successfully used to measure fear in domestic chicks (Jones, 1979), under the assumption that more fearful or timid birds will show longer emergence latencies. The birds were tested individually by placing them in a dark box measuring 30 × 20 cm with a sliding door. The box was placed in a lighted room, the door was closed, and a 2 min acclimatization period was allowed before the sliding door was raised. The latencies from raising the door until the chick a) put its head through the opening and b) moved its entire body out of the box were recorded.
Cognition
To test whether there was any difference in cognitive ability between the 2 treatment groups, we performed a simple visual associative learning task. In the literature, visual discrimination is broadly defined as learning to pick one kind of visual stimulus over another. Chickens have good colour vision (Osorio et al., 1999) and we therefore based the test on the birds' ability to discriminate between the colours blue and yellow. The whole testing period took place when the birds were between 12 and 24 days old and had 3 components: 1) learning, 2) memory and 3) reversal learning, with reversal learning referring to the adaptation of behavior according to changes in the stimulus-reward contingency (see Fig. 5 in Supplementary material).
All birds were hand-fed mealworms on several occasions from the age of 2 days and all birds were very eager to eat mealworms. When the birds were 12 days old (day 1 of the test) they were individually presented with 2 bowls (one blue and one yellow). Four times, a mealworm was placed in one of the bowls in front of them. For half of the birds, the mealworm was always placed in the blue bowl; for the other half it was placed in the yellow bowl. On day 2 the birds' learning ability was tested by placing them in a 20 × 30 cm arena, where both the blue and the yellow bowl were attached to one of the walls, at a height low enough for the birds to peek into if they stretched their necks but too high for the birds to see whether the bowl contained a mealworm. Each bird was tested twice (with 2 mealworms) and the position of the bowls was switched between tests. When a bird picked (pecked at) the right bowl, a mealworm was placed in that bowl. The duration until the bird made the right choice and the number of failed attempts were recorded.
On day 6 the birds were tested again following the same procedure as above to test their ability to remember (memory) the right bowl colour.
On day 7 the 2 bowls were placed in front of each bird and they were given 4 mealworms each, similarly to day 1. However, this time the mealworm was placed in the opposite (opposite-coloured) bowl to that of day 1. This was done to test the birds' reversal learning. On the following 4 days (days 8 to 11) the birds were tested in the same way and in the same arena as on days 2 and 6. However, this time the birds were rewarded when they picked the bowl in which the mealworms had been placed on day 7. The reward colour (bowl colour) was balanced over treatment.
HPA-axis sensitivity
To test the birds' HPA-axis sensitivity post-hatch, all birds underwent a stress test to assess the reactivity of their HPA-axis at 7 weeks of age. This was done by quantifying the CORT response to a standard stressor (the bag protocol or capture stress protocol, Wingfield et al., 1992). Birds were blood sampled from the wing veins, and baseline samples were obtained within 3 min after the person entered the room. After a blood sample was collected, each bird was placed in a cloth bag that allowed light to penetrate, in order to avoid a calming effect of darkness. The birds were blood sampled again 10 min and 30 min after being placed in the bag and were returned to their pen after the last sampling. Blood was collected in EDTA-coated tubes, kept on ice, centrifuged (800 g for 5 min) within 2 h of sampling and then stored at −20 °C until further analysis (see below). CORT secretion was calculated as the area under the total response curve (see Fig. 3) using the trapezoid formulas AUCg and AUCi (Area Under the Curve, g = ground, i = increase) according to Pruessner et al. (2003), with AUCg representing the total amount of hormone produced over time with respect to a starting value of zero, thus not accounting for baseline levels of circulating hormone, and AUCi characterizing the sensitivity of the HPA axis by evaluating the amount of hormone produced above the starting baseline level.
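As an illustration of the two summary measures (our own sketch of the Pruessner et al. (2003) trapezoid formulas, not code from the study itself, with made-up CORT values), the computation for one bird sampled at 0, 10 and 30 min could look as follows:

```python
# Trapezoidal AUC measures for a stress-response curve (Pruessner et al., 2003).
import numpy as np

def cort_auc(times_min, cort):
    times, cort = np.asarray(times_min, float), np.asarray(cort, float)
    auc_g = np.trapz(cort, times)                      # total output vs. zero
    auc_i = auc_g - cort[0] * (times[-1] - times[0])   # output above baseline
    return auc_g, auc_i

# Hypothetical example: baseline, 10-min and 30-min CORT concentrations (ng/ml)
auc_g, auc_i = cort_auc([0, 10, 30], [4.0, 18.0, 11.0])   # -> 400.0, 280.0
```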
Corticosterone measurements in blood and feathers
All down feather samples were weighed to the nearest 0.1 mg and extracted twice. For the first extraction (based on a protocol by Bortolotti et al., 2008), 1 ml methanol was added to each sample, and the samples were then placed in a sonicating water bath for 30 min at room temperature before being incubated at 50 °C overnight in a shaker. The next morning the samples were centrifuged for 10 min and the methanol extract (~0.8 ml) was transferred to new tubes. The new tubes were placed in a SpeedVac vacuum concentrator until all the methanol had evaporated. Once the methanol had evaporated, the remaining pellet in each tube was dissolved in 250 µl assay buffer (from the CORT ELISA kit, see below). For the second extraction, a metal bead was placed in the tubes with the already extracted down feathers, after which the tubes were dropped in liquid nitrogen for 2 min. Immediately thereafter the tubes were placed in a TissueLyser (Qiagen TissueLyser II) at 23 Hz for 2 min and then dropped in liquid nitrogen again to repeat the procedure. Then 1 ml methanol was added to each sample and left overnight at room temperature on a shaker. The next morning all samples were centrifuged and the methanol extract (~0.8 ml) was transferred to new tubes. The new tubes were placed in a SpeedVac until all the methanol had evaporated. Once the methanol had evaporated, the remaining pellet in each tube was dissolved in 250 µl assay buffer.
The concentrations of CORT in the feather samples and the plasma samples (from the stress test) were determined using a commercial CORT enzyme-linked immunosorbent assay (ELISA) kit (Enzo Life Sciences, NY, USA). All samples were run in duplicate following the standard protocol (see online manual http://www.enzolifesciences.com/ADI-900-097/corticosterone-eia-kit/). Inter-assay and intra-assay coefficients of variation were 7.2% and 9.2%, respectively, for the plasma analyses, and 9.4% and 6.4% for the feather analyses.
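For completeness, precision measures of this kind can be computed from the duplicate readings; the short sketch below (our own illustration, not the authors' procedure) derives an intra-assay CV as the mean per-sample %CV across duplicate wells and an inter-assay CV from a control sample run on every plate.

```python
# Sketch of assay precision measures: %CV = SD / mean * 100.
import numpy as np

def intra_assay_cv(duplicates):
    """duplicates: (n_samples, 2) array of duplicate well readings."""
    d = np.asarray(duplicates, float)
    cv = d.std(axis=1, ddof=1) / d.mean(axis=1) * 100
    return cv.mean()

def inter_assay_cv(control_per_plate):
    """control_per_plate: mean control reading from each plate."""
    c = np.asarray(control_per_plate, float)
    return c.std(ddof=1) / c.mean() * 100
```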
Statistics
Statistical analyses were performed in SPSS version 22. One-way ANOVA was used to determine between-incubation-type differences in egg mass variation during incubation and in incubation duration until hatching, and for the exploration test, where chicks were tested in same-treatment pairs. Effects of incubation type on hatchling size and post-hatch growth were analysed using a factorial ANOVA with sex and treatment in the model. The same factorial ANOVA was also used to test for effects of incubation treatment on fearfulness, down-feather CORT concentration and time to solve the cognitive tests. Temperature preference at 3 days of age was analysed using a mixed repeated-measures ANOVA with treatment as the between-subject factor and time as the within-subject factor. A normal distribution could not be achieved for the cognitive test scores (number of wrong choices). Comparisons of cognitive scores between incubation types were therefore made via the Mann-Whitney U test. The statistical significance level was set at P < 0.05.
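The analyses were run in SPSS; as a rough open-source equivalent (a sketch assuming a long-format data table with hypothetical column names, not the authors' own code), two of the tests described above could be written as:

```python
# Python equivalents of two of the SPSS analyses described above.
import scipy.stats as ss
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Factorial ANOVA: treatment x sex on body mass (hypothetical columns in `df`).
fit = smf.ols("body_mass ~ C(treatment) * C(sex)", data=df).fit()
print(anova_lm(fit, typ=2))

# Mann-Whitney U test for the non-normal cognitive scores.
contact = df.loc[df["treatment"] == "contact", "wrong_choices"]
fd = df.loc[df["treatment"] == "FD", "wrong_choices"]
print(ss.mannwhitneyu(contact, fd))
```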
Egg mass and number
The 11 hens laid 62 eggs in total, and 31 eggs were placed in each incubator. After 3 days of incubation all eggs were candled to determine whether they were fertile; 28 eggs in the FD incubator and 27 eggs in the contact incubator were fertile. In total, 24 eggs hatched (10 males and 14 females) in the FD incubator and 22 hatched (11 males and 11 females) in the contact incubator. There was no overall difference in egg mass between the eggs placed in the FD incubator and the contact incubator (Day 0, see Table 1: F = 0.034, df = 1, P = 0.854). After a week in the incubators the eggs in the two treatments did not differ in mass (Day 7, see Table 1: F = 0.907, df = 1, P = 0.478). Two weeks after being placed in the incubators the contact incubated eggs had lost more mass than FD incubated eggs (Day 14, see Table 1: F = 4.439, df = 1, P = 0.041).
Body mass and growth
At 5 days of age, contact incubated chicks weighed significantly less (see Table 2: F = 4.755, df = 1, P = 0.035) and had a significantly shorter tarsus length (see Table 2: F = 10.526, df = 1, P = 0.002) than FD chicks. Contact incubated chicks had a significantly higher SMI than FD incubated chicks (Contact = 38.58 ± 0.62, FD = 36.11 ± 0.63: F = 6.445, df = 1, P = 0.015), indicating a more robust body composition. Contact incubated chicks continued to have a smaller body mass at 11 days of age (see Table 2: F = 4.847, df = 1, P = 0.033) and at 19 days of age (see Table 2: F = 6.259, df = 1, P = 0.016). After this age there was no longer any significant difference in body mass or tarsus length between the two treatment groups (P > 0.05). There was no significant interaction between treatment and sex on body mass or tarsus length (P > 0.05), but males were generally heavier than females from 4 weeks of age (P < 0.05).
HPA-axis sensitivity
With endocrinological data, it is often assumed that AUCg yields a measure more related to 'total hormonal output', whereas AUCi is more related to the sensitivity of the system. Total CORT output (AUCg) was significantly affected by the interaction of sex and incubation condition (F = 4.445, df = 1, P = 0.041), with contact incubated females having significantly higher total CORT output compared to FD incubated females and to both FD and contact incubated males (P < 0.05). There was, however, no significant effect of incubation condition on the sensitivity of the birds' HPA-axis (AUCi, F = 0.456, df = 1, P = 0.503), nor was there any significant difference between the sexes (F = 1.079, df = 1, P = 0.305) or a significant interaction between sex and incubation condition (F = 2.819, df = 1, P = 0.101) for AUCi.
Temperature preference
Both treatment groups moved towards warmer temperatures during the temperature preference test (treatment × time: F = 4.679, df = 5, P = 0.002). After 3 min, contact incubated chicks had moved to a warmer zone than FD incubated chicks (see Fig. 4: F = 8.995, df = 1, P = 0.005), and from 4 min until the end of the testing period at 6 min there was no effect of time, with neither of the treatment groups moving significantly towards warmer areas (P > 0.05).
Exploration behavior
Contact incubated chicks left the start zone and started exploring the arena significantly sooner than FD incubated chicks (see Table 3: F = 8.888, df = 1, P = 0.003). There was no difference in overall time spent active (F = 2.231, df = 1, P > 0.05), passive (F = 0.525, df = 1, P = 0.428) or lying (F = 1.441, df = 1, P = 0.699). Nor was there any difference between the treatment groups in time spent in the different zones (P > 0.05, see Table 3).
Table 3
Undisturbed behavior in the novel arena, tested in pairs at 8-9 days of age. Time (s) in the start zone (mean ± S.E.M.) until chicks started to explore the rest of the arena; scan counts of activity (mean ± S.E.M.) and of location (time in each zone of the novel arena, mean ± S.E.M.). P-values below 0.05 are indicated with an *.
Discussion
This study shows for the first time that contact incubation, mimicking natural parental incubation via a brood patch, leads to chicks with a different post-hatch body composition, an altered temperature preference and increased exploration behavior, as well as altered plasma CORT levels, compared to conventional FD incubated chicks. These are phenotypic alterations that all have the potential to affect how the chicks cope with their surrounding environment. To date, studies investigating the implications of incubation temperature for birds' phenotype have relied on FD incubators to test the effect of different incubation temperatures within the range of natural nest temperatures. These studies tend to find that the performance of chicks is lowest when temperature differs (being either slightly higher or lower) from the intermediate nest temperature, suggesting that small differences in incubation temperature by the parents can have a significant negative effect on the chicks' phenotype and possibly fitness. As discussed below, our results do not indicate that the alterations to the chicks' post-hatch phenotype merely reflect reduced incubation temperature during contact incubation, but instead demonstrate that the different thermal properties of a contact incubated egg versus an FD incubated egg (see Supplementary Fig. 2) affect the prenatal development, and thereby the post-hatch phenotype, of precocial Red jungle fowl.
One of the most notable effects of incubation condition found in this study was on the chicks' post-hatch growth. Although contact incubated eggs lost more mass during the first 2 weeks of incubation and chicks from these eggs hatched on average half a day later, contact incubated chicks did not weigh less than FD chicks at hatching, nor was there any difference in overall body composition (measured via the scaled mass index, SMI). This suggests that pre-hatch growth was slower for contact incubated chicks and that they therefore did not use up their pre-hatch nutrition as quickly as FD incubated chicks. At 5 days of age contact incubated chicks were significantly smaller than FD incubated chicks, both in body mass and in structural size (tarsus length). However, the SMI was significantly higher for contact incubated chicks than for FD incubated chicks, indicating that although contact incubated chicks were smaller at this age, they had a more robust body composition. Contact incubated chicks continued to have a smaller body mass than FD incubated chicks until 19 days of age, after which there was no difference in body mass or structural size between the 2 treatment groups, demonstrating that the effects of incubation on growth were transient and did not last more than a few weeks. Previous studies looking at the body composition of precocial birds incubated at reduced temperature (1 °C) have found that these birds are structurally larger but with fewer energy reserves (Hepp and Kennamer, 2012; DuRant et al., 2010), which is opposite to our finding that contact incubated chicks were structurally smaller than forced draft incubated chicks and more robust. This indicates that the effects we see on growth due to incubator conditions are not due to reduced temperature during contact incubation. For precocial birds, one of the most important traits influencing the survival of hatchlings is the early development of thermoregulatory ability (DuRant et al., 2013). In domesticated chickens, reduced incubation temperature (1-2 °C) has been reported to reduce the neonate's ability to thermoregulate (Black and Burggren, 2004a, 2004b). We tested whether incubation conditions would affect the chicks' thermoregulatory behavior when the birds were 3 days old. Although all control elements of the thermoregulatory system are functional at hatching in precocial birds, chickens are not fully homoeothermic until day 10 after hatching (Nichelmann and Tzschentke, 2002) and until then they are dependent on heat from the mother's body or from another heat source. We found that contact incubated chicks preferred a higher temperature at 3 days of age than FD incubated chicks. Although contact incubated chicks were smaller than FD incubated chicks, their SMI at hatching and at 5 days of age suggests that their body composition was similar to or more robust than that of FD incubated chicks, and it therefore seems unlikely that their body composition made them less cold tolerant. Precocial chicks are able to increase their own heat production immediately after hatching, with this ability increasing with age (Nichelmann and Tzschentke, 2002). Differences in heat production abilities or in the development of other thermoregulatory control elements, such as changes in cutaneous blood flow or growth of plumage, could explain the difference in heat preference between the 2 treatment groups, if these were more developed at 3 days of age in FD incubated chicks than in contact incubated chicks.
This could potentially signify a reduced survival chance for contact incubated chicks, since they might be more sensitive to a reduction in ambient temperature, and also because they might need to spend more time under the mother's brood patch instead of searching for food. However, it cannot be excluded that FD incubated chicks had the same temperature preference as contact incubated chicks but were simply slower at moving to this zone during the testing period (6 min) and would have reached the same preferred temperature as contact incubated chicks had the testing period been longer. Support for this last claim comes from the explorative behavior test, where contact incubated chicks left the start zone much faster to explore the rest of the arena than FD incubated chicks. In this test there was no difference in overall level of activity or overall explorative behavior, as both groups spent similar amounts of time in the different zones of the arena and showed the same level of activity. In the cognitive test contact incubated chicks were on average also faster at solving the task, although this difference did not reach significance. The faster initiative of contact incubated chicks did not result in them being better at solving the tasks, but it does indicate, together with their behavior in the exploration test, that the contact incubated chicks were less hesitant, potentially indicating a more proactive personality type (Cockrem, 2007).
The less hesitant behavior of contact incubated chicks did not seem to be caused by differences in fear level, as no difference was found between the two treatment groups when comparing their behavior in the fearfulness test or their CORT production during the stress test. There was no difference between groups in the sensitivity of the HPA axis evaluated by the amount of hormone produced above the starting baseline level (AUCi), but contact incubated females did have a significantly higher overall CORT production (AUCg). Corticosterone, and glucocorticoids in general, have many functions, ranging from regulation of glucose metabolism (McMahon et al., 1988) and feedback regulation of the immune system (Coutinho and Chapman, 2011) to multiple effects on fetal development, such as lung maturation (Lupien et al., 2009). It is therefore almost impossible to hypothesize about the cause and potential function of the higher CORT production in contact incubated females. Also, although the HPA-axis is fully functional at hatch in Red jungle fowl, it still goes through a maturation process during the initial weeks post-hatch, with decreasing CORT levels and responses to stressors (Ericsson and Jensen, 2016). The difference in overall CORT production at 7 weeks of age in the birds from this study could therefore also reflect a difference in the speed of maturation, and it is therefore not possible to conclude whether this difference was permanent (lasting beyond sexual maturity, at 4-5 months of age) or transient. Again, it seems unlikely that these effects are due to reduced incubation temperature during contact incubation, since reduced incubation temperature has been shown to increase baseline and stress-induced HPA-axis activity (DuRant et al., 2010) and to reduce mobility (Hopkins et al., 2011) in precocial birds.
Forced draft incubated males seemed to have a higher pre-hatch CORT production, as indicated by the higher CORT concentration in their down feathers. Feather CORT concentration has previously been linked to different environmental conditions in both adult birds and nestlings (Harms et al., 2010; Koren et al., 2012); however, this is the first study to measure down-feather CORT and link it to the pre-hatch environment. The significantly higher concentration of CORT in the feathers of FD incubated males could suggest that FD incubation was more stressful or energetically demanding than contact incubation, but only for the males.
Down-feather buds are visible on the chicken embryo from around embryonic day 10, and soon after this the feathers start to grow and continue growing until the end of incubation, with the most rapid growth occurring when the embryo is around 2 weeks old (Meyer and Baumgärtner, 1998). Down-feather growth therefore mainly occurs during the second half of incubation, when the risk of overheating increases for the embryo (Molenaar et al., 2010). The HPA-axis is functional in chickens around the 14th day of incubation (Jenkins and Porter, 2004), although the presence of CORT in the blood of chick embryos has been confirmed from around the 10th day of incubation (Jenkins and Porter, 2004). The ability of chicken embryos to activate their HPA-axis to cope with environmental factors therefore coincides with down-feather growth. Our discovery that CORT can be measured in down feathers could be of great importance for the field of pre-hatch stress (see Henriksen et al., 2011) in precocial birds, as a non-invasive way of measuring the impact of maternal stress during egg formation or parental stress during incubation.
The humidity set point for the force draft incubator in this study was based on the supplier's instructions (58%), whereas the contact incubator (being of a more open design, like a nest) stabilized at 45%. Previous studies have demonstrated that humidity can affect several traits in newly hatched chicks (Molenaar et al., 2010), and we therefore cannot exclude that the effects of incubation type in our study are partly due to differences in humidity. However, the methods used to alter humidity in previous studies, such as inlet of air or water, also indirectly influence the temperature of the egg close to the treatment area. In fact, it has been shown that in chickens, the effects of humidity variation on development and post-hatch phenotype disappear when egg temperature is kept constant (Van der Pol et al., 2013). This, together with the fact that nest humidity in Red jungle fowl has been reported to be between 38 and 41% (depending on the study; Rahn et al., 1977; Chattock, 1925; Koch and Steinke, 1944), indicates that variation in humidity might have a larger effect in force draft incubators than during contact incubation, where heat is transferred via contact as opposed to warm circulating air.
The fact that we found FD incubation to have significant effects on the birds' phenotype compared to contact incubation, even when both types of incubation were fixed at 37 °C, calls into question the use of FD incubators for testing the effects of different nest temperatures, since these incubators might not correctly mimic the effects of varying nest temperature and could thereby overestimate the effects of incubation temperature. It would be interesting in future studies to test different temperatures within the range of naturally occurring nest temperatures using the contact incubator, to see just how much the embryo can buffer potential effects of incubation temperature when contact incubated.
Conclusion
While slight differences in incubation temperature have previously been shown to have significant effects on chicks' post-hatch phenotype, the findings from this study demonstrate for the first time that the way heat is delivered to the egg can also significantly affect birds' post-hatch phenotype. Our findings add another factor to the growing field of pre-hatch environmental effects in birds, by demonstrating that contact incubation creates a different pre-hatch environment, and chicks with a significantly different phenotype, than conventional warm-air incubators. Additionally, our finding that CORT can be measured in down feathers, and that differences in CORT concentration between individuals can be related to the pre-hatch environment, provides a potentially useful tool for studying pre-hatch CORT production in future studies. | 8,803.2 | 2020-11-20T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Introducing the magnetic properties in Fe doped ZnO nanoparticles for spintronics application
A proper correlation among the microstructural, optical, and magnetic responses of Fe doped ZnO nanoparticles has been established in this work. All the Fe doped ZnO nanoparticles (Zn1-xFexO: x = 0.00, 0.05, 0.10 and 0.15) were prepared using a chemical co-precipitation route. Average crystallite sizes of 18 nm to 28 nm were estimated using Scherrer's formula. Compressive microstrain was detected in the pristine ZnO sample, which moved toward the tensile regime upon introducing Fe ions at different weight percentages. The mean crystallite size obtained from Scherrer's formula was found to be in close agreement with the particle size estimated from HRTEM images. Nearly spherical ZnO nanoparticles were seen in the HRTEM images, and negligible agglomeration among the particles was also observed. Direct optical band gaps in the range of 2.89 eV to 3.24 eV were estimated from Tauc plots. A clear ferromagnetic signature was also introduced into the non-magnetic ZnO nanoparticles at room temperature by the doping of Fe ions.
Introduction
In the last decade, crucial efforts have been dedicated to synthesizing and characterizing semiconductor materials doped with transition metals. Dilute magnetic semiconductor (DMS) materials show semiconducting as well as ferromagnetic properties at room temperature [1-4]. Transition metal (TM) doped semiconductor devices are used in the fields of quantum computation, storage and communication devices, and logic elements [5-8]. These materials are termed dilute magnetic semiconductors because a small proportion of transition metal can give rise to room-temperature ferromagnetism. Theoretically, the presence of stable ferromagnetism is predicted in wide-band-gap semiconducting materials [8-10]. The presence of room-temperature ferromagnetism in ZnO has attracted many researchers to this area. However, many of the reported results are questionable in nature, and the magnetic ordering in wide-band-gap DMS is often associated with defects and impurity phases; in a few cases, the absence of magnetic ordering has also been highlighted [11,12]. Synthesis methods play a major role in defining the magnetic ordering. Among these dilute magnetic semiconductor materials, ZnO has been identified as a potential candidate. ZnO is a chemically and thermally stable n-type semiconductor. TM doped ZnO is a promising candidate due to its wide band gap (3.3 eV), large exciton binding energy of around 60 meV, and high carrier density [12-14].
In the past few decades, researchers have tried to introduce magnetic signatures into various non-magnetic metal oxide semiconductors [15-17]. Partial doping of 3d transition metal ions, as well as 4f rare earth ions, into non-magnetic nanosized ZnO has shown a high possibility of producing magnetic properties at room temperature [12,16-18]. In the development of new-generation, non-toxic, advanced spintronics materials, transition metal (TM) ion doped ZnO has emerged as a promising candidate from the class of oxide-based diluted magnetic semiconductors. Zinc oxide has become the first choice of researchers due to its good piezoelectric effect, biomedical compatibility, and room-temperature ferromagnetism [12]. The past decade records the attention of researchers from different fields on this material due to its vast technical applications, such as in the fields of chemical sensors, UV detectors, short-wavelength semiconductor lasers, non-linear varistors, and semiconductor-based MEMS/NEMS technology [12,14]. Further enhancement of the electrical conductivity of pristine ZnO can be achieved by doping with selected elements such as Ga [19,20], Mn [17], and Al [21].
In this article, we present the synthesis and physical characterization of ultrafine, homogeneous Fe doped ZnO nanoparticles. The chemical co-precipitation method was used to prepare all the samples. A clear magnetic signature in the non-magnetic ZnO nanoparticles was recorded at room temperature due to the doping of Fe ions. A proper correlation among the structural, optical, and magnetic responses of the Fe doped ZnO nanoparticles has been established in this work.
Synthesis of nanoparticles
Fe doped ZnO nanoparticles with the generic formula Zn1-xFexO (x = 0.00, 0.05, 0.10 and 0.15) were fabricated using the standard chemical co-precipitation method [4,22]. The raw chemicals ZnCl2 and FeCl2 were used for synthesizing the nanoparticles. All reagents were purchased from Merck, with a purity level of 99.99% for ZnCl2 and 98% for FeCl2. All chemicals were used without further purification.
At first, all the glassware was washed with nitric acid, distilled water, and acetone, in that order, to ensure that no trace of impurity would contaminate the sample via the glassware. ZnCl2 and FeCl2 were dissolved in 200 ml of distilled water in stoichiometric ratio. 500 mg of PVP, which acts as a binding agent, was added to the solution. The solution was kept on a magnetic stirrer at 700 rpm for homogeneous mixing. The precipitating agent, NaOH solution, was added dropwise under continuous stirring at a constant rate until the pH of the solution reached 12, to ensure that no reactants remained unreacted. After that, the precipitate was washed several times with distilled water and ethanol until the pH was reduced to 7.
The precipitate was dried in open atmosphere and then ground into a fine powder [23]. The synthesized samples were labeled Fe-00, Fe-05, Fe-10 and Fe-15. The prepared samples were kept in separate containers and used for all the characterizations.
Characterizations
Confirmation of phase purity and formation of the hexagonal wurtzite structure was obtained from diffraction profiles recorded on a Rigaku Ultima IV x-ray diffractometer with the copper Kα line. Absorption spectra of all samples were recorded between 200 nm and 800 nm at 300 K using a Thermo-Scientific Evolution spectrophotometer. Magnetic characterization of all the Fe doped ZnO nanoparticles was carried out with a vibrating sample magnetometer (Quantum Design VSM) at room temperature [2,4]. Rietveld analysis of all the diffraction patterns was performed using the General Structure Analysis System (GSAS) with the EXPGUI interface. The obtained values of the cell parameters and refinement parameters are displayed in table 1. All the diffraction peaks were shaped using the pseudo-Voigt function (a superposition of Lorentzian and Gaussian functions). Estimated values of the reliability factors (Rwp and Rp) below 10%, together with a decent goodness of fit (χ²), verified good agreement between the observed and standard results [24]. The observed broadening of the diffractograms indicated that the prepared samples were in the nano range. The ratio of the lattice constants (c/a) was found to be close to 1.6 for all the samples, which also verified the formation of the hexagonal wurtzite structure, as seen in table 1.
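For reference, the pseudo-Voigt profile used in such refinements is a weighted sum of Lorentzian and Gaussian components; one common parameterization (the exact form implemented in GSAS may differ in detail) is:

$$pV(2\theta)=\eta\,L(2\theta)+(1-\eta)\,G(2\theta),\qquad 0\le\eta\le 1$$

where L and G share a common full width at half maximum and the mixing parameter η is refined along with the peak widths.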
Scherrer's formula
The average crystallite size was calculated from the full width at half maximum (FWHM) using the Scherrer equation [4]:

$$D = \frac{0.94\,\lambda}{\beta_d \cos\theta}$$

where λ is the wavelength of the Cu Kα line (1.5406 Å), D is the average crystallite size, β_d is the FWHM of the (101) peak, and θ is the Bragg angle. To separate size and strain contributions, the total broadening can be written as

$$\beta = \frac{K\lambda}{D\cos\theta} + 4\varepsilon\tan\theta \qquad (3)$$

where β is the full width at half maximum, K is the geometrical factor (0.9 for spherical particles), λ is the wavelength of the Cu Kα line (1.5406 Å), D is the mean crystallite size, ε is the microstrain, and θ is the Bragg angle. Rearranging equation (3), we get the well-known form [25]

$$\beta\cos\theta = \frac{0.9\,\lambda}{D} + 4\varepsilon\sin\theta \qquad (4)$$

Separation of the size and microstrain contributions to the total line width of the diffraction peaks was achieved by plotting βcosθ as a function of 4 sinθ, familiar as the Williamson-Hall (W-H) plot, as described in equation (4). The slope of this line gives the microstrain present inside the crystal, and the crystallite size was calculated from the intercept on the βcosθ axis [24,25]. The W-H plots of all the samples are presented in figure 2. The average crystallite size was found in the range of 14 nm to 35 nm, and the negative value of microstrain for the pristine ZnO sample indicated its compressive nature [24]. The estimated microstrain gradually increased and became tensile in nature with increasing Fe content, as seen from the W-H plots (figure 2).
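As an illustrative sketch of the size-strain separation described above, the Williamson-Hall analysis reduces to a straight-line fit of βcosθ against 4 sinθ. The peak positions and widths below are hypothetical placeholders, not measured values; β must be instrument-corrected and converted to radians before use.

```python
import numpy as np

K = 0.9            # geometrical (Scherrer) factor for spherical particles
lam = 1.5406e-10   # Cu K-alpha wavelength (m)

# Hypothetical Bragg angles (2-theta, deg) and corrected FWHM values (deg)
two_theta = np.array([31.8, 34.4, 36.3, 47.5, 56.6])
fwhm_deg = np.array([0.45, 0.44, 0.46, 0.52, 0.57])

theta = np.deg2rad(two_theta / 2.0)
beta = np.deg2rad(fwhm_deg)

# Williamson-Hall: beta*cos(theta) = K*lam/D + 4*eps*sin(theta)
x = 4.0 * np.sin(theta)
y = beta * np.cos(theta)
slope, intercept = np.polyfit(x, y, 1)

D = K * lam / intercept  # mean crystallite size (m), from the intercept
eps = slope              # microstrain (dimensionless), from the slope
print(f"D = {D * 1e9:.1f} nm, microstrain = {eps:.2e}")
```

For a single-peak Scherrer estimate, the same arrays give D = 0.94*lam / (beta * cos(theta)) directly, which is how values like those quoted in the abstract would be obtained.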
Halder-Wagner Method
It is well known that x-ray diffraction peaks are neither purely Gaussian nor purely Lorentzian but rather a convolution of both: the peak region of the pattern matches a Gaussian function well but the tail region does not, while the Lorentzian function matches well in the tail region but not in the peak region. The Halder-Wagner method overcomes this problem, as it assumes the peak broadening to be a symmetric Voigt function, which is a convolution of Gaussian and Lorentzian functions [27].
The plot of (β/tanθ)² (y-axis) against β/(tanθ·sinθ) (x-axis) is a straight line whose slope provides the average particle size and whose intercept provides the value of the microstrain [28].
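Written out explicitly (same symbols as above; the numerical prefactors vary slightly between references), the Halder-Wagner relation underlying this plot is commonly given as:

$$\left(\frac{\beta}{\tan\theta}\right)^{2}=\frac{K\lambda}{D}\cdot\frac{\beta}{\tan\theta\,\sin\theta}+16\,\varepsilon^{2}$$

so the slope Kλ/D yields the crystallite size D and the intercept 16ε² yields the microstrain ε.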
HRTEM image analysis
HRTEM images of the Fe-00 and Fe-10 samples are illustrated in figures 4(a) and 4(b), respectively. All the nanoparticles appeared to be nearly spherical in shape. Excellent homogeneity in both size and shape was achieved, as verified by the HRTEM micrographs [2,8]. An average particle size of 17±1 nm was obtained for the pristine ZnO (Fe-00) sample, which is a close match with the XRD results. Considerable agglomeration among the synthesized nanoparticles was observed, which may be attributed to van der Waals interactions [25].
UV-vis spectra studies
Room-temperature absorption spectra of all the Fe doped ZnO nanoparticles were collected within the range of 200 nm to 800 nm using UV-Vis spectroscopy. Bulk ZnO has a direct optical band gap at higher energy (≈3.3 eV) [29]. The direct band gap of the prepared samples can be obtained using the Tauc relationship [24]:

$$\alpha h\nu = B\,(h\nu - E_g)^{n}$$

where α is the absorption coefficient, B is an arbitrary constant, E_g is the optical band gap, and n is an index that takes the value ½ for a direct-band-gap and 2 for an indirect-band-gap semiconductor. The value of α is calculated from the general formulas [23,24]

$$A = \log\!\left(\frac{I_0}{I}\right) \qquad (8)$$

$$\alpha = \frac{2.303\,A}{t} \qquad (9)$$

where A is the absorbance and t indicates the thickness of the material. A graph of (αhν)² versus hν, well known as the Tauc plot, was plotted to obtain the direct optical band gap of all the samples [23].
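A minimal numerical sketch of this band-gap extraction is given below. The spectrum, gap, edge strength, and thickness are hypothetical placeholders (a mock absorption edge generated from the direct-gap Tauc law itself), and in practice the linear region of the Tauc plot is chosen by inspecting the curve rather than by the simple mask used here.

```python
import numpy as np

E = np.linspace(1.8, 4.0, 400)        # photon energy h*nu (eV)
Eg_true, C, t = 3.20, 1.5e4, 1e-4     # mock gap (eV), edge strength, thickness (cm)

# Mock direct-gap absorption edge: alpha*h*nu = C*sqrt(h*nu - Eg) above the gap
alpha_true = np.where(E > Eg_true, C * np.sqrt(np.clip(E - Eg_true, 0, None)) / E, 0.0)
A = alpha_true * t / 2.303            # absorbance a spectrometer would report

alpha = 2.303 * A / t                 # recover alpha from absorbance, eq. (9)
y = (alpha * E) ** 2                  # (alpha*h*nu)^2 is linear near a direct edge

# Fit the linear region above the edge and extrapolate (alpha*h*nu)^2 -> 0
mask = (y > 0.2 * y.max()) & (y < 0.8 * y.max())
m, b = np.polyfit(E[mask], y[mask], 1)
print(f"Estimated direct band gap: {-b / m:.2f} eV")  # ~3.20 eV for this mock data
```

The extrapolated intercept on the hν axis is the reported band gap; applying the same procedure to each sample's measured spectrum is what yields the 2.89-3.24 eV range quoted above.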
Magnetic studies
Figure 6 depicts the magnetic hysteresis (M-H) curves of all the samples, registered at room temperature. Due to the absence of unpaired 'd' electrons, pristine ZnO in bulk form exhibits diamagnetic behavior [30]. At the nanoscale, the existence of several defect states, together with a certain correlation among them, induces weak ferromagnetism or paramagnetism in the pure ZnO sample [12]. Defect-driven weak ferromagnetism or paramagnetism in pure ZnO nanoparticles has been reported in many articles [31-33]. The Fe doped samples showed a weak ferromagnetic nature at room temperature, as verified by the hysteresis loops.
Conclusion
In brief, we have successfully fabricated diluted magnetic semiconductor Zn1-xFexO (x = 0.00, 0.05, 0.10 and 0.15) nanoparticles via the chemical co-precipitation route.
Author contribution statement
All the authors contributed equally in this work.
Declaration of interests
Authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2,529.8 | 2021-02-09T00:00:00.000 | [
"Materials Science",
"Physics"
] |
AGILITY: THE NEED OF AN HOUR FOR SOFTWARE INDUSTRY
The need of the current software industry is changing rapidly, and the demand is ever increasing. The industry needs to provide solutions to this demand. As software development proceeds, factors such as requirements, needs, priorities, and the underlying technology may change. Thus, the development process must be highly dynamic, and a good software development methodology must adapt to these evolving and changing requirements. Traditional software development models are unable to handle such dynamic requirements. However, many new development models have been introduced in this context to provide satisfactory solutions to the increasing needs of the industry. A comparison between different new software development methods will help in the selection of an appropriate development model in a particular scenario.
I. INTRODUCTION
As a solution to the present development difficulties in the software industry, a wide range of new approaches to software development have been introduced. Business activities are changing rapidly these days, and increasingly critical requirements are placed on software systems. This puts traditional software development methods at a disadvantage and prompts the need for different approaches. Most modern development processes can be broadly characterized as agile.
Agile software development refers to a group of software development techniques based on iterative development, where requirements and solutions evolve through collaboration among self-organizing, cross-functional teams. Agile approaches promote disciplined teamwork and adaptation: a leadership philosophy that encourages collaboration, self-organization, and accountability; a set of engineering best practices intended to allow rapid delivery of high-quality software; and a business approach that aligns development with customer needs and organizational goals [7].
In English, Agile signifies the 'ability to move quickly and easily' and to respond rapidly to change; this is a key aspect of agile software development as well. These techniques provide different ways of creating software [7]. In many cases they have turned out to be more successful than traditional ones. In the subsequent sections, some of these methodologies are discussed [11].
The research paper presented here compares the usefulness and applicability of current software development methodologies in the context of current industry needs.
II. CAPABILITY MATURITY MODEL INTEGRATION
CMMs capture the fundamental elements of effective processes. These elements are based on the concepts developed by Crosby, Deming, Juran, and Humphrey.
The first model to be created was the CMMI for Development (at that time simply called "CMMI") [6].
To evolve and enhance the models for organizations, three distinct CMM models were merged into the integrated one, CMMI.
Initially, CMMI was a single model that combined three source models: • the Capability Maturity Model for Software (SW-CMM) v2.0 draft C • the Systems Engineering Capability Model (SECM) • the Integrated Product Development Capability Maturity Model (IPD-CMM) v0.98. These three main models were chosen in light of their successful adoption or promising approach to improving processes within an organization.
A. The Concept of CMMI Continuous Representation and Staged Representation:
Organizations need to choose one of two paths to follow when starting to use CMMI: the continuous representation or the staged representation. The two paths offer slightly different ways to take on CMMI [6]. Choosing the continuous representation gives a great deal of flexibility, allowing different process areas to progress at different rates, and is appropriate when it is clear which processes are problematic. With the continuous representation, the organization progresses through capability levels.
The Model Components:
The model components of the CMMI are grouped into categories that reflect how they should be interpreted: there are required, expected, and informative components. The required components of the CMMI are the specific and generic goals, which represent what an organization must do to satisfy a process area. The expected components comprise the generic and specific practices and serve as a guide to what an organization needs to implement to achieve the specific and generic goals. The informative components provide detailed information on, for example, work products, sub-practices, amplifications, generic practice titles, goal and practice notes, and references.
B. Advantages of CMMI
There are various advantages to implementing CMMI in an IT/software development organization; some of these advantages are listed below:
• A culture of maintaining quality in projects takes root, from the average developer to the senior software engineers and project managers.
• A centralized QMS used across projects guarantees consistency in documentation, which means a shorter learning cycle for new resources and better management of project status and health.
• Software engineering best practices are incorporated into the organization as described in the CMMI model.
• Costs are saved in terms of lesser effort, due to fewer defects and minimal rework [8], which also results in increased productivity.
• On-time deliveries.
• Customers are kept happy by delivering the proper product.
• Decreased costs and improved productivity.
C. Disadvantages of CMMI
• CMMI-DEV may not be suitable for every organization.
• It may involve overhead in terms of documentation.
• Additional resources and knowledge may be required for smaller organizations to start CMMI-based process improvement.
• It may require a great deal of time and effort for implementation [8].
• It requires a significant shift in organizational culture and attitude.
III. AGILE MODEL
The term Agile signifies 'moving rapidly' [4]. The Agile process itself is a software development process carried out by small teams, in short timeframes, and involving the system's users as well as the engineers [4]. This agile process is an iterative approach in which customer satisfaction has the highest priority, as the customer is directly involved in evaluating the product.
The AGDM emphasizes four important values: 1. Individuals and interactions over processes and tools. 2. Working software over comprehensive documentation. 3. Customer collaboration over contract negotiation. 4. Responding to change over following a plan.
A. Principles
What the originators of the agile practices held in common was a set of values they jointly published as the Manifesto for Agile Software Development. The twelve key points defined in the Agile Manifesto are: 1. Satisfy the customer through early and continuous delivery [11]. 2. Deploy the first iteration within a couple of weeks and the entire software within a couple of months. 3. The customer and the agile team must work together daily throughout the project. 4. The agile team and the customer must have direct meetings [18]. 5. Accept changing requirements, even in late phases of system development. 6. Trust and respect must be maintained among agile team members. 7. The velocity of the project must be measured after delivery of every increment. 8. Emphasis should be on good design to increase agility. 9. The best architectures and designs always emerge from self-organization. 10. Adjust and tune according to the circumstances. 11. The whole development process must follow the keep-it-simple (KIS) rule. 12. An agile project needs consistent work until finalization [18]. The most important of these principles is: "The most efficient and effective method of conveying information to and within a development team is face-to-face conversation."
B. Advantages of Agile Model
• Adaptive to evolving conditions [15].
• Agile accelerates the SDLC stages and bypasses process steps that do not add value to product development. • It involves the stakeholders continuously, so that new requirements are gathered faster and there is no scope for guesswork by the teams [13]. • It saves cost, time, and effort by following iterative, incremental work completion and thereby recognizing deviations early. • Very little documentation is needed [8].
• It provides an end product of higher quality and a highly satisfied client.
C. Disadvantages of Agile Model
• Time-consuming and wasteful of resources due to constant changes in requirements. • More supportive for management than for developers.
• Only senior engineers are in a position to take the decisions necessary for this type of development. • Once teams become large, this technique starts to fail, as it does not scale to large teams or teams spread across geographies. • If the projects are big, it is hard to judge the effort and time required for the project in the SDLC.
IV. EXTREME PROGRAMMING (XP)
XP is a lightweight methodology for small to medium-sized teams developing software in the face of vague or rapidly changing requirements. XP is a simple and disciplined approach to software development, with an emphasis on customer satisfaction [1]. It makes an extraordinary effort to radically improve the process of developing software systems by concentrating on what delivers value: the requirements for the system and the code that implements the system [2]. Requirements specification as User Stories, code development by pairs of engineers (Pair Programming), simplification of the code through Refactoring, and careful testing are the distinctive features of the XP method. XP improves a software project in four essential ways: communication, simplicity, feedback, and courage. XP has revived the idea of evolutionary design with practices that allow evolution to become a viable design strategy [4].
A. XP Core Practices
The Planning Game: Business and development work together to produce the maximum business value as rapidly as possible. The planning game occurs at different scales, but the essential rules are the same [1][3]. Small Releases: XP teams practice small releases in two important ways: first, the team releases running, tested software, delivering business value chosen by the Customer, every iteration. The Customer can use this software for any purpose, whether evaluation or even release to end users. Simple Design: XP uses the simplest possible design that gets the job done. The requirements will change tomorrow, so only do what is needed to meet the present requirements. Design in XP is not a one-time thing but an all-the-time thing [21]. Metaphor: XP teams develop a common vision of how the program works, which is called the "metaphor". At its best, the metaphor is a simple, evocative description of how the program works. Continuous Testing: XP teams focus on validation of the software at all times. Developers create software by writing tests first, and then code that satisfies the requirements reflected in the tests. Customers provide acceptance tests that enable them to be confident that the features they need are provided.
Refactoring: XP teams refactor out any duplicate code generated in a coding session. Refactoring is made easier by the extensive use of automated test cases [3]. Pair Programming: All production code is written by two programmers sitting at one machine. This practice ensures that all code is reviewed as it is written, and results in better design, testing, and code. Collective Code Ownership: No single individual "owns" a module. Any engineer is expected to be able to work on any part of the code-base at any time.
Continuous Integration: All changes are integrated into the codebase at least daily. The unit tests have to run 100% both before and after integration. Infrequent integration leads to major problems on a software project. Above all, even though integration is critical to delivering good working code, an occasionally integrating team is not practiced at it, and the task is often assigned to people who are not familiar with the whole system [21].
40-Hour Week: Software engineers go home on time. In crunch mode, up to one week of overtime is permitted. However, multiple consecutive weeks of overtime are treated as a sign that something is wrong with the process or the schedule.
On-site Customer: The programming team has continuous access to the customer who will actually be using the system. For projects with lots of users, a customer representative (i.e. a Product Manager) will be designated for development team access. Coding Standards: Everybody codes to the same standards. The specifics of the standard are not important: what is important is that all of the code looks familiar, in support of collective ownership [1].
B. Advantages of Extreme Programming Methodology
• XP methods emphasize client involvement.
• XP establishes sound plans and designs and gets the engineers personally committed to their schedules, which is certainly a major advantage of the XP model [9]. • XP is consistent with most modern development strategies, so engineers can deliver quality software [14]. • It concentrates on customer involvement.
• Developers are particularly dedicated to the project.
• It is equipped with modern techniques for quality software.
C. Disadvantages of Extreme Programming Methodology
• This methodology is only as effective as the people involved; Agile does not eliminate this issue. • This type of development model requires meetings at frequent intervals, at enormous cost to customers [21]. • It requires too many development changes, which are very hard for the software developer to cope with each time. • With this technique, it tends to be hard to know the exact estimates of work effort needed to provide a quote, because at the start of the project nobody knows the entire scope and requirements of the task. • Effectiveness depends on the people involved [9]. • It requires frequent meetings for development, raising total costs. • It necessitates excessive development changes.
• Exact possibilities and future outcomes are genuinely unknown.
V. SCRUM
Jeff Sutherland created the Scrum process in 1993; he took the term "scrum" from a 1986 study by Takeuchi and Nonaka published in the Harvard Business Review [4]. It is one of the main agile development approaches. The Scrum approach changes the way we handle complex software projects.
Scrum is known as a lightweight process framework for agile software development, and it is the most widely used one. Scrum is one member of the agile family of methods. A "process framework" is a particular set of practices that must be followed in order for a process to be consistent with the framework [4]. "Lightweight" means that the overhead of the process is kept as small as possible, to maximize the amount of productive work done.
An agile Scrum process benefits the organization by helping it to [16]:
• produce a better finished product;
• change and adapt more easily;
• take less time and create better estimates;
• keep better control of the project schedule.
The primary functional unit of Scrum is the working team. The team is not led by a specifically appointed team leader, and nobody decides who will do what: issues are handled by the team as a whole [10]. Matters regarding the status of the project, issues related to the tasks, the work done since the last meeting, and the work to be done before the next meeting (i.e. past, present, and future) are examined each day [16]. The team may comprise 5 to 9 members. These meetings are time-boxed and last no more than 15 minutes. Scrum does not define exactly what form requirements are to take, but simply says that they are gathered in the Product Backlog and referred to generically as "Product Backlog Items," or "PBIs" for short [5].
A. Advantages of Scrum
• The real advantage of the User Story lies in its user-centered definition, because ultimately the user will be using the product in the relevant user scenarios. It connects the end users to the team members [19].
• The syntax of the User Story itself guarantees capture of the goal, benefit, or value that the user wants to achieve [20]. • Since the acceptance criteria form part of the user story itself, this is an added advantage for the Scrum team [10]. • It is possible to make improvements to a user story in the course of project execution. If the scope of the user story becomes large, it should be split into smaller user stories. The conditions in the acceptance criteria can also be changed. As working product increments are delivered to the customers at the end of each sprint, the Scrum team can get feedback from the customers in the sprint review meeting [19]. This enables continuous incorporation of feedback into the product.
B. Disadvantages of Scrum
• For large projects, it sometimes becomes hard to estimate the effort required [20]. • The plan is stated less precisely.
• The project may go in another direction if the client representative does not have a clear idea of the requirements [21]. • Only senior software engineers can take decisions regarding the development process.
VI. FEATURE DRIVEN DEVELOPMENT (FDD)
When rapid delivery of functionality from software is the goal, FDD can be the key. FDD revolves around fast development cycles and provides organizations with feature-rich systems, because those systems are constantly developing. FDD was proposed by Jeff De Luca in 1997 to meet the software development needs of a Singapore bank. His idea was a set of five processes designed to cover the development of an overall model as well as the listing, planning, design, and building of its features [12].
Since its original conception, FDD and its five core activities have continuously been used to develop enterprise software, as it is viewed as both agile and feature-oriented [20]. When carried out well, FDD can offer timely status reports and accurate progress tracking, based on all levels of authority in the project.
A. Five processes of FDD
1. Develop an overall model: The FDD method insists that teams invest a sufficient amount of effort at the start of the project to build an object model describing the problem domain [21]. Modeling in FDD is time-boxed and collaborative. Domain models should be created in detail by small groups and then presented for peers to review. It is expected that a proposed model, or possibly a combination of proposals, will then be used for each area of the domain. These are merged over time to produce an overall model. 2. Build a features list: From the knowledge gathered during the initial modeling, a list of features is prepared by decomposing the domain into subject areas that contain information on business activities. The steps within each business activity form an ordered list of features. Features are expressed as "action, result, object". The expectation is that a feature will not take more than two weeks to complete; if it does, it should be broken into smaller features [20]. 3. Plan by feature. 4. Design by feature. 5. Build by feature. These remaining three processes organize the iterative, feature-by-feature construction of the system.
B. Advantages of FDD Methodology
• It scales to large projects with repeatable success. • Practicing the five processes brings new staff up to speed in a shorter time [19]. • Feature-Driven Development is built around a core of industry-recognized best practices. • Regular builds: regular builds guarantee that there is always an up-to-date system that can be demonstrated to the customer, and help to highlight integration errors in the source code for the features early. • Visibility of progress and results: through frequent, appropriate, and accurate progress reporting at all levels inside and outside the project, based on completed work, managers are helped in steering a project correctly [21]. • Risk reduction through iteration of design and work in small pieces: FDD helps minimize risks by using shorter planning cycles and by understanding the requirements and the system in a clear and specific way, leading to a state where there are no ambiguities, as the requirements and expectations are already understood very well [19]. • Clarity of requirements and a better understanding of the system to be built are gained through the Develop Overall Model process. This process includes a high-level walk-through of the scope of the system and its context. Next, detailed domain walkthroughs are held for each modeling area. • Costing the project by feature leads to greater accuracy.
C. Disadvantages of FDD Methodology
• Not an ideal technique for smaller tasks and, consequently, not good for an individual software developer [19]. • High reliance on the chief engineer means that that individual should be fully prepared to act as coordinator, lead designer, and mentor. • No written documentation is given to customers in this methodology, so they are not able to get proof of their own particular software [21].
VII. CRYSTAL METHODOLOGY
The Crystal methodology is one of the most lightweight, adaptable approaches to software development. Crystal actually comprises a family of agile methodologies, for example Crystal Clear, Crystal Yellow, Crystal Orange, and others, whose distinctive characteristics are driven by several factors such as team size, system criticality, and project priorities [12]. The Crystal family reflects the recognition that each project may require a slightly tailored set of policies, practices, and processes in order to meet the project's unique characteristics.
A. Process Categories of Crystal Methodologies
B. Advantages of Crystal Methodology
• Iterative-incremental process • Continuous integration [12] • Iterative improvement engine governed by planning and reflection • Flexible and configurable process • Methodologies used for a low-criticality project can normally be tuned to fit a higher-criticality project, provided that the project size is not increased significantly • Active client involvement
C. Disadvantages of Crystal Methodology
• Only limited adaptability • Lack of an unambiguous common process • Limited applicability; not suitable for developing highly critical systems • Over-dependence on human communication
VIII. COMPARATIVE ANALYSIS OF CURRENT DEVELOPMENT METHODOLOGIES
The modern methodologies for software development retain some good principles from the traditional methodologies. All the modern software development approaches are either iterative or incremental. In some cases, such as Scrum and XP, they follow both an incremental and an iterative strategy.
These models follow different time scales for their iteration life cycles: XP takes 1-6 weeks, Scrum takes 2-4 weeks, and FDD takes from 2 days to 2 weeks.
Given current software process requirements, products can be large as well as complex. With respect to this measure, the XP methodology can deal with small and simple projects. The Scrum strategy can be used for big and critical problems. The Crystal approach is applicable to projects of any size, as the scope for human communication is greater. The CMMI strategy can handle almost any project size, but the use of extra resources and the extra documentation make it a burdensome process for large problems.
User involvement in the software building process is a very important factor, and a must in the current development scenario; it is addressed in most of the current development strategies. In the XP and Crystal approaches, the customer is actively involved in almost every phase of development, which helps to build a fully functional, quality product. In Scrum, the customer or end user is not directly involved; feedback from the customer is obtained through the product owner. In the FDD approach, customer interaction happens through reports. In the CMMI approach, the customer is involved only at the time of requirement gathering.
Documentation, again, is an essential part of follow-up during maintenance as well as for guiding the next venture. XP, Scrum, and Crystal produce only basic documentation. The CMMI and FDD approaches require a high level of documentation, which can become a time-consuming process resulting in delays in releasing the end product.
With regard to workflow practices, all the current approaches have different styles. The XP approach is mainly realized through simple steps, programming in pairs, and a test-driven approach. In the Scrum approach, the complete work is organized and done through regular meetings. In the FDD approach, object modeling is used; the functionality is accomplished through a feature-driven approach, and UML is used for architecture design. CMMI has continuous and staged representations; there are many versions of CMMI, applicable to various types of software organizations, and CMMI keeps an exact track of each activity with proper documentation. The Crystal approach comes in a variety of options and involves more human communication; as the task size increases, the Crystal team also increases in size.
One point is common to all the modern software development technologies: they all support concurrent functionality development. In almost all of them, the requirements from the clients are acquired in an iterative manner. As the approach is iterative, and sometimes incremental, the cost of rework is low. Flexibility in designing the architecture is easily achievable. There is no rigidity in any of these approaches, which lets programmers concentrate on features rather than the whole process. These are all very flexible processes in terms of directions for development. The modern approaches have better processes to address the issue of bugs; the processes are mature and capable of catching faults in functionality in earlier phases.
The modern approaches believe in continuous testing: in almost every approach, testing is performed after every iteration. These approaches stress interpersonal skills and require basic business knowledge within the working group, so that the particular domain requirements can be implemented with the full expected functionality, exactly as required by the client. These approaches support reusability to a large extent. By using these features of modern software development approaches, an organization can achieve high client satisfaction and high performance.
However, while the modern approaches offer many advantages for project development, there is a downside as well. One problem with the modern approaches is that these models are not suitable for big, critical projects. As the criticality of the venture increases, these models face difficulty in implementation. This issue can be overcome by avoiding some problematic practices and introducing some new practices into these model processes.
IX. CONCLUSION
This research paper discussed and compared six new development models. Agile software processes are gaining prominence and are now favoured over traditional software development strategies, which have several deficiencies, such as the inability to cope with constantly changing client requirements and exceeding the allocated time and budget. However, as per the analysis of the comparison made in the section above, XP can be a good option for satisfying current industry needs. Still, there are shortcomings in this model that need to be addressed. All the above models have proved their usefulness for a specific domain or specific project size. There is a need for a comprehensive model for development. According to the study performed in this research paper, the XP approach could become a standard model if some of its processes are modified with better options. | 6,188 | 2017-09-30T00:00:00.000 | [
"Computer Science"
] |
Potential drug interactions in patients given antiretroviral therapy
ABSTRACT Objective: to investigate potential drug-drug interactions (PDDI) in patients with HIV infection on antiretroviral therapy. Methods: a cross-sectional study was conducted on 161 adults with HIV infection. Clinical, sociodemographic, and antiretroviral treatment data were collected. To analyze the potential drug interactions, we used the Micromedex(r) software. Statistical analysis was performed by binary logistic regression, with a p-value of ≤0.05 considered statistically significant. Results: of the participants, 52.2% were exposed to potential drug-drug interactions. In total, there were 218 potential drug-drug interactions, of which 79.8% occurred between drugs used for antiretroviral therapy. There was an association between the use of five or more medications and potential drug-drug interactions (p < 0.001) and between an antiretroviral therapy period of over six years and potential drug-drug interactions (p < 0.001). The prevalent clinical impacts were sedation and cardiotoxicity. Conclusions: the moderate- and higher-severity PDDI identified in this study are events that not only affect the therapeutic response, leading to toxicity in the central nervous and cardiovascular systems, but can also interfere with tests used for detection of HIV resistance to antiretroviral drugs.
Introduction
HIV infection affects 36.9 million people worldwide, representing about 0.6% of the world's population.
There are an estimated 1.6 million deaths yearly due to acquired immunodeficiency syndrome (AIDS) (1). This disease has a multidimensional negative impact on the lives of people. However, a great transformation in the epidemiological profile occurred with the emergence of highly active antiretroviral therapy (HAART) (2).
The use of HAART, which can reduce the viral load to undetectable levels and raise the CD4+ T lymphocyte count, resulted in reduced mortality and increased survival of infected individuals (2).
However, the success of HAART is associated with the maintenance of a high rate of patient compliance and with the prevention and management of drug-drug interactions (DDI) (3).
DDI is defined as a clinical or pharmacological effect that results from the co-administration of medications, which alters a patient's response to treatment. A DDI occurs when the action of one drug (object, substrate) is altered by the presence of another drug (precipitant, interacting drug) (4). DDI represent one of the most frequent adverse drug events resulting in hospitalization, increased cardiovascular risk, and abandonment of treatment. They induce adverse events or reduce therapeutic efficacy, particularly in individuals subjected to polypharmacy (5).
Polypharmacy, combined with factors such as age, alcohol consumption, illicit drug use, and the potentially interactive features of some antiretroviral drugs, such as protease inhibitors and non-nucleoside reverse transcriptase inhibitors, increases the complexity of therapeutic management and the risks of DDI (4-7). Antiretroviral therapy (ART) agents represent one of the main therapeutic groups with the greatest potential for DDI. Both protease inhibitors and nucleoside analogues are substrates and modulators of the cytochrome P450 enzyme system (6-9). International consensus and national guidelines for the management of patients undergoing HAART must be followed to avoid harms arising from DDI. Despite this, studies conducted in different countries indicate that the prevalence of DDI among users of ART in the outpatient context varies from 21.5% to 67.1%, depending on the age of the individuals, the therapeutic classes involved, and the database used to analyze DDI (8,9). Participants exposed to DDI showed reduced treatment adherence (9).
Post-marketing surveillance of HAART use, focused on the identification of potential drug-drug interactions (PDDI), can contribute to a better understanding and management of clinically relevant PDDI, particularly in Brazil, where over 405,000 individuals are on treatment (10). The term PDDI refers to the possibility of a particular medication altering the intensity of the pharmacological effects of another medication, thereby increasing or decreasing the therapeutic effect and/or adverse reactions, or producing responses other than those originally expected from the medications (10).
In this context, it is fundamental that health professionals have knowledge regarding PDDI in people subjected to antiretroviral treatment, as prescribing must consider the characteristics of the drugs and especially the possibilities of these interactions. The scientific literature shows that few studies in the area are carried out by nurses, even though nurses occupy a strategic position in the medication routine; such knowledge would enable nurses to examine their daily work and intervene in the medication routine to prevent the occurrence of adverse reactions due to drug interactions. The objective of this study was to determine the prevalence of potential drug-drug interactions in patients with HIV infection undergoing HAART, and to identify the major PDDI in this group and associated factors.
Results
The subjects included 161 participants undergoing HAART, of whom 52.2% (n = 84) were exposed to PDDI.
The average viral load was 5658.89 ± 30020.70 copies/ml, and the average CD4+ T-cell count was 476.17 ± 269.69 cells/µl. About 44% of the population had had some opportunistic disease in the last year; however, these were not present at the time of data collection.
The groups exposed and unexposed to PDDI showed no differences pertaining to gender, age, alcohol consumption, drug use, adherence to therapy, or adverse reaction reporting.
Polypharmacy (p < 0.001) and time of treatment (p < 0.001) showed a significant association with the presence of PDDI.
The average age in the groups exposed and unexposed to PDDI was 44.1 ± 10.5 years and 41.0 ± 10.3 years, respectively (range: 22-67 years). Among individuals exposed to PDDI, 7.1% were elderly, whereas among individuals unexposed to PDDI, 5.2% were elderly. There was a statistically significant difference (p = 0.001) in the mean number of drugs used: the mean was 5.08 ± 0.92 in the PDDI group and 4.01 ± 0.14 in the non-PDDI group. The mean number of ART agents also differed significantly between the groups. To evaluate adherence to HAART, we used the "questionnaire for the evaluation of adherence to ART in people with HIV/AIDS (CEAT-VIH)", translated and validated in Portuguese (11). We used the CAGE questionnaire to evaluate alcohol consumption (12).
In the group with PDDI, the average number of interactions between ART agents per patient was 2.07 ± 0.75, and among all therapeutic classes it was 2.63 ± 1.42. Among patients with PDDI, 60.7% showed two PDDI and 16.7% showed three PDDI. About two out of ten individuals were exposed to four to nine PDDI.
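As a schematic of the binary logistic regression used for such associations, the sketch below fits a PDDI outcome against two binary predictors. All data here are simulated and the variable names are illustrative, not the study's actual dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 161  # sample size matching the study

# Simulated predictors: polypharmacy (>= 5 drugs) and > 6 years of therapy
polypharmacy = rng.integers(0, 2, n)
long_therapy = rng.integers(0, 2, n)

# Simulated outcome: exposure to at least one PDDI
logit_p = -1.0 + 1.5 * polypharmacy + 1.2 * long_therapy
pddi = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([polypharmacy, long_therapy]))
result = sm.Logit(pddi, X).fit(disp=0)

print(result.summary(xname=["const", "polypharmacy", "therapy_gt_6y"]))
print("Odds ratios:", np.exp(result.params[1:]))  # effect size per predictor
```

The per-coefficient p-values correspond to the kind of association tests reported above, and the exponentiated coefficients give the odds ratios usually reported alongside them.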
Discussion
Despite the evidence from international and Brazilian guidelines for HAART regarding interactions and the potential adverse events associated with them (8,9,15), the prevalence in this study (52.2%) was higher in comparison to other research conducted with adults in outpatient treatment in India (21.5%) (8), the United Kingdom (27%) (9), and England (35%) (9). A possible explanation for this difference is the fact that this study looked at PDDI between medicines of groups with high interaction potential (antiretrovirals, central nervous system agents, and ethanol). In addition, the overall average number of drugs consumed and the age of the participants in this investigation were higher compared to other studies (16).
The association between polypharmacy and PDDI was confirmed, a finding in line with other investigations that analyzed ART (9). Polypharmacy is a risk factor in patients undergoing HAART and relates particularly to those individuals whose treatment regimen includes at least one medication not belonging to the ART group, which can be aggravated by age (17). Elderly patients have a 51% probability of DDI and younger patients a 35% probability when using 6 and 7 medications, respectively (17).
Each medicine added to the therapy increases the risk of adverse events, including DDI, by 10% (17).
Despite the risk of DDI associated with polypharmacy, this strategy is critical in HIV-infected patients. The first line of initial treatment typically includes three ART agents: two nucleoside analogues and a non-nucleoside analogue. The first-line treatment consists of tenofovir, lamivudine, and efavirenz, and the second-line treatment consists of a protease inhibitor and nucleoside analogues (15).
In addition, in cases where opportunistic infections or co-morbidities are present, polypharmacy is mandatory (15).
Accordingly, longer exposure to HAART causes an increased frequency of adverse reactions to medication (18).
Evidence suggests that the incidence of adverse reactions to medication is about 50% in adults who receive HAART in the outpatient setting (19). The presence of adverse reactions to medication is 1.6 times higher in people with a CD4+ T-cell count below 200 cells/µl (20). This may be associated more with the greater number of drugs used by these individuals than with the lower cell count itself. Therefore, therapeutic resources are essential to combat HIV infection and opportunistic infections.
In this study, the values of CD4+ T-cell counts and viral load showed no association with the presence of PDDI. However, other authors observed an association of the event with counts less than 200 cells/µl (9,20) and stated that DDI may affect patients' health. Differences in drug metabolism can alter therapeutic responses, including DDI (21). In evaluating a specific DDI, it is important to note the relative inhibitory potential of the drug for the particular enzyme (22).
Co-administration of ART and other medications can result in important changes in serum levels, many of which are related to preventable adverse events. There is evidence that HAART, alone or combined with other medications, including drugs acting on the CNS, alters CYP450 metabolism, which was observed in the present study, particularly in the PDDI of moderate and higher severity (6,7).
Regardless of the treatment time, 68.3% of PDDI were classified as moderate. Among these, 12.8% involved ART and agents that act on the CNS (benzodiazepines, antidepressants, and neuroleptics), with a clinical impact of excessive sedation and confusion (6,7). Since the users of these drugs live in the community, including developing industrial communities, this reaction can interfere with quality of life and lead to negative outcomes. In these cases, a modification of therapy should be considered. For example, in the PDDI of diazepam and ritonavir, the use of a benzodiazepine such as lorazepam could prevent an increase in sedative effects (23).
Although no PDDI between ART, other medicines, and alcohol was reported, it is known that associating a protease inhibitor with ritonavir provides higher, stable, and long-lasting serum levels of the protease inhibitor, increasing its power of viral inhibition and reducing the occurrence of resistance mutations (25).
Conclusions
Risk factors were found for the occurrence of PDDI: the use of five or more medications and a duration of antiretroviral therapy of more than six years. | 2,457 | 2016-11-21T00:00:00.000 | [
"Medicine",
"Biology",
"Psychology"
] |
The Interplay between Components of the Mitochondrial Protein Translocation Motor Studied Using Purified Components*
The final step of protein translocation across the mitochondrial inner membrane is mediated by a translocation motor composed of 1) the matrix-localized, ATP-hydrolyzing, 70-kDa heat shock protein mHsp70; 2) its anchor to the import channel, Tim44; 3) the nucleotide exchange factor Mge1; and 4) a J-domain-containing complex of co-chaperones, Tim14/Pam18-Tim16/Pam16. Despite its essential role in the biogenesis of mitochondria, the mechanism by which the translocation motor functions is still largely unknown. The goal of this work was to carry out a structure-function analysis of the mitochondrial translocation motor utilizing purified components, with an emphasis on the formation of the Tim44-mHsp70 complex. To this end, we purified Tim44 and monitored its interaction with other components of the motor using cross-linking with bifunctional reagents. The effects of nucleotides, the J-domain-containing components, and the P5 peptide (CALLSAPRR, representing part of the mitochondrial targeting signal of aspartate aminotransferase) on the formation of the translocation motor were examined. Our results show that only the peptide and nucleotides, but not J-domain-containing proteins, affect the Tim44-mHsp70 interaction. Additionally, binding of Tim44 to mHsp70 prevents the formation of a complex between the latter and Tim14/Pam18-Tim16/Pam16. Thus, mutually exclusive interactions between various components of the motor with mHsp70 regulate its functional cycle. The results are discussed in light of known models for the function of the mitochondrial translocation motor.
Only a very small fraction of the estimated ~800-1000 mitochondrial proteins are made in situ (eight in yeast). The rest are encoded by nuclear genes, synthesized in the cytosol, and then delivered to one of the four mitochondrial compartments: the outer membrane, the inner membrane, the intermembrane space, and the matrix. Each compartment contains proteins essential for the viability of every eukaryotic cell. Thus, functional import systems for nuclear-encoded proteins are indispensable for the biogenesis of mitochondria (1).
The import of nuclear-encoded proteins into the mitochondria is a multistep process mediated by the coordinated action of translocation machineries localized in the outer and inner mitochondrial membranes (2,3). The TOM (translocase of the outer mitochondrial membrane) complex is a multimeric oligomer that constitutes the main portal of protein entry into mitochondria. As such, it serves for the recognition, insertion, and delivery of all nuclear-encoded mitochondrial precursor proteins (2-5).
Proteins that contain N-terminal targeting signals and that are destined for full translocation across the inner membrane are transferred from the TOM complex to the TIM23 complex (translocase of the inner mitochondrial membrane). The latter complex is composed of two integral inner membrane proteins, Tim17 and Tim23. Although the function of Tim17 in mediating protein import is not yet well understood, Tim23 forms the translocation channel in the inner membrane during the import process. A third protein, Tim50 (6-8), probably serves as a receptor in the mitochondrial intermembrane space for proteins to be handled by the TIM23 complex. Recently, it was shown that Tim50 also maintains the permeability barrier of the mitochondrial inner membrane via a direct interaction with Tim23 (9). A precursor protein that is found in transit in the TIM23 channel requires additional help to be imported completely into the mitochondrial matrix. This final step of translocation across the inner mitochondrial membrane is catalyzed by the function of a translocation motor at whose core stands the matrix-localized, ATP-hydrolyzing, 70-kDa heat shock protein mHsp70; its J-domain-containing co-chaperone complex, Tim14/Pam18-Tim16/Pam16; the nucleotide exchange factor Mge1; and the component that anchors mHsp70 to the TIM23 channel, Tim44 (4, 10, 11). Tim44 is a peripheral membrane protein that is found in close proximity to the precursor protein during its passage in the import channel. Tim44 associates transiently with the TIM23 complex to perform its function, and this association is essential for normal import of matrix-localized proteins by mitochondria (12)(13)(14). It was shown in vitro that Tim44 is also able to interact with negatively charged phospholipids, particularly cardiolipin (15). Upon association with the import channel, Tim44 recruits mHsp70 to the channel in a nucleotide-dependent manner. Regulation of this interaction involves the nucleotide exchange factor Mge1 (16,17) and the J-domain-containing chaperone complex Tim14/Pam18-Tim16/Pam16 (18-20).
Two models, the active pulling and the trapping (Brownian ratchet), have been suggested initially to explain how proteins are translocated across the inner membrane and, in particular, the ability of mitochondria to import stably folded proteins. In the pulling model, mHsp70 undergoes a conformational change generating an active pulling force on the polypeptide chain. This pulling force, controlled by ATP binding, drives the unfolding of precursor proteins and their concomitant translocation across the inner membrane. According to this model, the pulling force will be effective only if mHsp70 forms a ternary complex with the imported precursor protein and Tim44. The latter would provide a platform for a lever-like movement of mHsp70 (21).
The trapping model proposes a movement of the polypeptide chain in the translocation channel due to Brownian molecular motion, which is then trapped by interaction of mHsp70 with the matrix-exposed part of the polypeptide. Trapping by mHsp70 leads to vectorial transport, as the polypeptide chain can no longer move backward. According to the ratchet model, the unfolding of the precursor protein is achieved by trapping the conformational changes of the polypeptide chain that occur with natural breathing of the protein (4).
Recently, a third model, the "entropic pulling," was proposed. This model suggests that the bulky mHsp70 bound to the translocating chain reduces the latter's conformational freedom, thereby accelerating protein import by means of entropic pulling (22). The entropic pulling model suggests the presence of active pulling, of entropic origin, but in the absence of a molecular fulcrum. Thus, both the Brownian ratchet and entropic pulling models have similar molecular requirements for functioning during protein import into the matrix.
In this study, we used cross-linking with the bifunctional reagent disuccinimidyl suberate (DSS) to study the formation of the translocation motor utilizing purified components. Mechanistic implications of the results are discussed in light of these known models for function of the translocation motor.
Construction of N-terminally Octahistidine-tagged Tim44-A yeast Tim44 open reading frame lacking 43 N-terminal amino acids (corresponding to the mitochondrial targeting sequence) was amplified using forward primer 5′-TAA GGA TCC CAA GGT GGA AAC CCT CGA and backward primer 5′-TAA GCG GCC GCT CAG GTG AAT TGT CTA GA. The PCR product was subcloned into pGEM-T-Easy (Promega Corp.) and sequenced to confirm the fidelity of the Taq polymerase. The fragment was digested with BamHI-NotI and ligated with a double-digested (BamHI-NotI) modified pET-21d(+) vector (Novagen). The resulting recombinant plasmid encodes a Tim44 protein in which the mitochondrial targeting sequence is replaced by an initiation codon followed by an octahistidine tag and a tobacco etch virus (TEV) protease recognition site. The His-tagged Tim44 was overexpressed in Escherichia coli strain BL21.
Purification of Octahistidine-tagged Tim44-The bacterial transformants were grown in 1 liter of LB medium at 37°C to A600 = 0.5-0.6, and overexpression of Tim44 was induced with 1 mM isopropyl β-D-thiogalactopyranoside for 3 h. The cells were then harvested, suspended in 100 ml of buffer A (50 mM Tris-HCl (pH 7.5), 0.1% Triton X-100, 0.1 mg/ml lysozyme, 2 mM MgCl2, 2 mM phenylmethylsulfonyl fluoride, 5% (v/v) glycerol, 1500 units of DNase, and protease inhibitor mixture (catalog no. 11873580001, Roche Applied Science)), disrupted using a Microfluidizer (Tetra Sense), and centrifuged at 20,000 × g to clear the solution. The supernatant was loaded at 1 ml/min onto a nickel-nitrilotriacetic acid-agarose column (Bio-Rad) that had been pre-equilibrated with buffer B (50 mM Tris-HCl (pH 7.4), 0.4 M NaCl, 10 mM imidazole, and 10% glycerol). The column was washed with 20 ml of buffer B and developed with a linear imidazole gradient (10-500 mM) in buffer B. Tim44 eluted at ~200 mM imidazole. Fractions enriched in Tim44 were pooled, and protein concentration was determined. TEV protease was added at 1:50 (w/w) to the Tim44 eluate and incubated overnight at 4°C. To purify Tim44 further, the nickel-nitrilotriacetic acid-agarose eluate was concentrated to ~1 ml in Centricon tubes (Vivascience) and further purified using a Superdex 200 gel filtration column (Amersham Biosciences) in buffer C (300 mM NaCl and 20 mM Tris-HCl (pH 7.4)) at a flow rate of 1 ml/min. Tim44 eluted at ~100 ml of buffer C and was >95% pure as judged by SDS-PAGE. The Tim44 buffer was exchanged (PD-10, GE Healthcare) into 20 mM Na+-HEPES (pH 7.4) and 100 mM NaCl, concentrated to ~10-20 mg/ml in Centricon tubes, frozen in liquid nitrogen, and stored at −80°C.
Construction and Purification of the C-terminal Domain of Tim44-A yeast Tim44 open reading frame lacking 210 N-terminal amino acids was amplified using forward primer 5′-GGA TCC ACA AAT ATC GAG TCT AAA GAA and backward primer 5′-TAA GCG GCC GCT CAG GTG AAT TGT CTA GA. The PCR product was cloned in a modified pET-21d(+) vector. The resulting plasmid overexpresses Tim44 carrying an octahistidine tag at its N terminus followed by the TEV protease recognition site. The purification procedure was carried out as described above for full-length Tim44.
Purification of the Tim14/Pam18-Tim16/Pam16 Complex-A plasmid co-overexpressing a soluble domain of Tim16/Pam16, named Tim16s/Pam16s, containing an octahistidine tag at the N terminus, and the soluble domain of Tim14/Pam18, named Tim14j/Pam18j, was constructed (23). The histidine tag is removable by digestion with TEV protease. The full purification procedure is described in Ref. 24.
Mutagenesis-Site-directed mutations were created using the QuikChange mutagenesis kit (catalog no. 0720099, Stratagene). PCR amplification of mutant Tim44 was performed with forward primer 5′-GAA TGG GAG AAG TCT CAG GCA CTG CAG GAG AAC and backward primer 5′-GTT CTC CTG CAG TGC CTG GAG ACT TCT CCC ATT C. Recombinant N-terminally hexahistidine-tagged Tim44 cloned in the pGEM-T-Easy vector was used as a template. Following sequence analysis to confirm the mutation, the PCR product was digested with BamHI-NotI and cloned into the modified pET-21d(+) vector.
Cross-linking of Complexes-Cross-linking was carried out with 1 mM DSS at room temperature for 30 min in 20 mM Na+-HEPES (pH 7.5), 10 mM MgCl2, 100 mM KCl (50 mM KCl was added when Tim14/Pam18-Tim16/Pam16 was present), and 200 mM NaCl. The cross-linking reaction was stopped by the addition of SDS-containing sample buffer. The cross-linked products were analyzed by SDS-PAGE using an acrylamide gradient of 4-16%.
Miscellaneous-SDS electrophoresis was carried out using the Laemmli buffer system (25). The protein concentrations indicated in this study were determined using the bicinchoninic acid protein assay (catalog no. B9643, Sigma) with bovine serum albumin as a standard and refer to monomer concentrations. mHsp70 (26) and Mge1 (27) were purified as described.
RESULTS
Purification of Recombinant Yeast Tim44-In a previous study, it was shown that when overexpressed in bacteria, yeast Tim44 containing a C-terminal hexahistidine tag is found in the soluble fraction of bacterial cell lysate. However, Tim44 was readily degraded during our attempts to purify it. To inhibit the degrading protease, the initial purification steps were carried out in the presence of urea, which was removed during the final step of purification (15). Here, we report a new purification protocol for recombinant yeast Tim44 overexpressed in bacteria and carrying an N-terminal octahistidine tag. When a commercial protease inhibitor mixture was added to the lysates, a significant amount of Tim44 was found intact after the nickel-agarose purification step. After removing the octahistidine tag by TEV protease cleavage and subsequent gel filtration, Tim44 was >95% pure (supplemental Fig. S1).
Effect of Nucleotides on the Formation of the Tim44-mHsp70 Complex-We developed a method that would be suitable to detect the formation of a Tim44-mHsp70 complex and to monitor the effect of various cofactors (i.e. nucleotides and co-chaperones) on complex formation in vitro. The method, a version of a gel-shift assay, utilizes cross-linking with bifunctional reagents and is carried out as follows. [35S]Met-radiolabeled Tim44 was purified and incubated with increasing concentrations of mHsp70 to allow complex formation. Next, the Tim44-mHsp70 complex was stabilized by cross-linking with the bifunctional cross-linker DSS. Finally, the cross-linking products were separated by gradient SDS-PAGE and visualized by autoradiography. A shift in the mobility of the radiolabeled Tim44 in the presence of an added component compared with Tim44 alone indicates complex formation. Tim44 alone was detected essentially as a single band (Fig. 1A, lane 1) corresponding in its mobility to the monomeric form (data not shown). This result confirms our previous work showing that Tim44 is monomeric in solution (15). In the presence of mHsp70, a new major band was observed, at the expense of the monomeric Tim44, representing a complex of Tim44 bound to mHsp70 (Fig. 1A, lanes 2-6). Maximal formation of the Tim44-mHsp70 complex was reached at 3 μM mHsp70, which was close to the Tim44 concentration (2 μM) present in the reaction mixture. A second minor form of the Tim44-mHsp70 complex was also observed (Fig. 1, asterisks). The latter form probably represents higher oligomers of the Tim44-mHsp70 complex. It was suggested previously that the functional form of the import channel is dimeric (28). Thus, we cannot exclude the possibility that the Tim44-mHsp70 complex itself has a tendency to form dimers. Notably, much less Tim44-mHsp70 complex was observed in the presence of ATP (Fig. 1B). Similar results (i.e. weaker binding in the presence of ATP) were also observed in pulldown experiments using Tim44 carrying a histidine tag (supplemental Fig. S2).
In the presence of Mge1, we observed a new cross-linked form representing a complex composed of Tim44-mHsp70-Mge1 (Fig. 1C). Similar to the complex formed in the absence of Mge1, much less Tim44-mHsp70-Mge1 complex was observed in the presence of ATP (Fig. 1D). We conclude that ATP destabilizes the interaction between mHsp70 and Tim44.
In the cell, mHsp70 is expected to be in complex with either ADP or ATP. Therefore, the formation of the Tim44-mHsp70 complex was examined in the presence of ADP as well. As shown in Fig. 2, the Tim44-mHsp70 complex obtained in the presence of ADP was the strongest. Although the complex with ADP was consistently more stable than that lacking nucleotide, we were not able to demonstrate that this phenomenon is statistically significant (supplemental Fig. S3). Significant binding was also observed in the presence of AMP-PNP. Thus, the Tim44-mHsp70 interaction is modulated by nucleotides in a manner similar to what has been observed with solubilized mitochondria (12)(13)(14). Similar results were obtained in the presence of Mge1 as well (data not shown).
Several observations exclude the possibility that Tim44 was associated with mHsp70 as an unfolded substrate. First, when the purified Tim44 was examined by CD spectroscopy, its spectrum was consistent with that of a folded protein, with a Tm of ~51°C (supplemental Fig. S4). Second, Tim44 did not affect the ATPase activity of mHsp70 under any conditions examined (data not shown), as would be expected from an unfolded substrate (27,29). Third, when similar experiments were carried out in the presence of DnaK, only a minute amount of Tim44 was detected bound to DnaK under all conditions examined (Fig. 2). This result is consistent with previous observations showing that DnaK does not complement a deletion of yeast mHsp70 (30). Lastly, a Tim44 mutant (E67A) was found to be impaired in its interaction with mHsp70 (see below).
Mapping the mHsp70-binding Site on Tim44-We have shown previously that yeast Tim44 contains a tightly folded domain that is located at the C terminus of the molecule (~25 kDa) (15). Recently, the crystal structure of the Tim44 C-terminal domain was solved (31) and confirmed our previous predictions. We wanted to determine whether the tightly folded C-terminal domain of Tim44 is able to interact with mHsp70. To this end, the radiolabeled C-terminal domain of Tim44 was overexpressed in bacteria and purified. Next, its ability to bind mHsp70 was examined using cross-linking with DSS. The results presented in Fig. 3 show that the C-terminal domain of yeast Tim44 was less able to bind mHsp70 in comparison with full-length Tim44, indicating that the N terminus of Tim44 may play an important role in the interaction with mHsp70.
Using site-directed mutagenesis, we found that mutating amino acid 67 of Tim44 from Glu to Ala was lethal for yeast cells (data not shown). Because the E67A mutation is located at the N terminus of Tim44, we examined whether the lethal phenotype is due to an impaired interaction of Tim44 with mHsp70. For this purpose, the radiolabeled Tim44 mutant was purified, and its interaction with mHsp70 was examined using cross-linking. As shown in Fig. 4A, in the absence of Mge1, E67A mutant Tim44 bound mHsp70 slightly less than wild-type Tim44 in the presence of both ADP and ATP. Notably, in the presence of Mge1, the binding of E67A mutant Tim44 was strongly impaired under all conditions tested (Fig. 4B). Thus, Glu67, located at the N terminus of Tim44, may play an important role in the binding of Tim44 to mHsp70.
Effect of the P5 Peptide on the Formation of the Tim44-mHsp70 Complex-The translocation motor binds precursor proteins during its functional cycle. The P5 peptide, derived from the mitochondrial targeting signal sequence of aspartate aminotransferase, is known to bind Hsp70 chaperones, including mHsp70 (27,32,33). Therefore, the effect of the P5 peptide on the formation of the Tim44-mHsp70 complex was examined. Notably, under all conditions examined, the P5 peptide triggered dissociation of the Tim44-mHsp70 complex (Fig. 5). A control peptide (LEEDLRGYM-SWI) did not trigger dissociation of the Tim44-mHsp70 complex (data not shown). Thus, the binding of Tim44 and the P5 peptide to mHsp70 is mutually exclusive: mHsp70 cannot bind Tim44 when a peptide is bound to it. From a mechanistic point of view, the results presented here suggest that precursor binding by mHsp70 causes instant dissociation from its complex with Tim44, which consequently cannot serve as a fulcrum for the function of mHsp70.
Effect of Tim14/Pam18-Tim16/Pam16 Co-chaperones on the Formation of the Tim44-mHsp70 Complex-A membrane-associated complex of co-chaperones, Tim14/Pam18-Tim16/Pam16, was shown to be a vital component of the translocation motor. It has been suggested that the role of Tim14/Pam18, similar to other J-domain-containing chaperones, is to enhance the ATPase activity of mHsp70 and to promote a conformation that is strongly associated with peptide (18-20, 34, 35). A subsequent study showed that Tim16/Pam16 acts to antagonize the enhancing effect of Tim14/Pam18 by reducing it by half (36). Nevertheless, another work has suggested that enhancement of the ATPase activity by Tim14/Pam18 is not essential for the in vivo function of the co-chaperone complex (23). Previous studies carried out using solubilized mitochondria demonstrated that both Tim14/Pam18 and Tim16/Pam16 function in vivo as one stable complex (18,19,34). Therefore, in this study, we focused primarily on the effect of the Tim14/Pam18-Tim16/Pam16 complex on the Tim44-mHsp70 interaction. The concentrations of Tim44 and the Tim14j/Pam18j-Tim16s/Pam16s complex (the soluble domains of the complex) were kept equal (1.2 μM each), and the formation of the Tim44-mHsp70 complex was examined. The results presented in Fig. 6A show that the presence of the Tim14j/Pam18j-Tim16s/Pam16s complex had very little effect on the formation of the Tim44-mHsp70 complex (10% less bound at 2 μM mHsp70).
A very significant result is the fact that Tim14j/Pam18j-Tim16s/Pam16s did not form a complex with Tim44 (Fig. 6A) or with the Tim44-mHsp70 complex (Fig. 6A, lanes 2 and 3). One explanation for this result would be that mHsp70 can form a complex either with the Tim14/Pam18-Tim16/Pam16 complex or with Tim44 but has a much higher affinity for Tim44. To examine this possibility, complexes were formed, stabilized by cross-linking with DSS, separated by SDS-PAGE, and detected by staining with Coomassie Blue. Indeed, when incubated alone, mHsp70 and Tim14j/Pam18j-Tim16s/Pam16s associated to form a complex (Fig. 6B). Thus, in vitro mHsp70 was detected as binding either to Tim44 or to the Tim14/Pam18-Tim16/Pam16 complex, but not to both simultaneously (supplemental Fig. S2).
DISCUSSION
The aim of this work was a structure-function analysis of the mitochondrial translocation motor with an emphasis on interactions between various components of the motor. For the purpose of the study, we developed a novel method using cross-linking with DSS to monitor the interaction between the various purified partner proteins. The advantage of using cross-linking to investigate protein-protein interactions is that the complexes are stabilized with no disruption of the equilibrium in the system, which enables us to determine the steady-state levels of various complexes.
Properties of the Tim44-mHsp70 Complex-Cross-linking with DSS showed that Tim44 interacts with mHsp70, forming a hetero-oligomer. A dimer of the Mge1 nucleotide exchange factor associates with the complex, leading to the formation of a heterotrimeric complex.
We demonstrated previously that when examined by the protease resistance assay, the C terminus of Tim44 (~25 kDa) forms a tightly folded domain (15). We have shown here that mHsp70 interacts very weakly with the purified C-terminal domain of Tim44. However, because we have not been able to purify the N-terminal domain, we did not demonstrate its direct binding to mHsp70. Thus, we cannot exclude another interesting possibility, viz. that the N-terminal domain affected the folding of the C-terminal domain slightly and altered the binding of the latter to mHsp70. Additional work is needed to demonstrate which possibility is correct. Finally, we found that a single point mutation (E67A) of Tim44 that is lethal for yeast leads to a significant reduction in the formation of the Tim44-mHsp70 complex, in particular in the presence of Mge1. Why is the effect of the E67A mutation more pronounced in the presence of Mge1? It is possible that Mge1 induces a conformation of mHsp70 that is less tightly bound to Tim44. This view is supported by the observation that at low mHsp70 concentrations, less Tim44 was associated with mHsp70 in the presence of Mge1 than in its absence (Fig. 1, C versus A) (29,37). Overall, the results suggest that Glu67, located at the N terminus of Tim44, is probably involved in the formation of a complex with mHsp70.
Nucleotides Modulate the Tim44-mHsp70 Interaction-The function of Hsp70 chaperones is modulated by nucleotides. Nucleotide-dependent formation of the Tim44-mHsp70 complex has also been demonstrated using solubilized mitochondria (12)(13)(14). Similarly, we found in this study that in vitro nucleotides differentially affect the formation of the Tim44-mHsp70 complex. The strongest formation of the Tim44-mHsp70 complex was achieved in the presence of ADP. In contrast, in the presence of ATP, very little Tim44-mHsp70 complex was observed. Thus, mHsp70 alternates between at least two forms in a nucleotide-dependent manner. One form, in the presence of ADP, has a high affinity for Tim44, whereas the second form, in the presence of ATP, has a weak affinity for Tim44. Similar modulation by nucleotides has been observed by several groups in solubilized mitochondria (4,38). However, another study carried out in vitro demonstrated no modulation of the Tim44-mHsp70 complex by nucleotides (37). We conjecture that the contradictory results are due to the different methods used in the respective studies to detect complex formation.
Mutually Exclusive Interactions of the Peptide and Tim44 with mHsp70-An essential difference between models describing the function of the translocation motor is the need for a fulcrum to enable active unfolding of precursor proteins by mHsp70. Such a fulcrum would be provided for mHsp70 by its membrane anchor, Tim44. Notably, under all conditions examined here, we found that binding of the P5 peptide to mHsp70 triggers the dissociation of the Tim44-mHsp70 complex. Dissociation of the Tim44-mHsp70 complex was not affected by the type of nucleotide added or the presence of the Tim14/Pam18-Tim16/Pam16 complex. However, active pulling by the mitochondrial translocation motor requires that mHsp70 be anchored to Tim44 while being simultaneously bound to precursor proteins. Thus, the results that we obtained in this study are difficult to reconcile with active pulling, in its classical version (21), and are compatible with a function via Brownian ratchet (4) or active pulling as suggested by the entropic pulling mechanism (22).
Role of Tim14/Pam18-Tim16/Pam16 Co-chaperones in the Functional Cycle of the Motor-The results of this study show that, despite the fact that the Tim14j/Pam18j-Tim16s/Pam16s complex is able to form a complex with mHsp70, the interaction between mHsp70 and Tim44 is much stronger. In other words, when the Tim14j/Pam18j-Tim16s/Pam16s complex and Tim44 are both present, mHsp70 associates only with Tim44. In light of these results, one could suggest possible roles for the Tim14/Pam18-Tim16/Pam16 complex in the functional cycle of the motor, as follows. (i) One role would be to enhance the ATPase activity of mHsp70 and to endorse tight locking of substrate in the binding site of mHsp70. Such an effect has not been demonstrated yet because the Tim14/Pam18-Tim16/Pam16 complex itself does not affect the ATPase activity of mHsp70. However, we cannot exclude the possibility that enhancement of the ATPase activity of mHsp70 by the Tim14/Pam18-Tim16/Pam16 complex requires the context of the translocation channel. (ii) The observation that mHsp70 favors association with Tim44 rather than with the Tim14/Pam18-Tim16/Pam16 complex suggests that the latter may serve to recruit mHsp70 to the translocation motor prior to its transfer to Tim44. This would ensure the presence of several mHsp70 molecules that are in close proximity to the import channel, thereby increasing the local concentration of mHsp70.
"Biology"
] |
Dark matter as a ghost free conformal extension of Einstein theory
We discuss ghost free models of the recently suggested mimetic dark matter theory. This theory is shown to be a conformal extension of Einstein general relativity. Dark matter originates from gauging out its local Weyl invariance as an extra degree of freedom which describes a potential flow of the pressureless perfect fluid. For a positive energy density of this fluid the theory is free of ghost instabilities, which gives strong preference to stable configurations with a positive scalar curvature and trace of the matter stress tensor. Instabilities caused by caustics of the geodesic flow, inherent in this model, serve as a motivation for an alternative conformal extension of Einstein theory, based on the generalized Proca vector field. A potential part of this field modifies the inflationary stage in cosmology, whereas its rotational part at the post inflationary epoch might simulate rotating flows of dark matter.
where the covariant derivative $\nabla^{\rm phys}_\mu$, the Einstein tensor $G^{\rm phys}_{\mu\nu}$, the Ricci scalar $R^{\rm phys}$ and the matter stress tensor are determined with respect to the physical metric, as well as their traces, $T^{\rm phys} = g^{\rm phys}_{\mu\nu} T^{\mu\nu}_{\rm phys}$, etc. The vector $u_\mu$ is a four-velocity generated by the velocity potential $\phi$ (we consider the case of a timelike $u_\mu$ and work in the $(-+++)$ metric signature), $u_\mu = \partial_\mu\phi$, $g^{\mu\nu}_{\rm phys}\, u_\mu u_\nu = -1$.
Note that this normalization to unity in the physical metric is a kinematical relation -the corollary of (2) independent of dynamics.
Eqs.(3)-(5) differ from those of the original action (1) by an extra "matter" source, a pressureless dust fluid with four-velocity $u^\mu$ and density (5) satisfying the continuity equation (4). As it was suggested in [1], this dust can play the role of dark matter, whose imprint on the large scale structure of the Universe can survive till now, provided one includes a proper coupling of the scalar $\phi$ to the inflaton $\varphi$ in the matter Lagrangian.
The explanation of the paradox that a simple reparameterization of variables (2) can lead to extra new solutions of the equations of motion, differing from those of the original GR equations $G_{\mu\nu} = T_{\mu\nu}$, is as follows. The point is that the change of variables from the original ten components of $g^{\rm phys}_{\mu\nu}$ to the ten new metric coefficients $g_{\mu\nu}$ is not invertible even for fixed $\phi$ (for free $\phi$ the transformation $g^{\rm phys}_{\mu\nu} \to (g_{\mu\nu}, \phi)$ is of course not one to one, because this is a map from ten variables to eleven). The original physical metric in terms of the fundamental metric $g_{\mu\nu}$ is conformally invariant, so that the theory in terms of the new variables has local Weyl invariance with respect to the transformation (8) of the metric with an arbitrary function $\sigma(x)$. Therefore, it generates the identically traceless Eq.(3) and requires a procedure of conformal gauge fixing. A natural conformal gauge (9) can be chosen as the normalization condition $g^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi = -1$. Its advantage is that it immediately allows one to identify the fundamental metric with the physical one, $g_{\mu\nu} = g^{\rm phys}_{\mu\nu}$, and to remove the label "phys" in all the equations (3)-(5). This, actually, gives a reinterpretation of the kinematical relation in (7), which now becomes a gauge condition in the local gauge-invariant theory with the action $S[\,g^{\rm phys}_{\mu\nu}(g_{\mu\nu},\phi),\,\varphi\,]$. Thus, the model of [1] turns out to be a conformal extension of Einstein theory, which is locally Weyl invariant in terms of the fundamental metric field $g_{\mu\nu}$. Similar extensions of general relativity were repeatedly used for various purposes, including attempts to avoid conformal anomalies [2] or to embed the Einstein theory into Weyl invariant gravity [3]. However, in contrast to the conformal off-shell extension suggested in [2], which preserves Einstein theory on-shell and only modifies its off-shell effective action, here the Einstein theory is modified already at the classical level and acquires an extra degree of freedom of a pressureless perfect fluid. According to [1] this fluid can mimic the behavior of real cold dark matter.
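For orientation, a hedged LaTeX reconstruction of the parameterization and the Weyl transformation referred to here, inferred from [1] and the surrounding definitions (the overall sign convention is an assumption of this sketch):

\begin{align}
g^{\rm phys}_{\mu\nu} &= -\left(g^{\alpha\beta}\partial_\alpha\phi\,\partial_\beta\phi\right) g_{\mu\nu}
&& \text{(the parameterization (2))},\\
g_{\mu\nu} &\to e^{2\sigma(x)}\, g_{\mu\nu}, \qquad \phi \to \phi
&& \text{(the Weyl transformation (8))}.
\end{align}

Since $g^{\alpha\beta}\partial_\alpha\phi\,\partial_\beta\phi$ scales as $e^{-2\sigma}$, the physical metric is indeed invariant under (8), and imposing the gauge (9) reduces the prefactor to unity, giving $g^{\rm phys}_{\mu\nu} = g_{\mu\nu}$.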
A primary check on quantum consistency of this model is its stability with respect to possible ghost modes. This issue was not exhaustively considered in [1]. So here we show, by explicitly calculating the quadratic part of the action on a generic background, that the theory is free of ghosts whenever this background satisfies the positive energy condition $\varepsilon > 0$. For this reason we develop the Lagrangian and canonical formalism of this theory in the gauge (9), in which the latter emerges as one of the equations of motion. Then we show that the dynamical degree of freedom of the dark matter fluid is free of ghosts, though it can still suffer from caustic instabilities. Finally we suggest a Proca vector field model which can also describe rotational flows of dark matter (which are not available in the model of [1]). The Proca nature of the vector field protects it from ghost instabilities. For non-rotational configurations this model implies an algebraic relation between the dark matter density and the inflaton field, so that they both decay at the end of inflation and cannot simulate real cold dark matter. However, the rotational part of the vector field might mimic real dark matter and its adiabatic perturbations.
Mimetic dark matter and gauged out Weyl invariance
Local conformal invariance of the theory (1) implies a gauge fixing procedure which includes imposing a relevant conformal gauge $\chi(g_{\mu\nu}, \phi) = 0$ and adding, under quantization, the Faddeev-Popov ghost determinant ${\rm Det}\,Q$ to the path integral measure (here $Q$ is the Faddeev-Popov operator which determines the transformation of the gauge condition, $\Delta_\sigma\chi = Q\sigma$). Choosing as $\chi(g_{\mu\nu}, \phi)$ the left hand side of (9) and representing its delta function as an integral over the Lagrange multiplier $\varepsilon$, we get the path integral representation (10). Here, for brevity, we omitted the integration over the matter fields $\varphi$ and introduced the gauge fixed version (11) of the action (1), enforcing the conformal gauge via the Lagrange multiplier $\varepsilon$. Note that in view of the delta-function type gauge in the integrand of (10), the argument $g^{\rm phys}_{\mu\nu}(g_{\mu\nu}, \phi)$ of the original action can be replaced by the fundamental field $g_{\mu\nu}$.
Note also that the Faddeev-Popov gauge fixing procedure for diffeomorphism invariance is implicit in the canonical integration measure $D[\,g_{\mu\nu}, \phi\,]$. As far as the conformal invariance is concerned, its ghost determinant ${\rm Det}\,Q$ is trivial, because for the conformal gauge (9) the Weyl transformation (8) yields a unit operator. Thus, the gauged out theory is described by the action (11), where the matter Lagrangian $L = L(g_{\mu\nu}, \varphi, \partial\varphi, \phi)$ may include arbitrary matter fields $\varphi$ and their interaction with the scalar $\phi$. All its classical and quantum effects are described by the path integral generating functional (10) (with additional path integration over the matter fields $\varphi$). In what follows we consider this theory in the tree-level approximation and analyze its quantum consistency with regard to ghost instability.
Variation with respect to $\varepsilon$ reproduces the gauge condition (9), variation with respect to $\phi$ gives the continuity equation, and variation with respect to $g_{\mu\nu}$ gives the Einstein equations with the dust source; the trace of the last equation then determines $\varepsilon$, so that the system of equations, as expected, becomes equivalent to (3)-(5) with $g^{\rm phys}_{\mu\nu} = g_{\mu\nu}$. Thus, indeed, mimetic dark matter arises as a conformal extension of the Einstein theory, its density playing the role of the Lagrange multiplier for the gauged out Weyl invariance.
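A hedged LaTeX sketch of these variational equations, whose display forms were lost in extraction; they are reconstructed from the stated content of (3)-(5) and the trace convention $u_\mu u^\mu = -1$ rather than quoted:

\begin{align}
\delta_\varepsilon:&\quad g^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi = -1,\\
\delta_\phi:&\quad \nabla_\mu\!\left(\varepsilon\, g^{\mu\nu}\partial_\nu\phi\right) = 0,\\
\delta_{g}:&\quad G_{\mu\nu} = T_{\mu\nu} + \varepsilon\, u_\mu u_\nu, \qquad u_\mu = \partial_\mu\phi,\\
\text{trace}:&\quad \varepsilon = R + T.
\end{align}

The last line matches the statement in the Conclusions that the dark matter density is given by the sum of the scalar curvature and the trace of the matter stress tensor.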
The form of the action (11) clearly shows that the theory does not have higher-derivative ghosts. These could have been expected to arise in (1) under the conformal transformation (2), $g^{\rm phys}_{\mu\nu} = \Phi^2 g_{\mu\nu}$, in view of the well-known relation [2] for the conformal transformation of the curvature. The second term on its right hand side generates fourth-order derivatives in the equations of motion, but this term can be identically canceled by the conformal gauge breaking term $-6[\nabla_\mu(\Phi-1)]^2 = -6(\nabla_\mu\Phi)^2$ used instead of the delta-function type gauge in (10). This leaves us with the gauged out Lagrangian of second order in derivatives, $g^{1/2}\left(-g^{\alpha\beta}\partial_\alpha\phi\,\partial_\beta\phi\right) R(g_{\mu\nu})$, which, as one can easily show, again leads to the mimetic DM equations. However, it is not yet guaranteed that the extra degree of freedom comprised by the fields $\phi$ and $\varepsilon$ is free of ghost instabilities. The domain of their ghost stability is considered in the next section.
Absence of ghosts
Dynamical properties of the field $\phi$ and the Lagrange multiplier $\varepsilon$ follow from their canonical formalism. The latter is easily available on a flat-space background, which is sufficient for the analysis of the kinetic terms of all degrees of freedom [4]. We have the canonical momentum $p$ conjugated to $\phi$ and the corresponding Hamiltonian. Now the Lagrange multiplier can be excluded by the equation $\partial H/\partial\varepsilon = 0$ (contrary to the Lagrangian formalism, where this was impossible in view of the linearity of (11) in $\varepsilon$). With this $\varepsilon$ the Hamiltonian becomes linear in momentum, $H = p\sqrt{1+\phi_i^2}$, and the canonical action takes the form (21) given in [4]. The opposite sign of the square root in (20) leads to flipping the sign of the Hamiltonian, but the resulting action remains equivalent to (21) because of the trivial canonical transformation $(\phi, p) \to -(\phi, p)$.
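A minimal sketch of this flat-space canonical reduction, assuming the $(\phi, \varepsilon)$-part of the Lagrangian is $\tfrac{\varepsilon}{2}(\dot\phi^2 - \phi_i^2 - 1)$ with $\phi_i \equiv \partial_i\phi$ (the overall factor of $\tfrac12$ is an assumption of the sketch):

\begin{align}
p &= \frac{\partial L}{\partial\dot\phi} = \varepsilon\,\dot\phi, \qquad
H = p\,\dot\phi - L = \frac{p^2}{2\varepsilon} + \frac{\varepsilon}{2}\left(1 + \phi_i^2\right),\\
\frac{\partial H}{\partial\varepsilon} &= 0 \;\Rightarrow\;
\varepsilon = \frac{p}{\sqrt{1 + \phi_i^2}} \;\Rightarrow\;
H = p\sqrt{1 + \phi_i^2}.
\end{align}

The Hamilton equation $\dot\phi = \partial H/\partial p = \sqrt{1+\phi_i^2}$ then reproduces the norm condition $\dot\phi^2 - \phi_i^2 = 1$ quoted below.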
The equations of motion for the phase space variables $(\phi, p)$ then follow: the first one is obviously the norm of the 4-velocity (13), whereas the second is just the continuity equation (14) with $p = \varepsilon\dot\phi$.
If we repeat all these steps in curved spacetime, the final result for the canonical form of the $(\phi, \varepsilon)$-part of the action (11) involves the lapse and shift functions $(N, N^i)$, whose coefficients are respectively the canonical superhamiltonian and supermomenta of the scalar field. Ghost modes arise in the theory when the kinetic term of its Lagrangian action is not positive definite and can dynamically evolve to $-\infty$ for large time derivatives of the field. An obvious difficulty with (21) is that the canonical momentum $p$ cannot be excluded from the action by its variational equation. However, the Lagrangian form of the action can be obtained by switching the roles of momenta and coordinates. The configuration space coordinate $\phi$ can in principle be expressed from Eq.(23) in terms of $p$ and $\dot p$, and the sign of the kinetic term as a function of $\dot p$ would indicate the nature of this mode. An exact solution of (23) for $\phi$ is not available, so we shall have to analyze the situation in the linearized theory on a generic background.
Decomposing the dust variables $\phi \to \phi + \varphi$, $p \to p + \pi$, into their background values and perturbations $(\varphi, \pi)$, we obtain the quadratic part of the canonical action (25), where we integrated the symplectic term $\pi\dot\varphi$ by parts with respect to time, introduced the notation $v_i$ for the 3-dimensional velocity of the background dust, and recovered the background value of $\varepsilon$, cf. Eq.(18). In terms of it the linearized equation (23) can be solved for $\varphi$. Here, in view of the fact that the generalized Laplacian $\bar\Delta$ involves squares of all spatial derivatives, both parallel to the velocity vector ($\parallel$) and transversal to it ($\perp$), only in the ultrarelativistic limit $v^2 \to 1$ does it tend to the 2-dimensional Laplacian $\Delta_\perp$ acting in the plane transversal to $v_i$. Under appropriate boundary conditions it is invertible and gives a nonlocal in space expression for $\varphi$ in terms of $\pi$ and $\dot\pi$. Substituting it into (25) we obtain the Lagrangian action in terms of $\pi$ and $\dot\pi$, which is nonlocal in space but local in time. Under zero boundary conditions at infinity and positive $\varepsilon$ the operator $\bar\Delta$ is negative definite (and for short wavelength modes it is negative definite independently of boundary conditions at infrared infinity). Therefore the part of the Lagrangian quadratic in the momentum "velocities" $\dot\pi$ is positive, which implies that the theory is free of ghost instabilities for $\varepsilon > 0$. Physically this is a very natural criterion, which coincides with the positive energy condition for a dust fluid with the stress tensor $T_{\mu\nu} = \varepsilon\, u_\mu u_\nu$, $\varepsilon > 0$. Since $\varepsilon$ is expressed here in terms of the scalar curvature and the trace of the matter stress tensor, ghost stability imposes the bound $\varepsilon = R + T > 0$ and gives strong preference to dS-type backgrounds with a positive cosmological constant. Another type of instability, which is perhaps not so dangerous at the quantum level, is due to the formation of caustics. They are inevitable for a generic geodesic flow associated with the potential $\phi$ satisfying the Hamilton-Jacobi equation (9). The field "dust" moves along geodesics, the characteristic curves of this equation, and forms caustic singularities in view of its pressureless nature (see discussion of this phenomenon in the context of field models of dark matter [5] and also Horava and ghost condensation gravity models [6,7,4]). Eq.(9) and its geodesics are artifacts of conformal gauge fixing in our approach. However, this particular gauge has a distinguished status, because in this gauge the physical conformally invariant metric $g^{\rm phys}_{\mu\nu}$ coincides with the auxiliary metric $g_{\mu\nu}$ (in the original formulation of [1] this is the kinematical relation (7)). Therefore, this problem cannot be circumvented by an alternative conformal gauge fixing and remains a serious difficulty.
Vector field model of DM
In addition to the caustic problem, the model of [1] does not admit rotating dark matter, because of the potential flow of the 4-velocity $u_\mu = \partial_\mu\phi$. These limitations might perhaps be circumvented within a vector field model, with the physical metric parameterized by a dynamical vector field $u_\mu$. We may start with the action in terms of the physical metric which also contains the Maxwell kinetic term to make this vector field propagating. Here $\mu^2$ is a parameter of mass squared dimension and $L(g^{\rm phys}_{\mu\nu}, \varphi, \partial\varphi, u_\mu)$ is a matter Lagrangian containing some direct coupling of the vector field to matter, $\partial L/\partial u_\mu \neq 0$. The $F^2$-term provides a kinetic term for $u_\mu$ and guarantees the absence of ghosts among the components of this vector field.
This theory is obviously Weyl invariant by the same mechanism as in [1] and needs a conformal gauge. This gauge can be chosen in the form analogous to (9) (we consider the case of a timelike vector field). The equations of motion in this gauge read as those of general relativity with a matter source given by the "pressureless dust fluid" of the Proca vector field $u_\mu$, which has a non-uniform mass squared $m^2 = \varepsilon/\mu^2$ given by the density $\varepsilon$ of this fluid. Here the Proca kinetic term of $u_\mu$ guarantees the absence of ghosts. Similarly to Eq.(5) of [1], the dust density is given by the traces of the Einstein and matter stress tensors, and the conservation law for the dust fluid is obtained by differentiating (40). In view of the generalized Proca equation (40), $\varepsilon$ is algebraically expressed via $\varphi$ in terms of the coupling of $u_\mu$ to matter and the rotational component $\varepsilon_{\rm rot}$. The latter of course vanishes for a potential vector field with $F_{\mu\nu} = 0$ and $T^F_{\mu\nu} = 0$. Now consider the effect of the potential vector field $u^\mu = \delta^\mu_0$ in the homogeneous Friedmann cosmology driven by the inflaton field $\varphi$ with the Lagrangian $L(g_{\mu\nu}, \varphi, \partial\varphi, u_\mu) = -\tfrac12(\partial_\mu\varphi)^2 - V(\varphi) + L_{\rm int}(g_{\mu\nu}, \varphi, \partial\varphi, u_\mu)$.
The result depends on the choice of the interaction between $u_\mu$ and $\varphi$. The simplest non-derivative and derivative interactions can be organized so that the dust densities of the potential flow are expressed according to the first term of Eq.(44). The first of these interactions is not interesting from the viewpoint of inflationary dynamics, because in the gauge (38) it reduces to a simple modification of the inflaton potential by an extra term $F(\varphi)$. The second (derivative) type of interaction is more interesting. With the Friedmann metric $ds^2 = -dt^2 + a^2(t)\,d{\bf x}^2$ it generates a contribution to the total matter stress tensor and the dark matter density. In view of the relation $\varepsilon + T^{00}_{\rm int} = 0$, the Friedmann equation for the Hubble parameter $H \equiv \dot a/a$ (the 00-component of the gravitational equations (39)) remains unmodified by the vector field, whereas the equation for the inflaton acquires extra friction and rolling force terms (we recover the reduced Planck mass $M_P$, assumed to be one above). In the slow roll regime this allows one, without changing the known expression for the Hubble parameter, $H^2 \simeq V/3M_P^2$, to vary the duration of the inflationary stage by varying the magnitude and the sign of $F(\varphi)$. However, the potential vector field with $F_{\mu\nu} = 0$ cannot serve as dark matter, because its density is algebraically related to the inflaton (49) and completely decays simultaneously with the inflaton at the end of inflation (we assume that $F(\varphi) \to 0$ for $\varphi \to 0$). Yet, the role of dark matter can be played by the rotational part of the vector field, which survives the decay of $\varphi$ and $\partial L/\partial u_\mu$ in (44). In view of (43) it satisfies the usual conservation law $\nabla_\mu(\varepsilon_{\rm rot}\, u^\mu) = 0$. Under the natural assumption that the rotational part of $u_\mu$ is much smaller than its potential part $u^\mu_{(0)} \equiv \delta^\mu_0$, this equation reduces to $\partial_0(a^3\varepsilon_{\rm rot}) = 0$ and gives at the post-inflationary stage the typical dust evolution law $\varepsilon_{\rm rot}(t, {\bf x}) = C({\bf x})/a^3(t)$, with $C({\bf x})$ accounting for inhomogeneities of the inflaton field at the end of inflation. This might simulate adiabatic perturbations of real cold dark matter.
Conclusions
The mimetic dark matter model of [1] can be interpreted as a conformal extension of Einstein general relativity. Gauging out the local Weyl symmetry of this theory results in an extra degree of freedom describing a potential flow of dust, which can serve as a model of real cold dark matter. The role of the density of this dust is played by the Lagrange multiplier for the conformal gauge in the gauged out version of the theory. It is shown that for a positive energy density of this dust the theory is free from ghost instabilities, though it can suffer from the gravitational instability associated with caustic surfaces of the geodesic flow. The positive energy criterion gives strong preference to stable configurations with a positive scalar curvature and trace of the matter stress tensor, because in this theory the dark matter density is given by the sum of those. An analogous conformal extension of the Einstein theory is suggested in the form of a generalized Proca vector field, with the inhomogeneous mass parameter playing the role of the dark matter density. This model includes both potential and rotational flows of the pressureless perfect fluid. Depending on the coupling of the vector field to matter, the potential part of this flow can essentially modify the inflationary scenario, but cannot model present-day dark matter, because its density decays simultaneously with the inflaton field at the end of inflation. The rotational part of the vector field, however, might play the role of rotating dark matter and mimic its real adiabatic perturbations. In view of the optically inert nature of real cold dark matter, this model might not contradict the missing observational signatures for rotating dark matter halos.
"Physics"
] |
IoT System for Vital Signs Monitoring in Suspicious Cases of Covid-19
Currently the world is going through a pandemic caused by Covid-19, and the World Health Organization recommends staying isolated from other people. This research presents the development of an internet-of-things prototype that measures three very important vital signs: heart rate, blood oxygen saturation and body temperature. These are measured by sensors connected to a NodeMCU module with an integrated Wi-Fi module, which transmits the data to an IoT platform where they can be displayed, achieving real-time monitoring of the vital signs of a patient suspected of Covid-19.
Keywords—Covid-19; vital signs; internet of things; NodeMCU; IoT platform
The health system in Peru is currently going through a very serious situation: according to the latest reports, there are more than 1,102,000 positive cases and more than 39,800 deaths due to Covid-19, with a 3.62% lethality rate, only 11,200 hospitalized patients and 1,892 ICU beds, of which only 7 ventilators are available nationwide [3]. Hospitals and health centers do not have the resources to attend to all suspected cases and positive patients. The most recent study on human resources in the health sector indicates that in Peru there are 13.6 physicians for every 10,000 inhabitants, i.e. only about 1 physician per 1,000 patients, in addition to an inadequate distribution of medical personnel at the national level [4], making the healthcare system deficient and inadequate to deal with the increasing number of patients caused by Covid-19. From the aforementioned data, the following question arises: what happens to the people who test positive for Covid-19? Covid-19 cases are classified into five stages: asymptomatic, mild, moderate, severe and critical [5]; it is those in severe and critical condition that are treated in health centers. Once a patient has been diagnosed with Covid-19, he or she is obliged to remain isolated at home until the incubation and infection stage has passed, which can last between 12 and 15 days [6], in addition to maintaining distance from family members to reduce the likelihood of contagion.
A new question arises: what happens to patients who are isolated in their homes? They run the risk that the disease caused by Covid-19 worsens, and if they are not administered the necessary drugs they may die. To perform this follow-up they would normally have to be taken to the hospital, where they undergo various tests to measure heart rate, respiratory rate, blood oxygen saturation, blood pressure and body temperature, because these are the signs affected as Covid-19 develops in the body [7]. Instead, the health system carries out patient follow-ups through medical personnel who go to the homes of positive or suspected Covid-19 cases, where the lives of medical personnel are exposed to contracting the disease, in addition to the effort and expense generated by the process.
Given the current situation in Peru, many of the medical centers nationwide are full of patients, exceeding their capacity of care. Under these circumstances the medical centers cannot attend to patients adequately, so people have to opt for private health services such as clinics. However, low-income people cannot access this service, nor health services in general, nor a Covid-19 screening, having to spend the incubation stage in their homes in isolation. This increases the number of people vulnerable to contracting Covid-19 and exposes the family of the infected or suspected case if the necessary measures are not taken, such as isolating the infected person, keeping a distance of at least 2 meters and controlling symptoms on a daily basis. Faced with this situation, it is necessary to resort to innovative ideas for the solution of the different problems that this pandemic has generated in society.
As we know, the internet of things has been applied in the health sector, in what is called telemedicine [8], but it can also be applied in homes for medical and health purposes. That is why an internet-of-things system is developed here to monitor vital signs in patients or suspected cases of Covid-19, with the help of different specialized sensors.
The internet of things (IoT) is the interconnection of devices (sensors and actuators) or objects (everyday objects with internet access) through a network, in order to communicate and transfer information without the need for human intervention; this is called machine-to-machine (M2M) communication. For the development of an IoT system, protocols, communication technologies, domains and applications are established [9]. The proposed IoT system aims to measure certain vital signs in order to provide prompt help in case of any drastic change in health, reducing the effort of medical staff [10] and sparing the patient the stress produced when a person is hospitalized. It likewise reduces stress in medical personnel: according to a study done in China on 1,257 health workers, 50% began to feel symptoms of depression and more than 70% presented symptoms of psychological distress [11], indicating a high risk for those who face this pandemic in the front line. With this proposed solution, the time to obtain vital signs, the time of medical care for home visits and the response time to an anomaly in the vital signs are all reduced.
A. Internet of Things (IoT)
Also known as IoT, it is the interconnection of devices, objects or things (electrical appliances, modules, machines, devices and more) through the internet to communicate and exchange data [12].
To make the development of this technology possible, an integral set of technologies is needed, such as APIs, which connect the different devices to the internet, in addition to the use of standards and IoT platforms on which the connected devices are visualized.
B. Arduino IDE
It is the open source Arduino software, which makes it easy to write, compile, load and run code. This software is a text editor and compiler at the same time; it serves to program and to transfer the code to the Arduino board, but it is also compatible with many other modules. Note that this software works with a programming language based on Processing and can be installed on operating systems such as Windows, Mac and Linux [13].
C. NodeMCU ESP8266
It is a development board belonging to the NodeMCU family, with fully open source software and hardware. This board allows several devices to connect with each other through the internet, thanks to its built-in ESP8266 Wi-Fi module; this chip is also compatible with TCP/IP, making it one of the easiest and fastest ways to develop IoT projects [14].
D. Pulse Oximeter Sensor MAX30102
A very compact, non-invasive sensor that can measure the level of oxygen saturation in hemoglobin (SpO2) through an LED circuit and a photodetector capable of measuring the amount of light reflected through the finger. There are differences between the reflection produced by oxygen-loaded blood and by deoxygenated blood: oxygenated blood tends to absorb more infrared light, while deoxygenated blood absorbs more red light [15].
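This differential absorption is usually converted into a saturation estimate with the empirical "ratio of ratios" method; the relation below is a standard textbook sketch rather than a quotation from this paper, and the linear calibration coefficients are device-dependent assumptions:

\[
R \;=\; \frac{AC_{\rm red}/DC_{\rm red}}{AC_{\rm ir}/DC_{\rm ir}},
\qquad
\mathrm{SpO_2} \;\approx\; 110 - 25\,R,
\]

where $AC$ and $DC$ denote the pulsatile and steady components of the red and infrared photodetector signals.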
E. Sensor LM35
It is a temperature sensor with a good accuracy index and a very low cost, with a working range between -55°C and 150°C. It has an analog output with its respective power pins and an accuracy of 0.5°C, making it easy to use in a variety of applications [16].
III. BACKGROUND
In recent years, medicine has made great advances, developing a variety of technologies for health monitoring. In [17], the importance of developing portable biomedical sensors to facilitate the remote monitoring of patients is discussed, focusing on the measurement of heart rate and body temperature for different conditions presented by the patient; the data are sent to a doctor through a Zigbee network.
The investigation in [18] presents a monitoring system developed for the measurement of cardiac pulse and oxygen saturation, focused on preventing and monitoring different diseases. It consists of an oximetry sensor and a Nellcor DS-100 sensor in charge of detecting the signs and their variations, which are then sent to a mobile application that processes the data, issues an alarm if necessary and visualizes the data.
The investigation in [19] shows a prototype system for monitoring vital signs, including body temperature, heart rate and oxygen desaturation, using an LM35 sensor, a Pulse Sensor Amped and an Arduino board, which allow detecting, processing and sending the data to a mobile application on a cell phone; with a monitor, the graphs of the heart rate can be viewed.
The investigation in [20] presents a system to monitor the patient's oxygen desaturation remotely, taking into account the anomalies that can cause oxygen desaturation in a patient; permanent monitoring of this sign is therefore considered, to improve the diagnostic process for a patient who is at home. This is achieved through different electronic components and a Wi-Fi module; in the end the data are displayed on a local host by the doctor.
All the works mentioned stress the importance of monitoring vital signs for different types of diseases. The research work developed here focuses on the monitoring of Covid-19, establishing the different severity levels of the relevant signs and using the minimum number of sensors needed to capture the signs that vary in this disease. In addition, it provides a web system in which the doctor can view statistics and graphs of the different vital signs and send alerts to an email or a mobile device. In this way, this research unifies different technological aspects of the background works and seeks better alternatives focused on monitoring patients suspected of Covid-19.
IV. METHODOLOGY
For the development of this project, the V methodology is used. This methodology is used for the development and management of ICT projects and systems, especially software development for ICT components. The reason this methodology is used is that it is very easy to apply to this research; it has 6 phases and focuses on quality management procedures, because each level has an opposite side that performs the tests, thereby reducing the risk that the project or product goes wrong.
The V methodology, or V model, has 4 levels with a parallel verification phase, referring to the shape of the model, as it pairs the development phases with their respective quality controls. Each phase describes the activities performed and the results produced throughout development: on the left side are the specification phases, containing the design and development tasks of the system, while on the right side are the testing phases, which contain the control measures of each phase, such as unit tests and integration tests [21].
V. DEVELOPMENT
The IoT system that was developed can perform three measurements on a patient: heart rate, oxygen saturation and body temperature, these being the vital signs affected by the Covid-19 disease, which presents with various symptoms such as fever and cough, among others. The data for these vital signs are obtained through two sensors: the MAX30102 sensor, with which the heart rate and the oxygen saturation in the blood are obtained, and the LM35 sensor for measuring body temperature.
A. Phase 1: Specifications
At this stage the appropriate sensors and modules are chosen. Among the sensors is the MAX30102, used to measure heart rate and oxygen saturation through a red and infrared LED circuit: both lights are reflected through the finger, a photodiode captures the reflected light, and the oxygen saturation is calculated from it. It can also measure heart rate while the finger is held on the sensor; after a series of operations, such as calculating the average per minute, a specific number is obtained, this being the number of beats per minute [22]. The other sensor used is the LM35, which can measure temperature with high assertiveness, having an accuracy of 0.5°C.
We also use the NodeMCU board, which has a microcontroller that is programmed through the Arduino IDE. The important thing about this board is that it has a Wi-Fi module; this allows a wireless connection to the internet, letting us send or receive data over the internet. Table I shows the functional requirements of the prototype, and Table II shows the non-functional requirements of the prototype to be developed.
B. Phase 2: Overall Design
The system design shows how the web and those involved interact, visualizing the path of the vital-signs data. It starts when the patient wears the prototype, which is connected to the WiFi network. The NodeMCU board is the brain of the system thanks to its microcontroller; the LM35 sensor for temperature and the MAX30102 sensor for heart rate and oxygen saturation are connected to this module. The data then travel over the internet to the Ubidots platform and from there to the web application.
All of the above is reflected in the system architecture, as shown in Fig. 1. Fig. 2 shows the circuit design made with the Fritzing software, in which the connections between the NodeMCU board and the MAX30102 and LM35 sensors can be observed.

TABLE I. FUNCTIONAL REQUIREMENTS OF THE PROTOTYPE
RFP1: The prototype must connect to the home WiFi network automatically.
RFP2: The prototype must be connected to the Ubidots platform.
RFP3: The prototype must analyze serial communication data from the MAX30102 sensor to obtain heart rate and oxygen saturation data.
RFP4: The prototype must analyze serial communication data from the LM35 sensor to obtain the temperature data.

TABLE II. NON-FUNCTIONAL REQUIREMENTS OF THE PROTOTYPE
RNFP1: The prototype must be connected to the MAX30102 sensor.
RNFP2: The prototype must be connected to the LM35 sensor.
RNFP3: The prototype must integrate a NodeMCU board.
RNFP4: The prototype must be connected to a battery for portability.
RNFP5: The prototype must not be invasive to the user.
RNFP6: The prototype should start operating when connected to a power source.

1) Construction of the prototype: The LM35 temperature sensor is very small and has three pins for connection. A typical jumper cable does not grip the LM35 pins firmly, and when the sensor comes into contact with the skin the loose connection alters the captured data.
It is therefore necessary to solder the pins and cover them with thermofit tubing, as shown in Fig. 3, where the LM35 sensor pins are completely covered.
2) LM35 sensor programming: The LM35 sensor is programmed to obtain the body temperature data. In addition to establishing the Wi-Fi connection, the HTTP protocol must be configured to send the data to the Ubidots platform; part of the code is shown in Fig. 4.
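The paper shows this code only as a screenshot (Fig. 4). As a rough illustration of the same logic, the sketch below uses MicroPython, which the NodeMCU's ESP8266 also runs; the actual prototype is programmed in C++ through the Arduino IDE. The Wi-Fi credentials, the Ubidots token and endpoint, and the assumption that the NodeMCU's onboard divider maps A0 onto 0-3.3 V are all placeholders, not values taken from the paper.

```python
# Minimal MicroPython sketch (ESP8266/NodeMCU) approximating the logic of Fig. 4.
# Assumptions: the NodeMCU divider maps A0 so that readings 0-1023 span ~0-3.3 V,
# the LM35 outputs 10 mV per degree C, and Ubidots accepts the HTTP POST below.
import time
import network
import urequests
from machine import ADC

WIFI_SSID = "home-network"            # placeholder credentials
WIFI_PASSWORD = "secret"
UBIDOTS_TOKEN = "YOUR-UBIDOTS-TOKEN"  # hypothetical token
DEVICE_URL = "https://industrial.api.ubidots.com/api/v1.6/devices/covid-monitor"

def connect_wifi():
    sta = network.WLAN(network.STA_IF)
    sta.active(True)
    if not sta.isconnected():
        sta.connect(WIFI_SSID, WIFI_PASSWORD)
        while not sta.isconnected():
            time.sleep(0.5)

def read_temperature_c(adc):
    # 10-bit reading -> volts -> degrees C (LM35: 10 mV per degree C).
    volts = adc.read() / 1023 * 3.3
    return volts * 100.0

connect_wifi()
adc = ADC(0)                          # LM35 output wired to pin A0
while True:
    payload = {"temperature": round(read_temperature_c(adc), 1)}
    urequests.post(DEVICE_URL,
                   json=payload,
                   headers={"X-Auth-Token": UBIDOTS_TOKEN}).close()
    time.sleep(10)                    # send a sample every 10 seconds
```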
3) MAX30102 sensor programming: The MAX30102 sensor is programmed to obtain the data related to the oxygen saturation and heart rate signals. Fig. 5 shows part of the code, specifically the void loop, where the data that will be printed on the serial monitor during execution can be seen.
E. Phase 5: Verification
This phase focuses on verifying the hardware and software modules individually, to check that they are working properly.
1) MAX30102 sensor test:
After programming the MAX30102 sensor, the necessary test is performed, where the oxygen saturation data captured by the sensor is observed through the serial monitor, as shown in Fig. 6.
2) LM35 sensor test: After programming the LM35 sensor, we proceed to perform the necessary tests. Fig. 7 shows the patient data captured by the sensor, printed on the serial monitor; the values must be stable so that they are automatically sent to the Ubidots platform.
F. Phase 6: Integration
In this phase, the sensors are integrated into the same code in the Arduino IDE, and the data are obtained simultaneously from the NodeMCU board, so that they can be sent to the Ubidots platform and then through the web system, thus performing the respective monitoring of the patient.
1) Connection of the prototype:
We proceed to connect all the system components: the NodeMCU board, the MAX30102 sensor and the LM35 temperature sensor. Fig. 8 shows the prototype with all the components correctly connected.
2) Unified prototype programming: After making the prototype connections, integrating the NodeMCU board with the MAX30102 and LM35 sensors, we proceed with the necessary programming. Fig. 9 shows the libraries and variables set up to operate both sensors, as well as the credentials used to establish the Wi-Fi connection and internet access. Fig. 10 shows some operations in the void loop needed for sending data to the Ubidots platform; the data are captured and sent in a specific order and include an identifier for the prototype.
3) Prototype testing:
The test is performed with real patient values, comparing the printed values with those obtained using specialized instruments. Fig. 11 shows the data printed on the serial monitor; the data are captured by the sensors and sent to the Ubidots platform. A condition was also established so that no data are captured when the patient's finger is not on the MAX30102 sensor, thus avoiding the sending of invalid data.
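A minimal sketch of such a guard is shown below; the paper does not give its condition explicitly, so the infrared no-finger threshold and the plausibility ranges are assumptions (the IR cutoff is a value commonly used with MAX3010x drivers).

```python
# Hypothetical reading-validation guard: publish only plausible vital signs.
# The IR threshold (50000) is a typical no-finger cutoff for MAX3010x sensors,
# assumed here rather than taken from the paper.
IR_NO_FINGER_THRESHOLD = 50000

def reading_is_valid(ir_value, spo2, bpm, temperature_c):
    if ir_value < IR_NO_FINGER_THRESHOLD:    # finger not on the sensor
        return False
    if not (70 <= spo2 <= 100):              # plausible oxygen saturation (%)
        return False
    if not (30 <= bpm <= 220):               # plausible heart rate (beats/min)
        return False
    if not (30.0 <= temperature_c <= 45.0):  # plausible body temperature (C)
        return False
    return True
```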
Fig. 12 shows the Ubidots platform with the data sent by the prototype.
The data from the Ubidots platform are sent to the developed web system, where the patient chosen for the test is selected and the respective values of the patient's vital signs are displayed, as shown in Fig. 13.
The patient data in the web system can also be viewed through graphs, as shown in Fig. 14.

VI. RESULTS
Different tests were performed to test the hypothesis that the use of a prototype system based on the Internet of Things improves the process of monitoring vital signs in suspected cases of Covid-19, considering the following indicators. Indicator 1: the use of a prototype system based on the Internet of Things reduces the time required to obtain vital signs.
This test shows that taking the measurements of the patient's vital signs with everyday instruments, which requires traveling to the patient's home, takes an average of 29.5 minutes, whereas using the developed prototype and taking the measurements remotely takes an average of 4.6 minutes, as shown in Fig. 15.
Indicator 2: The use of a prototype system based on the Internet of Things reduces the medical care time.
This test shows that the time in which the patient receives medical care, including results, recommendations, and prescription, is reduced: this process takes on average about 28.3 minutes, whereas using the developed prototype for remote medical care takes an average of 12.4 minutes, as shown in Fig. 16. For the third indicator, the test measures the time needed to alert the doctor to any change in the patient's signs so that the necessary actions can be taken: this process takes on average about 2528.5 seconds, whereas the developed prototype, which can issue a notification, message, or call, communicates the patient's condition in an average of 18.8 seconds, as shown in Fig. 17.
VII. CONCLUSIONS
In conclusion, this system is of great help to patients or suspected cases of Covid-19 who are at home: it monitors their vital signs and detects whether the disease is worsening, so that a prompt solution and appropriate assistance can be provided before their condition deteriorates.
It will also help medical personnel by reducing their exposure to Covid-19-positive cases and the associated risk of infection, eliminating the face-to-face follow-ups currently performed on positive patients isolated at home. While medical staff will always be needed in hospitals and clinics, this system will help reduce the number of patients presenting at hospitals, letting staff focus on severe cases and reducing, to some extent, their stress and fear of exposure to so many patients.
An Internet of Things prototype built with the NodeMCU board is well suited to monitoring vital signs: it sends data to the internet quickly and securely using any of several available communication protocols.
A vital-signs monitoring system can be built with a minimal number of sensors and at low cost, since the sensors and boards used in this project are among the cheapest on the market, and example code for their operation is freely available on the internet.
"Computer Science"
] |
An Overlapping Communities Detection Algorithm via Maxing Modularity in Opportunistic Networks
Community detection in opportunistic networks has been a significant and hot issue, used to understand the characteristics of a network by analyzing its structure. A community represents a group of nodes in a network with more internal connections than external connections. However, most existing community detection algorithms focus on binary networks or disjoint community detection. In this paper, we propose a novel algorithm that finds overlapping community structure in opportunistic networks by maxing the modularity of communities (MMC). It uses the contact history of nodes to calculate the relation intensity between nodes, takes nodes with high relation intensity as the initial community, and extends the community with nodes of higher belonging degree. The algorithm achieves rapid and efficient overlapping community detection by continuously maxing the modularity of the community. Experiments prove that MMC is effective for uncovering overlapping communities and achieves better performance than COPRA and Conductance.
Introduction
Opportunistic networks [1] are special networks in which nodes contact each other opportunistically to forward information. Due to unpredictable node mobility and the absence of any fixed infrastructure, an end-to-end path does not exist in most situations. In contrast to the store-and-forward manner of traditional networks, information is forwarded in a store-carry-and-forward manner, so applications need to tolerate long delays in opportunistic networks. For example, people can use portable intelligent devices with short-range wireless communication capability (e.g. Bluetooth, WiFi) and some computing power to store and forward information, which makes forwarding more convenient and easier without network infrastructure.
Community detection in opportunistic networks has become a significant and hot issue, used to understand the characteristics of a network by analyzing its structure. A community represents a group of nodes in an opportunistic network whose internal connections are denser than their external connections. Community detection can help us uncover and understand local community structure in both offline mobile trace analysis and online applications, and it helps decrease forwarding time as well as the storage requirements of nodes. Since the relationships between nodes are usually stable and less volatile than node mobility, forwarding schemes based on community [2-6] outperform traditional approaches [7, 8]. Overlapping community detection, one of the most interesting branches of community detection, is the primary focus of this paper. Overlapping community means that a node may participate in more than one community in the network. Furthermore, most real-world networks exhibit overlapping communities, such as social, information and biological networks. For example, if we divide communities according to people's interests, a person may belong to multiple communities: people who like sports may also have interests in music, and others in cooking.
The rest of this paper is organized as follows. Section 2 introduces related work in community detection. Section 3 presents our community detection algorithm for opportunistic networks. Section 4 evaluates the performance of our proposed algorithm against COPRA and Conductance. Section 5 concludes the paper and states future directions for the field.
Related Work
Several community detection algorithms have been proposed for opportunistic networks. In this section, we divide the community detection algorithms into three categories: modularity-based, label propagation-based and attribute-based.
Among modularity-based detection algorithms, an early scheme is the GN algorithm [9]. Girvan and Newman proposed edge betweenness, the number of shortest paths in which a given edge is included; in this method, the edges with the highest edge betweenness score are removed at every step. However, the method needs to recalculate the edge betweenness score of all edges after each removal; it is computationally intensive and suffers from scalability problems. Additionally, Newman et al. [10] proposed a bottom-up hierarchical approach that optimizes the modularity score in a greedy manner: initially, every node is a community, and communities are merged iteratively based on the optimal modularity score until there is no further increase in modularity. In [4], Pan Hui et al. proposed a detection algorithm based on cliques and modularity. In addition, CPM was proposed in [11], based on the assumption that a community consists of all k-cliques that can be reached from each other through a series of adjacent k-cliques; a k-clique is a fully connected subgraph, and two k-cliques are said to be adjacent if they share k-1 nodes. However, CPM does not consider all the characteristics of links, such as connection time or connection frequency; the value of k is also hard to determine, and the method is more suitable for networks with densely connected parts.
Among label propagation-based detection algorithms, the typical algorithm is the Label Propagation Algorithm (LPA) [12], in which each node in the network is assigned a label and updates it according to the most frequent label in its neighborhood. This method is faster than others but produces different results each time depending on the initial configuration, so one needs to run the algorithm several times to build a consensus, which is time-consuming. SLPA [13] is an extension of LPA in which each node has a memory and considers information observed in the past to make the current decision. COPRA [14] can achieve good performance in some cases, but it limits the number of communities for each node, which decreases accuracy whenever ν is too big or too small. In [15], the authors propose a balanced multi-label propagation algorithm (BMLPA) for overlapping community detection; compared to COPRA, the advantage of this strategy is that it allows nodes to belong to any number of communities without a global limit ν.
In [16], researchers present an Interest Community Routing (ICR) algorithm founded on social network theory. The authors define an interest metric and a message header to represent individual interests and data types in the network; by comparing the similarity between the message header and a node's interest metric, the node is put into the corresponding interest community. However, interests are not stable, and a person may have many kinds of interests in reality, so this approach is not always applicable.
These methods only detect disjoint communities, are applied in networks where nodes are in frequent contact, or need to be run many times to reach a stable status; they are therefore not sufficient to process opportunistic networks with overlapping communities effectively. In this paper, we first calculate the relation intensity between nodes in an opportunistic network, which solves the problem of fuzzy community structure in binary networks, where multiple contacts and a single contact are both treated simply as having had contact. We take nodes with higher relation intensity as the initial community. Finally, we extend the community with nodes of higher belonging degree, so that the modularity of the community increases continuously while overlapping communities are detected.
MMC Algorithm
We aggregate node mobility traces into weighted contact graphs. The vertices of the graphs are nodes, the edges are relationships, and the weight of an edge is computed from the number of contacts and the duration of each contact between two nodes. As in social networks, the more often two nodes contact each other, the more familiar they are, and the more time they spend together, the closer they are. For simplicity, we use contact graphs G = (V, E), where V is the set of nodes and E = {(v, w)} is the set of edges. We use the adjacency matrix A to represent the edges in E and A_vw to denote the weight of the edge between two nodes v, w ∈ V.
Relation Intensity
In order to present the contact information between two nodes and make further processing easier, we simplify the presentation of the contact record in [19]. For each peer w we store a three-tuple (w, n_v^w, d_v^w) in G_v, the contact history of node v, where n_v^w is the number of contacts and d_v^w the total contact duration between v and w. From it we obtain the following quantities for node v and any other node w:

fre_v^w = n_v^w / Σ_{u∈V} n_v^u,   (1)

int_v^w = d_v^w / Σ_{u∈V} d_v^u,   (2)

where int_v^w is the fraction of time that node v spends with node w over the time that node v spends with all nodes. The relation intensity between node v and node w is then given by Eq. (3):

RI_v^w = α · fre_v^w + β · int_v^w,   (3)

where α and β are the weighting factors. The values of α and β can be changed case by case, considering the proportion between the number of contacts and the duration of each contact.
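Since the equations are only partially legible in this copy, the following sketch fixes one concrete reading of Eqs. (1)-(3): contact-count and contact-duration fractions combined with the weights α and β.

```python
# Sketch of the relation-intensity computation under one reading of Eqs. (1)-(3):
# RI(v, w) = alpha * (contacts of v with w / all contacts of v)
#          + beta  * (time v spent with w / time v spent with all nodes).
def relation_intensity(history_v, w, alpha=0.5, beta=0.5):
    """history_v: list of (peer, n_contacts, total_duration) tuples for node v."""
    total_contacts = sum(n for _, n, _ in history_v)
    total_duration = sum(d for _, _, d in history_v)
    n_vw = sum(n for peer, n, _ in history_v if peer == w)
    d_vw = sum(d for peer, _, d in history_v if peer == w)
    freq = n_vw / total_contacts if total_contacts else 0.0
    dur = d_vw / total_duration if total_duration else 0.0
    return alpha * freq + beta * dur
```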
Modularity
In this paper, we use Newman's weighted modularity proposed in [4] as a measurement of the quality of the detected community structure. The fitness value is given by the following definition of modularity Q:

Q = (1/2m) Σ_{vw} [ A_vw − k_v k_w / (2m) ] δ(c_v, c_w),   (4)

where m = (1/2) Σ_{vw} A_vw is the total edge weight, k_v = Σ_w A_vw is the strength of node v, and δ(c_v, c_w) = 1 when v and w belong to the same community. From formula (4), Q is the fraction of edge weight that falls within communities minus the fraction that would be expected if the edges were assigned randomly while keeping the degrees of the vertices unchanged. Generally, a greater Q means a clearer community structure; conversely, a smaller Q means a more ambiguous one.
Belonging Degree
During the detection of the community structure, the belonging degree proposed in [17] between a node v and a community C is defined as

B(v, C) = Σ_{w∈C} A_vw / Σ_{w∈V} A_vw.   (5)

If all neighbors of a node v are included in community C, then B(v, C) = 1.
The Community Detection Algorithm
Our detection algorithm works as follows. Step 1: After a warm-up period t, each node begins to calculate the relation intensity between itself and the other nodes according to the contact history. We obtain a weighted contact graph.
Step 2: Sort the edges of the contact graph by weight. Then choose the two nodes with the highest relation intensity as a new community C and calculate its modularity according to Eq. (4).
Step 3: Expand C: put the neighbors of the nodes in C into N_C and sort them by belonging degree. Add the node with the highest belonging degree to C to form a new community C′, and calculate Q of C′. Two situations must then be handled: if Q increases, repeat Step 3 and continue expanding C′; otherwise, the expansion of C is finished. Meanwhile, the edges within community C are removed from the edge set E.
Step 4: Repeat Steps 2 and 3 until E is empty. For completeness, the pseudo code of the detection algorithm is shown below.
Input: traces of nodes and the node set V. Output: the communities of the nodes. According to formula (4), if Q increases when node u is added to community C, an inequality is obtained; from formula (7) one derives formula (8), and node u is added into community C whenever condition (9) holds. We use an example to illustrate the expanding process: Fig. 1 shows an example of overlapping community detection using our detection algorithm, in which the two detected communities are shown in different circles.
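A compact sketch of the whole procedure is given below. It recomputes Q directly instead of using the closed-form gain test (9), whose exact statement is not legible in this copy, and the treatment of overlapping communities in the modularity sum is a simplification.

```python
# Simplified sketch of the MMC expansion loop on a weighted contact graph.
# graph: dict mapping node -> dict of neighbor -> relation intensity (symmetric,
# with hashable, orderable node labels).

def weighted_modularity(graph, communities):
    two_m = sum(w for nbrs in graph.values() for w in nbrs.values())  # = 2m
    strength = {v: sum(nbrs.values()) for v, nbrs in graph.items()}
    q = 0.0
    for community in communities:
        for v in community:
            for w in community:
                q += graph[v].get(w, 0.0) - strength[v] * strength[w] / two_m
    return q / two_m

def belonging_degree(graph, v, community):
    total = sum(graph[v].values())
    inside = sum(w for u, w in graph[v].items() if u in community)
    return inside / total if total else 0.0

def mmc(graph):
    edges = sorted(((w, u, v) for u in graph for v, w in graph[u].items() if u < v),
                   reverse=True)                 # heaviest edges first
    remaining = [(u, v) for _, u, v in edges]
    communities = []
    while remaining:
        u, v = remaining[0]                      # heaviest remaining edge
        community = {u, v}
        while True:
            candidates = {n for c in community for n in graph[c]} - community
            if not candidates:
                break
            best = max(candidates,
                       key=lambda n: belonging_degree(graph, n, community))
            old_q = weighted_modularity(graph, communities + [community])
            new_q = weighted_modularity(graph, communities + [community | {best}])
            if new_q > old_q:
                community.add(best)              # expansion continues
            else:
                break                            # expansion of this community ends
        communities.append(community)
        remaining = [(a, b) for a, b in remaining
                     if not (a in community and b in community)]
    return communities
```

Because only the edges inside a finished community are removed (not the nodes), a node can later be absorbed into another community, which is how overlaps arise.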
Experiments
In this section, we present some representative numerical results to validate the effectiveness of MMC compared with two other schemes, Conductance [17] and COPRA [14], both of which can detect overlapping communities.
We use the LFR benchmark [18] proposed by Lancichinetti et al. to test overlapping community detection algorithms. LFR is a well-known benchmark that can generate synthetic unweighted or weighted networks. It provides power-law distributions of node degree and community size and allows overlaps between communities. Many parameters can be set to control the generated network, such as the fraction of nodes that belong to more than one community and the number of communities a node belongs to simultaneously. The parameter settings we use are shown in Table 1, with the following meanings: N is the number of nodes, and on is the fraction of overlapping nodes; mut and muw denote the mixing parameter for topology and for edge weights, respectively; β is the exponent of the weight distribution; k and kmax represent the average and the maximum node degree, respectively. We use the settings mut = muw = 0.1 and mut = muw = 0.3, with N = 1000 and 4000, and the overlapping fraction is varied from 0 to 0.5 for each setting to evaluate the performance of the algorithms. Figs. 2 to 5 show the experimental results of the three algorithms: COPRA, Conductance, and MMC. Figs. 2 and 3 show the modularity of the community structure detected by the three algorithms for mut = muw = 0.1 and different numbers of nodes. As shown, the modularity of all algorithms is high, possibly because the network structure is not very complex; for both COPRA and Conductance the modularity is lower than for MMC. In addition, the modularity of COPRA decreases dramatically as the overlapping fraction increases in all settings. Figs. 4 and 5 show the modularity detected by the three algorithms for mut = muw = 0.3 and different numbers of nodes. When mut and muw are higher and the number of nodes does not change, the modularity of all algorithms is lower, and the performance of COPRA still worsens as the overlapping fraction increases.
In real situations, it is hard to determine the global limit ν, which has a great influence on the performance of COPRA. MMC, however, is stable as the overlapping fraction increases. Thus, we conclude that MMC has the ability to detect overlapping communities.
Conclusions
In this paper, we propose an overlapping community detection algorithm for opportunistic networks. First, we introduce relation intensity to measure the relationships among nodes. Then we design a new detection algorithm based on maxing the modularity of communities. Simulation results on synthetic networks show that our community detection algorithm is much better than other algorithms in terms of modularity. In the future, we will do more work on validation in different environments and will evaluate this algorithm on large-scale networks coming from real networks.
"Computer Science"
] |
Electrocardiogram Fiducial Points Detection and Estimation Methodology for Automatic Diagnose
René Yáñez de la Rivera, Moisés Soto-Bajo and Andrés Fraguela-Collar
Facultad de Ciencias Físico-Matemáticas, Benemérita Universidad Autónoma de Puebla, Avenida San Claudio y 18 Sur, Colonia San Manuel, 72570 Puebla, México; Cátedras CONACYT, Benemérita Universidad Autónoma de Puebla
INTRODUCTION
Electrocardiographic (ECG) signals are a main tool in medicine, since ECG analysis is a routine part of any complete medical evaluation. This is due to the fundamental role the heart plays in human health, and because the ECG provides a noninvasive and relatively easy way of knowing how the heart is working [1-8].
Furthermore, ECG and EEG (electroencephalographic) signals have recently proven to be appropriate tools in fields such as security, privacy, communication networks and psychology, in which biometric methods play a role [9-16].
In this context, the estimation of the fiducial points of ECG signals is basic for feature extraction and, subsequently, for ECG interpretation. Thus, algorithms and techniques that accomplish this task accurately are especially important in designing automatic analysis and diagnosis tools.
Many such methodologies have been developed in recent decades, and this remains an active area of research with multiple challenges still to overcome [1-4, 17-69]. The current work is a contribution to this goal.
Before exposing our method, let us describe the basic features of an ECG. Fig. (1) shows a typical cycle of an ECG with normal sinus rhythm, with the P, Q, R, S and T waves. In this text, the starting and ending points of the P and T waves are labeled P_i, P_f, T_i and T_f, and their maxima/minima P_m and T_m, respectively. The starting point of the QRS complex is labeled Q_i, and its ending point J, known as the J point. Also, the maxima/minima of the Q, R and S waves are labeled Q_m, R_m and S_m, respectively. Note that, because of their inherent complexity, there are no rigorous definitions of these concepts. In addition, the piece of the signal between two consecutive R_m points is known as the RR interval. Furthermore, the piece of the signal between P_i and the following Q_i point is known as the PQ (or PR) interval, and the piece of the signal between Q_i and the following T_f point is known as the QT interval. Analogously, the piece of the signal between the J point and the following T_i point is known as the ST segment, and the piece of the signal between P_f and the following Q_i point is known as the PQ segment [2].
Table 1 shows the normal values of the main ECG features of a typical lead II in sinus rhythm at a heart rate of 60 bpm for a healthy male adult [2, 23, 70]. We will use these values for testing our methodology by simulating it in practical examples. In the clinical evaluation of an ECG, physicians currently focus on the following main features [71, 72]:
1.1 Measurement of Cardiac Frequency: It consists of measuring the number of cycles or heartbeats per minute.
Heart Rate Analysis:
The cardiac frequency should be almost constant in a sinus rhythm, when the sinoatrial node acts as the natural pacemaker.In the ECG, this is basically characterized by the fact that each QRS complex is preceded by a P wave.
Measurement of PR Interval:
The PR interval measures the time required for the electrical impulse to travel from the sinoatrial node to the ventricles. In a healthy individual, its length is between 120 ms and 200 ms. It is useful for evaluating signal conduction in the atria and can help to identify atrial blocks.
Heart Vector Estimation:
It is computed from I and III (or I and aVF) leads, and it gives information about blocks and hypertrophies.
Measurement of QT Interval:
It gives information about the depolarization and repolarization processes of the ventricles, and it is related to some abnormalities known as QT syndromes.
1.6 Width of QRS Complex: It represents the time in which the ventricles depolarize, estimated between 80 ms and 120 ms. It is useful for evaluating problems in the conduction system, such as blocks.
ST Segment:
Its width is measured, as well as any possible elevation or depression. It is related to ischemic processes, infarcts and other specific diseases, such as the Brugada syndrome.
8. Special Features of P, Q, R, S and T Waves: Some morphologies are connected to different pathologies.
Note that the diagnosis of ECG signals is beyond the scope of the current work, so the previous comments are merely indicative. Here we present a methodology for ECG analysis, focusing on points 1, 2, 3, 5, 6 and 7 above.
Curvature Filters
Before proceeding to explain the proposed method for ECG analysis, we introduce a new tool for signal analysis (to the best of our knowledge), which we call "curvature filters". We start by explaining the underlying idea.
The onsets and offsets of ECG waves are characterized by a noticeable change in the slope of the signal at these points. R waves usually climb and fall dramatically, which weakens the influence of noise and makes locating them easier. In general, however, these twists are not always easy to find, mainly because of the presence of noise. It is worth noting that noise makes an appropriate slope measurement greatly difficult.
The onsets and offsets of the P and T waves (here denoted P_i, P_f, T_i and T_f, respectively), as well as the onset of the Q wave (Q_i) and the offset of the S wave (the J point), are sometimes hard to find. Frequently, the amplitude of the P wave (or even the T wave) is scarcely greater than the noise intensity. This is often also the case for the Q and S waves, worsened by their much shorter width. On the other hand, the T wave (as well as the other waves) may be preceded or followed by ascending/descending periods of non-vanishing slope, and thus not completely isoelectric; these preceding periods are not very different from the proper ascending/descending sides of the T wave. All these circumstances can collude to make these fiducial points almost indistinguishable from regular noise.
In order to catch these twists, and taking into account the previously observed difficulty of measuring slopes under noise, we propose a technique to estimate the "local curvature" of the signal.
In order to achieve this goal, we proceed as follows. We look for local information, but we have to average in order to avoid noise effects. On the other hand, the twists or slope changes we are looking for are linked in some sense to the second derivative, in "Calculus language". Thinking about Taylor approximations, this is nothing but locally approximating a function with second-order polynomials. This way of thinking is merely motivational, since we are dealing with discrete signals; nevertheless, we have found it useful in our simulations. Consequently, we have to discretize our approximations. Fix an integer n ≥ 3, which will play the role of the window width. The set of integers will be denoted by Z, and the set of real numbers by R. We consider the basic quadratic polynomial p(x) = 3x² on the interval [−n, n] (the factor 3 is suitable for notation only), and we discretize it by averaging on uniform intervals of length two: for each integer k with 1 ≤ k ≤ n we compute

c_{n,k} = (1/2) ∫_{2k−n−2}^{2k−n} 3x² dx = 3(2k−n−1)² + 1.   (4)

In order to make the curvature filter orthogonal to constant signals, we subtract the mean; from the above computations one gets mean_k(c_{n,k}) = n², so we set g_{n,k} = c_{n,k} − n² = 3(2k−n−1)² + 1 − n². One can also check the symmetry identity

g_{n,k} = g_{n,n+1−k},   (5)

and note that g_{n,k} is an integer for k = 1, ..., n. For the sake of simplicity, the n-th curvature filter is defined by

f_{n,k} = g_{n,k} / λ_n,   (6)

where the positive integer factor λ_n is chosen such that the resulting set of integers is mutually prime (the only positive integer that divides all of them is one). That reduces the size of the filter entries and does not impact their use, since any multiple of the curvature filter is equally useful as long as only one order n is used to measure local curvature.

It turns out that the sequence of scaling factors λ_n is easily computed:

Proposition 2.1. The scaling factors λ_n in the definition (6) of the curvature filters are given by an explicit closed formula, which depends on whether n is even or odd.

We postpone its proof until Appendix A, in order to avoid a disruption in the explanation. Explicit expressions for the curvature filters can be found in this proof, and examples of the first curvature filters are listed in Table 2 of Appendix B; each column there lists the order n, the size λ_n, and the filter entries f_{n,k} (for k = 1, ..., n).

By virtue of identity (5), the filter entries are symmetric with respect to (n+1)/2; consequently, the curvature filter is also orthogonal to linear signals. It is worth noting that this symmetry implies that only half of the filter entries f_{n,k} need to be computed.

Given a signal s = (s_p)_{p∈Z}, its k-th curvature coefficient of order n is the inner product of the filter with the window of n consecutive samples centered (as nearly as possible, depending on whether n is odd or even) at k:

c_{n,k}(s) = Σ_{i=1}^{n} f_{n,i} s_{k+i−⌈(n+1)/2⌉}.

Because of the above computations, the curvature filter f_n is orthogonal to affine signals: for any λ, µ ∈ R we have

c_{n,k}(λ + µp + s) = c_{n,k}(s).   (10)

That means that the curvature coefficients are insensitive to the signal level and slope. This property is important, since we want the curvature filters to measure only slope changes, not the slope or the signal itself.

If one wants to use only curvature coefficients of the same order, these definitions with integer f_{n,k} are enough. However, if one wants to compare curvature features corresponding to different orders n, an appropriate normalization is convenient. We define the normalized curvature filter by f̂_n = f_n / ‖f_n‖, and, given a signal s, its k-th normalized curvature coefficient of order n is the corresponding inner product with f̂_n. Explicit expressions can also easily be found for the normalized curvature filters:

Theorem 2.2. For any n ≥ 3, the normalized curvature filter entries are obtained (for k = 1, ..., n) by dividing the integer entries g_{n,k} by the square root of the sum in (14) below.

These formulae come from (4), (5) and the following result.

Proposition 2.3. For any integer n ≥ 3 one has

Σ_{k=1}^{n} ( 3(2k−n−1)² + 1 − n² )² = (4/5) n (n²−1)(n²−4).   (14)

We postpone its proof until Appendix A, in order to avoid a disruption in the explanation.
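Under the reconstruction above, the filters are easy to generate and check numerically; the sketch below derives λ_n simply as the greatest common divisor of the raw integer entries.

```python
# Sketch of the curvature-filter construction following the reconstruction above:
# average p(x) = 3x^2 over length-2 cells of [-n, n], subtract the mean, and
# divide by the gcd of the entries so that they are mutually prime integers.
from math import gcd
from functools import reduce

def curvature_filter(n):
    assert n >= 3
    raw = [3 * (2 * k - n - 1) ** 2 + 1 - n * n for k in range(1, n + 1)]
    lam = reduce(gcd, (abs(x) for x in raw))      # scaling factor lambda_n
    return [x // lam for x in raw]

# Sanity checks: orthogonality to constant and linear signals.
for n in (3, 4, 7):
    f = curvature_filter(n)
    assert sum(f) == 0                            # orthogonal to constants
    assert sum(k * f[k] for k in range(n)) == 0   # orthogonal to linear ramps
print(curvature_filter(3))  # [1, -2, 1], the classic second-difference filter
```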
2.1.1. Choice of Curvature Filter Orders
In order to obtain accurate results when using curvature filters for wave onset and offset estimation, an appropriate order n must be chosen in each case. In this section we discuss how to choose this optimal order. To do so, we test the curvature filters on a test example designed for offset estimation; the picture for onsets is completely similar because of symmetry.
We fix a time wave length w_t, in milliseconds, and a corresponding wave length w_s, in number of samples. The estimated sampling frequency is then obtained by the formula F_s ≈ 1000 w_s / w_t, in Hertz, rounded to the closest integer. The base signal is composed of a semi-ellipse of length w_s samples and height h (in millivolts), followed by w_s zeroes; consequently, the signal length is 2w_s. The base signal is perturbed with random noise of a given intensity h_n (in millivolts), giving a set of N_t perturbed signals of length 2w_s. In the following test example we use the values w_t = 110 ms, h = 0.15 mV, h_n = 0.01 mV, and N_t = 100 signals (according to the P wave in Table 1). The wave lengths used to test the curvature filters range from w_s = 3 samples up to w_s = 100 samples, in increments of 1 sample.
Next, for each noisy signal s_j (with j = 1, ..., N_t), and for each order n (with n = 3, ..., 2w_s − 1), we compute the corresponding normalized curvature coefficients. Note that we do this only for sample points such that the window is completely included in the signal range, as we are dealing with a finite signal; this depends on the order n of the curvature filter used. Then, we estimate the offset by selecting the greatest normalized curvature coefficient; that is, the offset is defined as the sample point k = o_{j,n} with the greatest normalized curvature coefficient. Note that, as we are looking for offsets of upward waves, we must select points of maximum convexity, i.e., of greatest positive curvature.
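As a sketch of this estimator, the code below slides the normalized filter along the signal and picks the window of greatest positive curvature; restricting the search to start after the wave peak is an implementation choice consistent with the methodology described later, not something stated for this test.

```python
# Greatest-curvature-coefficient offset estimation, with a toy test mimicking
# the setup above: a semi-ellipse of width w_s followed by w_s zeros plus noise.
import numpy as np
from math import gcd
from functools import reduce

def curvature_filter(n):
    # Integer curvature filter from the reconstruction above.
    raw = [3 * (2 * k - n - 1) ** 2 + 1 - n * n for k in range(1, n + 1)]
    lam = reduce(gcd, (abs(x) for x in raw))
    return [x // lam for x in raw]

def estimate_offset(signal, n, start=0):
    """Index of greatest positive normalized curvature at or after `start`."""
    f = np.array(curvature_filter(n), dtype=float)
    f /= np.linalg.norm(f)                                    # normalize
    resp = np.convolve(signal[start:], f[::-1], mode="valid")  # inner products
    return start + int(np.argmax(resp)) + n // 2               # window center

rng = np.random.default_rng(0)
ws, h = 55, 0.15
x = np.arange(ws)
wave = h * np.sqrt(np.clip(1 - ((x - ws / 2) / (ws / 2)) ** 2, 0.0, None))
sig = np.concatenate([wave, np.zeros(ws)]) + rng.normal(0.0, 0.01, 2 * ws)
print(estimate_offset(sig, n=13, start=ws // 2))  # expected: close to 55
```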
In Figs. (2 and 3) we show the resulting estimated offsets o_{j,n} for j = 1, ..., N_t and n = 3, ..., 2w_s − 1, for the wave lengths w_s = 15 samples (corresponding to F_s = 136 Hz) and w_s = 55 samples (corresponding to F_s = 500 Hz). We consider as the reference correct offset the sample point k = w_s. If the curvature filter order is too small, then the noise has a profound effect on the curvature coefficients; that is, low-order curvature filters do not distinguish between signal twists and noise.
From a certain large enough order onward, curvature filters are very proficient at the task of detecting the offset. The accuracy of the measurement deteriorates slightly as the order increases; on average, a trend to shift outwards from the wave is observed. The convergence seen at very high orders is spurious, since it is due to the finite character of the signal.
These remarks suggest that curvature filters can be a useful tool for detecting fiducial points in ECG signals, provided that a suitable order is chosen for the specific task. We think this also holds for feature extraction in signal processing in general.
In order to measure quantitatively the performance of the curvature filters in the test example, we define the error as ε_{j,n} = |o_{j,n} − w_s| (w_s being fixed), for any j = 1, ..., N_t and n = 3, ..., 2w_s − 1. Thus, for each order n we compute the mean error ε̄_n = (1/N_t) Σ_{j=1}^{N_t} ε_{j,n}. Then, the mean optimal order is defined as the (smallest) order n = n_m for which ε̄_n is minimum. In other words, the mean optimal order is the window size for which the greatest-curvature-coefficient method for detecting the offset is most accurate on average. Note that we avoid too large orders in this criterion.
Other important features that are also highly desirable, apart from accuracy, are reliability and robustness in the presence of noise. In order to take these aspects into account, we also analyze the dispersion of the measurement errors provided by the greatest-curvature-coefficient method. Hence, we consider the error standard deviation σ_n = ( (1/N_t) Σ_{j=1}^{N_t} (ε_{j,n} − ε̄_n)² )^{1/2}. In Figs. (4 and 5) we see that the magnitude of the errors falls off dramatically after the initial noise-sensitive period, as expected from the above remarks based on Figs. (2 and 3); coherently, the mean error ε̄_n and the error standard deviation σ_n show the same behavior. We define the standard deviation optimal order as the first order n = n_std such that σ_n ≤ σ_{n+1} and σ_n is lower than a reference value empirically set to 2. That is, we look for a small enough local minimum of the standard deviation.
Fig. (4). Obtained errors, mean error and error standard deviation in the curvature filters order test for w_s = 15.
(That is, ε̄_{n_m} ≤ ε̄_n for all n = 3, ..., w_s.) As remarked before, immediately after the initial noise-sensitive period the greatest-curvature-coefficient method performs very well, providing a quite low mean error that remains almost stagnant over a long range of n. The same can be said for the standard deviation of the error.
As illustrated in Fig. (6), it often occurs that the mean optimal order unnecessarily increases to larger values. For computational reasons, it is preferable to use low-order filters as long as possible. Another factor to bear in mind is that waves generally do not occur isolated in an ECG, but surrounded by other ones; hence, too high orders are undesirable, since the presence of neighboring waves could cause interference. In order to avoid that, we recommend the standard deviation optimal order n_std as the preferential choice for the greatest-curvature-coefficient method.
In Fig. (7) we can see the optimal orders (mean optimal order n_m and standard deviation optimal order n_std) for w_s in the proposed range (that is, wave lengths from w_s = 3 samples up to w_s = 100 samples, in increments of 1 sample). In summary, in view of Fig. (8), we recommend using a curvature filter order between 15% and 30% of the wave width w_s. These values should be increased for very low sampling frequencies, such as those resulting from Holter monitors. This quantity can be estimated a priori from the estimated wave length w_t (in milliseconds, which could be taken from references such as Table 1) and the signal sampling frequency F_s (in Hertz), which is assumed to be known, by the formula w_s ≈ w_t F_s / 1000.
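A small helper expressing this rule might look as follows; the 20% default and the rounding to an odd order are our choices within the recommended 15-30% band (the paper's own simulations also prefer odd orders).

```python
# Helper implementing the recommended order choice: estimate the wave width in
# samples from its width in milliseconds, then take a fraction of it in the
# 15-30% band, rounded to an odd integer.
def suggest_order(w_t_ms, fs_hz, fraction=0.2):
    w_s = round(w_t_ms * fs_hz / 1000)      # estimated wave width in samples
    n = max(3, round(fraction * w_s))
    return n if n % 2 == 1 else n + 1       # prefer odd orders

print(suggest_order(110, 500))  # P wave at 500 Hz -> order about 11
```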
Description of Methodology
In this section we describe our proposed methodology for the analysis and feature extraction of ECG signals.
Suppose we have a non-trivial ECG signal, denoted ecg0, of length L; that is, ecg0 ∈ R^L is a vector of L real components. We denote by F_s the sample rate (sampling frequency) with which the ECG signal was taken, measured in Hertz (Hz).
Preprocessing
Generally, the ECG signal is assumed to be contaminated by several noise sources. This noise can be reduced by using filters which cut off different frequencies, but it definitely cannot be completely avoided. For this reason, and prior to the feature extraction process, a preprocessing step has to be applied to the signal. Consequently, the original signal suffers changes which add some uncertainty to the final results. This phenomenon is especially remarkable in the case of the P and T wave onset and offset locations. That is why we speak about feature "estimation", and not about "measurement".
First of all, the amplitude of the original signal is normalized to 1: ecg1 = ecg0 / max_k |ecg0_k|, so the resulting signal has amplitude 1. Next, the direct level is suppressed, obtaining the signal ecg2 = ecg1 − mean(ecg1); obviously, one has mean(ecg2) = 0.
After that, a second-order Butterworth high-pass filter with a cut-off frequency of 0.5 Hz is applied to ecg2, obtaining the preprocessed signal ecg3. This step is always present in professional ECG applications, in order to remove very low frequencies.
When applying this preprocessing to signals from the MIT database [73], which come from Holter monitors, this last step is very important and helps to improve the accuracy of subsequent estimations. On the other hand, when dealing with signals from pattern-generating devices, an additional smoothing or averaging filter is usually required. In general, the required preprocessing highly depends on the features of the measurement system used to generate the ECG signal, and could include the use of other filters, such as filters suppressing interference at 60 Hz. For the signals used in this work, this is not the case.
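A minimal Python sketch of this preprocessing chain (the paper's own code is MATLAB) could be the following; the use of zero-phase filtfilt filtering is an assumption, since the paper does not state the exact filtering routine.

```python
# Sketch of the preprocessing chain: amplitude normalization, DC removal, and
# a 2nd-order Butterworth high-pass filter with 0.5 Hz cut-off frequency.
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(ecg0, fs_hz):
    ecg1 = ecg0 / np.max(np.abs(ecg0))       # normalize amplitude to 1
    ecg2 = ecg1 - np.mean(ecg1)              # suppress the direct (DC) level
    b, a = butter(2, 0.5, btype="highpass", fs=fs_hz)
    return filtfilt(b, a, ecg2)              # ecg3: high-passed signal
```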
R m Peaks Location and Heart Rate Analysis
QRS complex detection is considered a fundamental step in the analysis of ECG signals, because the measurement and characterization of different associated parameters rely on the accuracy with which it is performed (see [19], [2]). On the one hand, the cardiac frequency, and subsequently the heart rate, are obtained from the QRS complex. On the other hand, R_m peaks delimit each heart beat (more clearly than other waves) in such a way that the estimation of the other fiducial points is more easily carried out from them.
In this work, we make use of techniques developed in the literature [24]. Concretely, we select the maxima of the preprocessed signal energy, choosing a threshold which is set empirically. An appropriate choice of these maxima allows the heart rate (in beats per minute) to be obtained. Moreover, we describe two different methods to locate the R_m peaks in the original signal (in number of samples): one can simply extract the maximum peaks, or one can perform a cross-correlation between the filtered signal and the original signal (normalized and prefiltered).
We start with the preprocessed signal ecg3. Prior to the energy processing, we filter it using a band-pass filter in the band 10-25 Hz or 15-20 Hz, obtaining a filtered signal ecg4. This step is mandatory in order to remove artifacts, interferences and the influence of the P and T waves; note that at this point we are only interested in detecting the R_m peaks (Fig. 9).
Fig. (9). Example of R m peaks location by using the correlation technique.
Next, we consider the energy signal E_4 given by the sample-wise square of the filtered signal. The energy threshold E_th is set empirically. Then, the energy maxima are extracted under the constraint E_4 > E_th. In the MATLAB code, the energy peak extraction is performed using the built-in function findpeaks [74]. The resulting energy peaks have to be sifted, since pairs of "peaks" often appear which are unnaturally close (presumably corresponding to the same R wave, due to the noise and filtering processes). These related peaks have to be removed, since they produce non-physiological values for the cardiac frequency.
Each energy peak corresponds to an R_m peak, but the latter still need to be located, since the band-pass filtering causes a constant gap between the two signals, which shifts the peaks in time.
What we do is compute the cross-correlation between the energy E_4 of the filtered signal and the energy E_3 of the original preprocessed signal (again the sample-wise square), and find the correlation coefficient K_cr, the maximum of the cross-correlation. The shift is then the difference between the signal length L and the lag corresponding to K_cr. In this way, we are able to locate the R_m peaks in the original preprocessed signal ecg3. In the code, the cross-correlation analysis is performed using the MATLAB built-in function xcorr [74].
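A numpy stand-in for this cross-correlation step is sketched below; the lag convention of MATLAB's xcorr is mapped onto numpy.correlate, and the returned lag is the number of samples by which the filtered-signal peaks must be shifted.

```python
# Sketch of the shift estimation between the band-passed energy E4 and the
# original energy E3 via cross-correlation (numpy stand-in for MATLAB xcorr).
import numpy as np

def correlation_shift(e3, e4):
    xc = np.correlate(e3, e4, mode="full")     # lags -(L-1) .. (L-1)
    lag = int(np.argmax(xc)) - (len(e4) - 1)   # lag of maximum correlation
    return lag                                 # samples to shift the E4 peaks
```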
Once the R_m peaks of ecg3 are located, the RR intervals are known. The widths of the RR intervals, or distances between consecutive R_m peaks, allow the approximate cardiac frequency to be computed: if L_RR is the width of a given RR interval (in number of samples) and HR is the heart rate (measured in beats per minute, bpm), then HR = 60 F_s / L_RR.
QRS Complex Location
In order to describe the QRS complex, we have to locate the Q_i, Q_m, S_m and J points of each cycle. For exposition purposes, we consider the R_m point as the maximum of the R wave, and the Q_m and S_m points as the minima of the Q and S waves, respectively. In addition, Q_i is the starting point of the Q wave and the end of the PQ segment, and J is the ending point of the S wave and the start of the ST segment.
For this task, we proceed using a modified slope analysis method with restrictions, starting from a peak. This algorithm is inspired by Chapter 9 of [2]. The basic idea is to descend/ascend from a maximum/minimum, respectively, until a change in the sign of the slope is found. At a given sample point k (with 1 ≤ k < L), the slope of the signal ecg3 is defined by s_k = ecg3_{k+1} − ecg3_k (22). This method is justified because, in the pronounced ascents and falls of the signal, the influence of the noise is much lower than in the isoelectric segments.
In this analysis we remove the first and final cycles, since they could be incomplete or distorted by the original signal capture process. Then, consider the original preprocessed signal ecg3, and fix some R_m peak (neither the first nor the last one).
On the one hand, we move backward from the R_m peak until we find the first local minimum; that is, until we find a point k whose left neighbor has nonpositive slope s_{k−1} ≤ 0. This point is defined as the Q_m point of this cycle. From this point, we continue backward looking for the first local maximum; that is, a point k whose left neighbor has nonnegative slope s_{k−1} ≥ 0. This point is defined as the Q_i point of this cycle.
On the other hand, we proceed analogously to the right. From the R_m peak, we move forward until we find the first local minimum; that is, until we find a point k with nonnegative slope s_k ≥ 0. This point is defined as the S_m point of this cycle. From this point, we continue forward looking for the first local maximum; that is, a point k with nonpositive slope s_k ≤ 0. This point is defined as the J point of this cycle.
In practice, this method needs to be complemented with criteria about how far the search should go, related to the estimated lengths of the Q and S waves. This is because, depending on the presence or absence of Q and S waves in the ECG signal, the pure slope analysis algorithm could produce fiducial points far away from the R_m peak, which would be unnatural. On the other hand, the greatest/smallest-curvature-coefficient method can assist the slope analysis method, as illustrated in the following subsection, especially in the case of the Q_i and J points. However, the short width of these waves forces the use of a quite small order, which can be imprecise, especially for signals with very low sampling frequency.
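The slope walk can be sketched as follows; the range limit max_steps stands in for the paper's wave-width criteria, and the R peak is assumed to lie well inside the record so that the walk never leaves the array.

```python
# Sketch of the modified slope analysis: walk outward from the R peak until
# the slope changes sign, with a step limit as a simple stopping criterion.
import numpy as np

def locate_qrs_points(ecg, r_peak, max_steps=40):
    s = np.diff(ecg)                           # s[k] = ecg[k+1] - ecg[k], Eq. (22)

    def walk(start, step, stop):
        # Move from `start` by `step` until stop(slope at the new point) holds.
        k = start
        for _ in range(max_steps):
            k += step
            slope = s[k - 1] if step < 0 else s[k]
            if stop(slope):
                return k
        return k                               # range exhausted: fall back

    q_m = walk(r_peak, -1, lambda sl: sl <= 0)  # left neighbor slope nonpositive
    q_i = walk(q_m, -1, lambda sl: sl >= 0)     # left neighbor slope nonnegative
    s_m = walk(r_peak, +1, lambda sl: sl >= 0)  # slope nonnegative: S minimum
    j = walk(s_m, +1, lambda sl: sl <= 0)       # slope nonpositive: J point
    return q_i, q_m, s_m, j
```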
Estimation of Remaining Fiducial Points
At this point, what remains is to estimate the fiducial points of the P and T waves. We proceed by using a modified slope analysis method [2], as before, combined with the use of curvature filters.
First of all, we find the P_m and T_m peaks. Without loss of generality, we assume we are dealing with upward P and T waves. The case of downward waves is completely analogous, and an algorithm to distinguish between the upward and downward cases is easy to implement by computing the maximum and minimum and comparing them (the Q_i and J points could be taken as the isoelectric reference level for the comparison, respectively).
Hence, from the R_m peak, we look for a global maximum in a pre-established backward/forward range of samples of the signal, respectively. This range can be estimated from the reference values for the PQ and QT intervals (Table 1), the sampling frequency and the previously estimated fiducial points. The left maximum corresponds to P_m, and the right one to T_m.
From each of these points, we descend in order to find the corresponding onset and offset points. The idea is to scan the surrounding signal looking for suitable candidates using the slope analysis algorithm, and then to select the best choice with the aid of the curvature filters. The process is as follows.
We start from the P_m point (the process for the T wave is completely analogous). On the one hand, we move backward inside a pre-established range looking for local minima: points k whose left neighbors have nonpositive slope s_{k−1} ≤ 0. At each step, we store them as candidates. When the range is exhausted, we select the winner by the greatest-curvature-coefficient method: we compute the curvature coefficients of the candidates, and the sample with the greatest one among them is designated as the P_i point of the cycle. On the other hand, from the P_m peak we move forward inside a pre-established range (never farther than the corresponding Q_i point) looking for local minima: points k with nonnegative slope s_k ≥ 0. These are the candidates, and P_f is selected among them by the greatest-curvature-coefficient method.
Note that a stopping criterion is necessary in this algorithm. As before, the ranges can be estimated from the reference values for the PQ and QT intervals (Table 1), the sampling frequency and the previously estimated fiducial points. Also, candidates which are quite close to the P_m/T_m peaks, respectively, might have to be removed, since points with high curvature coefficients can exist at the top of the P and T waves, especially in cases of non-standard morphologies; consequently, a starting criterion may also be necessary. On the other hand, suitable curvature filter orders for the P and T waves need to be chosen for a good performance of the algorithm (Subsection 2.1.1).
RESULTS
Next we show some examples of the application of this methodology. For QRS complex detection, the modified slope analysis method was used, aided by range criteria based on estimated wave widths. For P and T wave estimation, a combined slope analysis and curvature filter method was used, aided by a starting/stopping criterion based on estimated wave widths (Table 1). In the simulations presented here, the curvature filter orders used were n = 3 for the QRS complex and n = 5 for the P and T waves in the case of F_s = 128 Hz, and n = 9, 11, 13 for the QRS complex, P and T waves, respectively, in the case of F_s = 500 Hz (Fig. 7). Odd orders are preferred because they center the fiducial point better.
The first ECG signal is cs403, generated by a Cardiosim II device [75] using pattern 03, a reference pattern of a normal ECG. The second one is a synthetic ECG signal made by the authors, also simulating a normal sinus rhythm but without noise. Both have a heart rate of HR = 60 bpm and a sampling frequency of F_s = 500 Hz (Fig. 10). As can be observed in Fig. (11), the signal slope, defined by (22), gives information about the increase of the ECG signal. However, Fig. (12) makes clear that noise deeply affects the signal slope. In that context, curvature filters help to overcome the noise when locating the fiducial points, since they regularize the signal; this effect is due to the fact that the curvature coefficients are weighted averages of the signal. The third example is a piece of 8 cycles of nrsdb_16272, a real ECG signal from the normal sinus rhythm database of PhysioBank (MIT database [73]), reported with sampling frequency F_s = 128 Hz and nominal HR = 60 bpm. In Fig. (13) a cycle of this signal is shown, with the estimated fiducial points. All measurements were taken with a level of noise, measured in the isoelectric segment that follows the P waveform, between P_f and Q_i, lower than 1.33 × 10⁻⁴ mV²/Ω (a power of around 100 pW). This is equivalent to a signal contaminated with a maximum random noise of 0.01 mV in amplitude before the prefiltering operation, which justifies the choice of h_n in Subsection 2.1.1.
From the standpoint of the signal-to-noise ratio S/N, a good performance criterion is obtained with S/N ≥ 20 dB, a figure of merit also maintained in the measurements above [23, 76, 77]. In all cases the power was estimated through the var function of MATLAB, which gives the signal variance [74].
DISCUSSION
Here we have presented a methodology for estimating the fiducial points of an ECG signal. Other features or measurements of interest, such as the widths of intervals and segments, wave heights, cardiac frequency and heart rate analysis, can be computed from the located fiducial points.
One of the novelties of this work is the global strategy: first we localize the R, P and T wave peaks, and then we move backward/forward to the onset/offset, respectively. In this way, we reduce the impact noise has on the location process. Another significant innovation is the introduction of the curvature filters; we think this concept will prove to be a useful tool in signal processing, not only in ECG analysis. Moreover, it is worth noting that the combination of the greatest/smallest-curvature-coefficient method and the slope analysis method is significantly more effective than each of them separately: the first adds accuracy and robustness to the second, and the latter adds efficiency to the first, as it greatly reduces the number of curvature coefficients to be computed. Note that the signal slope is computationally cheaper than the curvature coefficients.
It is worth noting that our method is specially designed to be applied to ECG signals corresponding to a normal sinus rhythm. However, we expect that the philosophy behind the method, and specifically the curvature filters as a tool, will be useful in analyzing ECG signals corresponding to heart disorders which can significantly change the rhythm of the heart beat.
Since f_{12m,6m+2} − f_{12m,6m+1} = 6, the common divisors divide 6; but neither 2 nor 3 divides every f_{12m,k}. Note that k(k−1) is even for any integer k.
Fig. (5). Obtained errors, mean error and error standard deviation in the curvature filters order test for w_s = 55.
Fig. (6). Obtained errors, mean error and error standard deviation in the curvature filters order test for w_s = 57.
Fig. (8). Quotients of optimal (mean and standard deviation) orders obtained in the curvature filters order test with respect to w_s (in percentages).
"Medicine",
"Engineering"
] |
Bone Cancer Detection Using Feature Extraction Based Machine Learning Model
Bone cancer is considered a serious health problem, and in many cases it causes the patient's death. Doctors use X-ray, MRI, or CT-scan images to identify bone cancer. The manual process is time-consuming and requires expertise in the field. Therefore, it is necessary to develop an automated system to distinguish cancerous bone from healthy bone. In the affected region, the texture of cancerous bone differs from that of healthy bone, but the dataset contains several images of cancerous and healthy bone with similar morphological characteristics, which makes them difficult to categorize. To tackle this problem, we first find the most suitable edge detection algorithm, after which two feature sets, one with HOG and one without, are prepared. To test the efficiency of these feature sets, two machine learning models, a support vector machine (SVM) and a Random forest, are used. The feature set with HOG performs considerably better on these models, and the SVM model trained on it provides an F1-score of 0.92, better than the Random forest's F1-score of 0.77.
Introduction
A human body consists of 206 bones. Bones are attached to the muscles of the body and support movement. Bone ligaments are fibrous tissue, and bones are filled with spongy bone marrow. A bone cancer originates from healthy cells that start forming a tumor (Blackledge et al. 2014) [1]. The primary symptom of bone cancer is a bone tumor. The tumor grows gradually and may spread to other parts of the body; it can destroy bone tissue and weaken the bone. According to statistics, 3500 people in the United States were affected by bone cancer in 2018, and approximately 47% of the people diagnosed with bone cancer died. Doctors diagnose cancer via many tests; X-ray imaging is used to detect cancer in human bone. Healthy bone and cancerous bone absorb X-rays at different rates, due to which the surface of a cancerous bone image appears ragged (Oishila et al. (2018) [2]). The severity of bone cancer is measured by a stage and a grade, and doctors use the tumor (geographic bone destruction) growth rate to predict the progression of the disease. Diagnosing cancer in bone requires expertise; since it is performed manually by a doctor, it takes time and errors may arise.
Early detection seems to be the only factor that increases the chance of survival of cancer-affected patients. This paper deals with a system that uses the machine learning algorithm SVM and image processing techniques to detect the tumor and classify cancer. Similar research in this field has been carried out to develop automated systems to assist doctors. An automated system is fast with a low error probability. The machine learning algorithm SVM and digital image processing techniques, namely preprocessing, edge detection, and feature extraction, have been used to develop an automated system (Chen et al. 2007) [3]. In other research, Yadav and Rathor (2020) [4] developed an automated system for the diagnosis of human bone. They utilized a deep neural network to categorize healthy and fractured bone. The model is trained with a large volume of augmented image data. In the augmentation process, identical copies of images are generated, which may be present in both the training and test datasets. A k-fold cross-validation can be used to avoid biased performance.
Asuntha and Srinivasan (2017) [5] used GLCM features to identify fractured bone. In their experiment, they concluded that GLCM-based texture features alone are not sufficient to correctly identify cancerous bone. Entropy and skewness also play a vital role in cancerous region prediction. The value of entropy is low in the cancerous region and high outside it. The HOG feature gives the shape and direction of pixels in images. Bandyopadhyay et al. (2018) [2] used a fusion of several techniques and texture features to identify and classify cancerous and healthy bone. The classification of the long bone is performed using SVM. The method is focused only on long healthy and cancerous bones. The performance of the models is 85%, which can be further improved.
The main contributions of the manuscript include the following aspects: (1) In the dataset we found, the pixel distribution patterns of several cancerous and healthy bone images are very similar, which makes the classification task difficult. Therefore, after several experiments, a best feature set is identified that can classify them with high precision and accuracy even on a small dataset. (2) A comparative study on the selected feature set is performed with two well-known machine learning algorithms, SVM and Random forest. We found that SVM works best for the diagnosis of human bone. The proposed method is more sensitive towards cancerous bone; hence, it can be used in real time to provide a second opinion to a doctor. The remaining part of the paper is organized as follows: Section 2 describes the literature survey. Section 3 defines the proposed method in detail. Section 4 explains the results of the proposed method. Section 5 discusses the results. Finally, Section 6 concludes the manuscript. Avula et al. (2014) [6] proposed a strategy to detect bone malignancy in MR images using mean pixel intensity. Ranjitha et al. (2019) [7] utilized MRI images to distinguish malignant from benign tumors. For this, they extracted texture features and applied the K-means clustering algorithm to separate the tumor part. From the extracted tumor part, the total number of pixels is computed, and the sum of pixel intensities is determined to calculate the mean pixel value. The mean pixel value is used to recognize malignant growth: if the mean pixel value is above the threshold value, the case is considered malignant.
Literature Survey
The strategy proposed by Jose et al. (2014) [8] is a methodology for brain tumor segmentation that utilizes fuzzy C-means and K-means algorithms. In another paper presented by Patel and Doshi (2014), a novel approach is presented that applies diverse segmentation methods to MRI and CT images. Reddy et al. (2015) [9] proposed a methodology to determine the size of the tumor and the bone malignancy stage using a region-growing algorithm. This strategy segments the region of interest using the region-growing calculation. The tumor size is determined by the number of pixels in the extracted tumor part, and the cancer stage is recognized based on the total pixel value. Selection of the seed point depends on the image and is hard to make precisely. Reddy et al. (2016) [9] used MRI images to detect bone cancer and its stage. The image is denoised by forming clusters based on pixel characteristics. The value 245 and the mean pixel intensity are used to predict the cancer stage. The ROI (region of interest) is extracted from the image and compared with a threshold value to predict the size of the tumor. Similarly, Kaushik and Sharma (2016) [10] proposed a strategy for volume computation of cancer tumors. Their approach segments the ROI in the cancerous region and computes the volume of the tumor. Sinthia and Sujatha (2016) [11] proposed a novel approach to the identification of bone malignancy using the K-means clustering algorithm and an edge detection strategy. This strategy uses Sobel edge detection, which identifies only the border pixels, while the K-means clustering algorithm is used to identify the tumor zone.
In the same manner, Asuntha et al. (2017) [12] developed a technique to detect bone cancer in MRI images using medical image processing techniques. The proposed method's preprocessing uses the Gabor filter to smooth the image and remove noise. Segmentation is carried out using superpixel segmentation and multilevel segmentation. After filtering, edge detection and morphological operations are applied. In the second stage, superpixel segmentation is performed, and some important features are extracted from the images [13]. Then, the extracted features are used to identify bone cancer. An ongoing investigation into fundamental therapeutic methodologies was carried out by Shafat et al. (2017) [14]. That paper attempts to target the elimination of malignant stem or progenitor cells. These studies have demonstrated that targeting abnormalities of the bone marrow (BM) may have value, and such approaches may yield novel therapeutic strategies for these problems.
Asuntha and Srinivasan (2018) [5] stated that bone cancer is a serious disease causing the deaths of many individuals. A detection and classification system must be available to diagnose cancer at its early stage, since early detection seems to be the only factor that increases the chance of survival of cancer-affected patients. Cancer classification is a difficult and challenging task in clinical diagnosis. Their paper deals with a system that uses image processing techniques to detect the tumor and classify cancer; the approach drastically reduced the time required for the detection and classification of cancer. Nisthula and Yadhu (2013) [15] applied image enhancement techniques to increase the intensity of the image and find edges in the cancer image, followed by an edge detection technique. The model in their paper is designed to detect cancerous tissue in the bone quickly and reliably. Torki (2019) [16] reported tumors as one of the significant medical issues and developed a bone disease recognition framework that can predict malignant growth at an early stage. Their prediction framework is examined using a MATLAB-based experimental setup. Vandana et al. (2020) [17] worked on basic bone tumors. They upgraded the graph-cut-based clustering algorithm for the identification of the cancerous part and the healthy part. Their method can be utilized to measure the attributes of malignancy and characterize them as normal, benign, or malignant using multiclass random textures.
In a recent survey, Shrivastava et al. (2020) [18] reviewed various techniques to classify cancerous and healthy bone. In this work, a bone computed tomography (CT) dataset in Digital Imaging and Communications in Medicine (DICOM) format is used. The work explains distinctive AI methods for tumor recognition and classification. AI is an immense area of research, of which medical image processing is a critical territory. In medical analyses such as ulcers, fractures, and tumors, image processing has made it simpler to find the specific cause and the most suitable treatment. AI strategies are applied to medical images for abnormality detection. It can be seen that an acceptable degree of progress has been accomplished by applying machine learning procedures. In this work, diverse AI methods for clustering are explained.
The methods discussed above utilize segmentation techniques to obtain the ROI. After that, texture and shape features are extracted to train the model. The performance of the model can be improved by selecting correct features and utilizing different types of feature optimization techniques [19,20]. In the proposed work, different texture and shape features have been selected through rigorous experiments. These features are capable of distinguishing healthy and cancerous bone with high accuracy.
The above survey guides the work on bone cancer detection: feature extraction must yield the right segmentation and find the core part of the bone, because identifying cancerous bone requires identifying all the features responsible for bone cancer, such as bone density, bone color, and bone texture. To get the right features, machine learning techniques are needed that can find the features and classify healthy and cancerous bone. In the present research, first, we compared the efficiency of edge detection techniques like Canny, Prewitt, and Sobel to find the ROI. Second, the two feature sets {HOG, Entropy, Energy, Gini Index, Skewness, Contrast, Correlation, Homogeneity, Product of E(X) and D(X)} and {Entropy, Energy, Gini Index, Skewness, Contrast, Correlation, Homogeneity, Product of E(X) and D(X)} are prepared to train the models. Finally, we compared the performance of Random forest and SVM using these features. The feature set {HOG, Entropy, Energy, Gini Index, Skewness, Contrast, Correlation, Homogeneity, Product of E(X) and D(X)} used by the SVM provides better results compared to Random forest [21].
Material and Methods
The flow diagram of the proposed approach is shown in Figure 1. The input to the system is an X-ray image; X-ray imaging is fast and low-cost.
3.1. Preprocessing. The X-ray image contains noise, which is removed by a median filter of size 3 × 3. The filtering blurs the image; therefore, the image is sharpened to restore intensity contrast.
3.1.1. Image Segmentation. After preprocessing, the identification of an object in the image is done by segmentation. The segmentation technique's reliability is assessed based on the final precision rate; therefore, it is a rational and effective technique for identifying the object of concern. The image is partitioned into pixel sets to gather information about the object of interest using the segmentation technique (Asuntha and Srinivasan, 2018) [5]. The Canny algorithm is used to segment the image in the present research, since the sharp edges responsible for a better ROI are obtained through the Canny edge detection algorithm [22], compared to other edge detection techniques like Sobel and Prewitt. Also, the dataset used in the study is small; the performance of the Canny edge detector improves as the size of the dataset increases [2,23,24]. Figure 2 shows the different categories of images.
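As a concrete illustration, a minimal OpenCV sketch of this preprocessing and segmentation pipeline might look as follows; the file name and the Canny thresholds are assumptions for illustration, not values reported by the authors.

```python
import cv2
import numpy as np

# Load the X-ray as a grayscale image ("xray.png" is a hypothetical file name).
img = cv2.imread("xray.png", cv2.IMREAD_GRAYSCALE)

# 3 x 3 median filter removes impulse noise, as described in Section 3.1.
denoised = cv2.medianBlur(img, 3)

# Median filtering blurs the image, so a standard sharpening kernel is applied.
sharpen_kernel = np.array([[0, -1, 0],
                           [-1, 5, -1],
                           [0, -1, 0]], dtype=np.float32)
sharpened = cv2.filter2D(denoised, -1, sharpen_kernel)

# Canny edge detection for segmentation; the thresholds (100, 200) are assumed.
edges = cv2.Canny(sharpened, 100, 200)
```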
Feature Extraction.
The texture descriptor recommended by Haralick et al. [25] is commonly used to characterize texture qualities. In the Haralick descriptor, each entry (i, j) of the GLCM matrix A records the occurrence of a specific pair of pixel values. From the gray-level values of the segmented image, we calculated four texture features: contrast, correlation, energy, and homogeneity.
Contrast: represents the extent of local gray-level variation in an image and is determined by the difference between the maximum and minimum intensity.
Correlation: it measures how a pixel is correlated with its neighbors over the whole image.
where μ is the mean pixel value and σ is the standard deviation.
Energy: it is computed as the sum of the squared elements of the GLCM.
Homogeneity: it gauges the closeness of the gray-level distribution of elements to the GLCM diagonal; it is inversely related to contrast.
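A minimal sketch of how these four Haralick features can be computed with scikit-image is given below; the distance and angle settings are assumptions, since the paper does not report them.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # spelled 'greycomatrix' in older releases

def haralick_features(gray_img):
    """Contrast, correlation, energy and homogeneity from one GLCM.

    gray_img: 2-D uint8 array; distance 1 and angle 0 are illustrative choices.
    """
    glcm = graycomatrix(gray_img, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop)[0, 0])
            for prop in ("contrast", "correlation", "energy", "homogeneity")}

# Smoke test on a random stand-in image.
print(haralick_features(np.random.randint(0, 256, (64, 64), dtype=np.uint8)))
```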
Skewness: it measures the degree of deviation of an image's pixel distribution from a symmetric distribution. The skewness of the pixel distribution typically ranges from -1 to +1, with 0 indicating symmetry.
where μ is the mean of y, σ is the standard deviation, and X(t) is the expected value of the quantity t. The skewness function is utilized to compute the population value.
The skewness is determined not only by how many data points lie to either side of the mode but also by how far away they are. So, many points that are on the left and near the mode may not outweigh a few points that are on the right but considerably farther away, giving an overall positive skewness even though more points are on the left.
Variance: the variance is defined as σ² = E[(X − μ)²], where μ is the mean of X.
Standard deviation: the standard deviation is the square root of the variance, σ = √(E[(X − μ)²]).
Entropy: the segmentation of malignant bone growth is a very difficult task. A modified Shannon entropy is used to perform the segmentation; Shannon entropy has been used by various researchers to deal with such problems. The image is resized to 70 × 70 pixels based on many training and test results, and then it is rotated by 35 degrees.
The entropy is computed as H = −Σ_x (k_x/(m_1 m_2)) log(k_x/(m_1 m_2)), where k_x is the frequency of gray level x, and m_1 and m_2 are the number of rows and columns of the image, respectively. The intensity of cancerous areas of bone is low; conversely, the entropy of the cancerous bone image is high. To emphasize this difference, the entropy is multiplied by the standard deviation.
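The entropy-times-standard-deviation quantity described above can be sketched as follows; this is our interpretation of the text, not the authors' code.

```python
import numpy as np
from scipy.stats import skew

def entropy_times_std(gray_img):
    """Shannon entropy of the gray-level histogram, weighted by the std."""
    counts = np.bincount(gray_img.ravel(), minlength=256)
    p = counts / counts.sum()          # p_x = k_x / (m1 * m2)
    p = p[p > 0]                       # drop empty bins so log is defined
    H = -np.sum(p * np.log2(p))        # Shannon entropy
    return H * gray_img.std()          # emphasize the cancer/healthy difference

def intensity_skewness(gray_img):
    """Skewness of the pixel-intensity distribution."""
    return skew(gray_img.ravel().astype(float))

img = np.random.randint(0, 256, (70, 70), dtype=np.uint8)  # stand-in image
print(entropy_times_std(img), intensity_skewness(img))
```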
3.1.3. Income Inequality Metrics. Income inequality metrics are utilized in economics to gauge income disparity and the distribution of income in a region. In the present research, the Hough transform accumulator matrix is treated as the income.
A long line at a given angle in an image pattern corresponds to a high income. Highly concentrated votes in the Hough accumulator matrix correspond to unequal texture patterns. The inequality is calculated using the Gini index (GI) [17]. It is determined as GI = (Σ_i Σ_j |x_i − x_j|) / (2N² x̄), where x_i are the accumulator values,
N is the total number of pixels, and x̄ is the mean accumulator value. The value of the Gini index lies in the range 0 to 1. An accumulator matrix with a uniform distribution would yield a GI close to 0, and the most unequal distribution would yield a GI close to 1. Figure 3 shows the feature extraction model for cancer detection.
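Below is a sketch of the Gini index computed over a Hough accumulator, using scikit-image's hough_line (which, unlike OpenCV's Hough functions, returns the raw accumulator). The sorted-values Gini formula used here is mathematically equivalent to the mean-absolute-difference form above; the synthetic edge image is a stand-in for the Canny output.

```python
import numpy as np
from skimage.transform import hough_line

def gini_index(values):
    """Gini index of a non-negative array via the sorted-values formula."""
    v = np.sort(np.asarray(values, dtype=float).ravel())
    n = v.size
    cum = np.cumsum(v)
    return (n + 1 - 2 * np.sum(cum / cum[-1])) / n

# Synthetic stand-in for the binary Canny output of a bone image.
edge_img = np.zeros((50, 50), dtype=bool)
edge_img[25, :] = True                       # one long straight line
accumulator, angles, dists = hough_line(edge_img)
gi = gini_index(accumulator)                 # near 1: votes concentrated in few bins
print(gi)
```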
SVM Model.
Bone cancer detection and classification have been carried out by SVM. A binary classification problem uses a linear SVM, whereas a multiclass problem uses a multiclass SVM model. In the proposed research, a linear SVM has been used for the classification of cancerous and healthy bone.
Let x be a vector denoting the sample to be classified and y a scalar denoting its class label.
A new pattern p ∈ R^d is classified into one of the classes y ∈ {±1}.
The hyperplane used to separate the classes is represented as ⟨u, p⟩ + b = 0, where u ∈ R^d, ⟨u, p⟩ is the inner (dot) product, and b is a real number. In the present study, a linear kernel function with a soft margin of 1 is considered. The margin-maximizing constraints are given by y_i(⟨u, p_i⟩ + b) ≥ 1.
Training of SVM Model.
In the proposed work, the SVM model is trained with two types of feature vectors. In the first experiment, the feature set {HOG, Entropy, Energy, Gini Index, Skewness, Contrast, Correlation, Homogeneity, Product of E(X) and D(X)} is used, whereas for the second experiment {Entropy, Energy, Gini Index, Skewness, Contrast, Correlation, Homogeneity, Product of E(X) and D(X)} is used as the feature vector. For both experiments, the model is trained using a linear kernel with an initial learning rate of 0.001.
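A hedged sklearn sketch of the training step follows. The paper's mention of a learning rate suggests an SGD-style trainer, but a standard linear-kernel SVC is shown for clarity; the data generated here is a placeholder for the 9-feature vectors, not the authors' dataset.

```python
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

# Stand-in data: 9 features per image, mirroring the paper's feature vector;
# labels 1 = cancerous, 0 = healthy (placeholders, not the authors' data).
X, y = make_classification(n_samples=80, n_features=9, random_state=0)

clf = SVC(kernel="linear", C=1.0)
print(cross_val_score(clf, X, y, cv=5, scoring="f1").mean())

# An SGD-trained linear SVM matching the quoted learning rate of 0.001:
# from sklearn.linear_model import SGDClassifier
# clf = SGDClassifier(loss="hinge", learning_rate="constant", eta0=0.001)
```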
The hyperplane for the dataset of the proposed model is shown in Figure 4. This plane is plotted using values obtained from the feature vector {Entropy, Energy, Gini Index, Skewness, Contrast, Correlation, Homogeneity, Product of E(X) and D(X)}. 3.3. Random Forest. Random forests are an ensemble learning method for classification and regression that operates by constructing many decision trees at training time and outputting the mode of the classes (or the mean prediction) of the individual trees. The random forest algorithm applies the general technique of bootstrap aggregating (bagging) to tree learners. Given a training set x = x_1, x_2, ..., x_n with labels y = y_1, y_2, ..., y_n, bagging repeatedly (100 times) selects a random sample with replacement of the training set and fits trees to these samples. For b = 1, ..., 100:
let R_b(x) be the class prediction of the b-th random-forest tree. The final prediction is then the majority vote over the 100 trees.
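For comparison, the Random forest baseline can be sketched as follows; n_estimators=100 matches the B = 100 bootstrap rounds described above, and the generated data is again a placeholder.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Stand-in data with 9 features, mirroring the paper's feature vector.
X, y = make_classification(n_samples=80, n_features=9, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each of the 100 trees is fit on a bootstrap resample of the training set;
# predict() returns the majority vote R(x) over the individual trees R_b(x).
rf = RandomForestClassifier(n_estimators=100, bootstrap=True, random_state=0)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
```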
Results
In the proposed work, we performed two experiments, one with the HOG feature set and another without the HOG feature, applied to two machine learning models, i.e., Random forest and SVM. The performance of the models is evaluated using 5-fold cross-validation. The skewness of the cancerous bone is lower compared to the healthy bone due to the asymmetrical distribution of pixels in the cancerous bone. The training set consists of 65 images C_i (i = 1 to 65), where images i = 1 to 45 are cancerous bone images and i = 46 to 65 are healthy bone images. The skewness values in the training images are shown in Figure 5, and the skewness values of the test images are shown in Figure 6. The pattern of skewness values in the cancerous and healthy bone is similar in the test and training images.
Performance Evaluation with HOG Feature. The HOG feature plays a vital role in training and classification. It gives the shape as well as the direction of pixels in the image by extracting gradients and orientations. The HOG descriptor divides the image into smaller regions, and for each region a histogram is generated. First, the image is resized to 25 × 25 pixels. The window size per bounding box is set to 3, and the number of histogram bins is set to 6 after several experiments on the dataset. The gradient in the x and y directions is calculated for every pixel to measure the change in intensity of the image. The test results on the dataset with the HOG feature are shown in Figure 7: out of 20 cancerous bones, 1 is a false negative, and out of 20 healthy bone images, 2 are false positives. In Figure 8, the HOG feature has not been applied for training and testing, due to which, out of 20 cancerous bones, 2 are false negatives, and out of 20 healthy bones, 3 are false positives.
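A sketch of the HOG extraction with scikit-image is shown below. Mapping "window size per bounding box = 3" to pixels_per_cell=(3, 3) and "6 histogram bins" to orientations=6 is our interpretation of the text, and cells_per_block is an assumption.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def hog_features(gray_img):
    """HOG descriptor after resizing to 25 x 25 pixels, as in the paper."""
    small = resize(gray_img, (25, 25))
    return hog(small,
               orientations=6,           # "6 histogram bins"
               pixels_per_cell=(3, 3),   # "window size per bounding box = 3"
               cells_per_block=(2, 2),   # assumed; not reported in the paper
               feature_vector=True)

# Smoke test on a random stand-in image.
print(hog_features(np.random.rand(128, 128)).shape)
```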
The confusion matrices of the test data with and without the HOG feature are shown in Tables 1 and 2. The comparison of the test data based on accuracy, precision, recall, and F1-score is shown in Table 3.
From Table 3, it can be seen that HOG is one of the important features for the identification and classification of healthy and cancerous bone. Similar research has used GLCM-based and other texture features for this task. The training and validation loss curves of the proposed model are shown in Figure 11. Due to the small dataset, the loss curve is not saturated. We can see that the maximum loss is less than 1. This loss can be further reduced by training for more epochs on a larger dataset.
Box Plot Analysis.
To find the importance of the features, a box plot analysis is also performed. Figure 12 represents the box plots of the 9 features; the HOG feature is represented by the ninth box plot, and the parallel box plots represent all the features. The HOG feature data is smooth, with no fluctuation; therefore, classification using HOG is more accurate compared to the other features.
Discussion
Bone cancer incidence is growing day by day; research has reported that it arises in regions of fluid, fat cells, and hematopoietic cells. This differentiation can be identified using texture analysis. Texture is represented by the intensity of the pixels, and the intensity of the pixels in healthy and cancerous bone is different. Therefore, classification of the image can be performed using texture features. Since the textures of cancerous and healthy bone differ, it is necessary to identify texture correctly (Reischauer et al., 2018) [26].
The healthy bone pixels are less scattered than those in the cancerous region. The research of Reddy et al. (2016) [27] used the mean pixel value to segment cancerous bone, but classification of the bone image into healthy and cancerous was not performed. The method calculated the ROI (region of interest) from a cancer-affected MRI bone image. The affected area is analyzed based on the number of pixels. After that, the mean intensity is calculated from the sum of the intensities of the extracted pixels. Finally, based on the mean intensity value, the cancer stage is predicted. The research of Asuntha et al. (2017) [12] used GLCM-based texture features to identify cancerous bone but was not able to classify the bone into its respective category.
The GLCM texture features alone are not sufficient for bone cancer classification. Therefore, in the present research, apart from the four basic GLCM texture features, other features like HOG have been used to identify and classify cancerous bones. The HOG feature gives the shape and direction of the pixels in the images over local cells and can identify the cancerous region. Bandyopadhyay et al. (2018) [2] used a fusion of several techniques and texture features to identify and classify cancerous and healthy bone; the classification of the long bone is performed using SVM, the method is focused only on the long bone, and the performance of the models is 85%, which can be further improved. The present study is not limited to the long bone. In all the measurement parameters, SVM is better than Random forest. Therefore, in the present study, SVM has been chosen for the diagnosis of cancerous and healthy bone.
The cancerous region pixels are more scattered in the bone image (Oishila et al., 2018) [2]. The HOG feature calculates the shape and direction of pixels based on window size and the histogram bins. The ROI is extracted by a bounding box and the largest contour region, as shown in Figure 13.
Comparison of the Proposed Approach with Previous Work. The proposed approach has been compared with the existing approach (Oishila et al., 2018) [2] based on texture features such as entropy and standard deviation. The existing approach could not provide solutions for different types of human bone, such as flat and irregular bones. The proposed approach uses HOG features along with entropy and standard deviation to identify cancerous and healthy bone for the different types of human bone and is better in terms of all the parameters. The F1-score of the proposed model, 0.94, is better than the F1-score of 0.88 of the work [2] for cancerous bone classification. Table 5 shows the comparison of the present approach with the existing work.
The graphical comparison of the proposed work with previous work is shown in Figure 14. The SVM model with the HOG feature set outperforms the work [2] on all measures without 5-fold cross-validation. With 5-fold cross-validation, performance measures like accuracy and recall are much higher than in the work [2], but precision and F1-score are slightly lower.
Conclusion
The proposed method is a combination of a feature extraction and a classification model that discriminates between cancerous and healthy bone for identification and classification.
A median filter of size 3 × 3 is used to remove noise. The object of interest is extracted by applying the Canny algorithm. The textures of cancerous and healthy bone differ in the cancerous region: the pixels of the cancerous bone are more scattered compared to the healthy bone. Therefore, it is important to select texture features that can differentiate the cancerous region. The texture features most used by researchers are GLCM-based, but the experiments found that GLCM-based texture features alone are not sufficient. Entropy and skewness also play a vital role in cancerous region prediction: the value of entropy is low in the cancerous region and high outside it, while the HOG feature gives the shape and direction of pixels in images. The experiments found that using the HOG feature with the GLCM texture features gives an F1-score of 92.68%, better than the 87.80% obtained without the HOG feature. According to the ground truth, an accuracy of 92.30% with HOG features is obtained, which is better than the 85% of previous work (Oishila et al., 2018) [2] for cancerous bone. The performance of the system can be further improved by selecting other texture features. In short, the proposed method can detect cancerous and healthy bone images with high precision. Our model's performance is more sensitive towards cancerous bone images compared to healthy bone images, which indicates that it can be used in real time to provide a second opinion to a doctor. In the future, we will create a large dataset to further evaluate the performance of the model. We will also work on feature optimization techniques like monarch butterfly optimization (MBO), the earthworm optimization algorithm (EWA), elephant herding optimization (EHO), the moth search (MS) algorithm, the slime mould algorithm (SMA), and Harris hawks optimization (HHO) to improve performance.
Data Availability
No data is available. | 6,818 | 2021-12-20T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Equity analysis of Chinese physician allocation based on Gini coefficient and Theil index
Background Unequal allocation of medical physician resources represents one of the major problems in current medical service management in China and many other countries. This study is designed to analyze the current distribution of physicians in the 31 provincial administrative regions of China, to estimate the fairness of the distribution of physicians, and to provide a theoretical basis for improving the allocation of physicians. Methods This study took physicians from the 31 provincial administrative regions of China as the study objects, and the data were obtained from the China Health Statistics Yearbook 2019 and the official website of the National Bureau of Statistics of China. Calculation of the Gini coefficient (G) and the Theil index (T) was carried out by drawing the Lorenz curve. The fairness of the present physician allocation in the 31 provincial administrative regions of China was analyzed from the perspective of distribution both by population and by service area. Results The Gini coefficients of medical physicians in China are 0.003 and 0.88 by population and by service area, respectively. This shows that the distribution of medical physicians is fair based on population, and there is little difference in the number of physicians per 1000 population in different regions. However, the physician distribution based on service area is highly unfair and shows a large gap in the number of physicians per square kilometer between regions. In general, Beijing, Zhejiang, Shanghai, Jiangsu, Shandong, and Tianjin are above the overall level of the 31 provincial administrative regions. In addition, the number of medical physicians in Zhejiang, Shandong, Beijing and Jiangsu is over-provisioned. Conclusion Bridging the gap in the number of medical physicians between different regions is a key step to improve the equity of physician resource allocation. Thus, findings from this study emphasize the need to take more measures to reduce physician quality differences between regions and to balance and coordinate medical resources. This will increase the access of all citizens to quality medical services.
Background
As the aging of the population accelerates and the spectrum of diseases changes, the demand for medical services in China has surged. At the same time, the continuous occurrence of injuries and medical incidents, concerns about the safety of the medical practice environment, and other factors have caused continuous losses of physician resources (PR) [1]. Given the shortage of Chinese PR, the fair and effective distribution of medical resources has attracted more and more attention. Owing to the desire for improved fairness in the distribution of human resources for health [2], the issue of equality of human resources for health also frequently appears on China's policy development agenda.
According to the Law of the People's Republic of China on Practicing Physicians, medical students who have obtained the qualifications of practicing physicians or practicing assistant physicians must apply for registration with the health administrative department of the local government at or above the county level, and may then start to engage in the corresponding medical, prevention, and health care work. Thus, physicians must have both professional titles and qualifications for practicing medicine. In March 2015, the General Office of the State Council issued the "Notice on Printing and Distributing the Outline of the National Medical Service System Planning (2015-2020)" (Guobanfa [2015] No. 14), which clearly planned that by 2020 the number of practicing (assistant) physicians per 1000 permanent residents in China would reach 2.5. During 2018-2019, the State Council successively issued the Opinions on Promoting the Development of "Internet + Medical and Health" (Guobanfa [2018] No. 26), the "Notice on Printing and Distributing Key Tasks for Deepening the Medical and Health System in 2019" (National Banfa [2019] No. 28), and other policies. The purpose is to adopt measures such as Internet technology and the construction of national and regional medical centers to speed up the expansion of medical resources and improve the level of medical services in areas where quality medical resources are in short supply.
With a population of more than 1.4 billion, China has a large demand for physician resources. Especially in the case of the outbreak of the COVID-19 epidemic in China, sufficient PR become even more crucially important. There are a total of 34 provincial administrative regions in China, of which Taiwan, Hong Kong, and Macau implement their own local medical and health systems. This study mainly focuses on the physicians in the remaining 31 regions of China, which implement the same medical and health system, with an analysis of the current fairness of the allocation of physician resources in China. Findings from this study will provide a theoretical basis for China and other regions to take effective measures to optimize the allocation of physician resources.
The issue of fairness in the allocation of medical and health resources has become a key research topic in the public health field in many countries [3][4][5]. In the early days, the polarization of China's medical and health services was serious. In 2000, the World Health Organization evaluated and ranked the fairness of health financing and distribution in its 191 member countries, and China ranked 188th [6]. The unreasonable distribution of medical resources and low fairness in China have attracted more and more attention. Numerous studies have shown that inequality in the distribution of PR can be determined by many factors, including: 1) economic development, which has a significant impact on the distribution of human resources [7,8]; 2) population density, which has also been identified as a factor causing unequal distribution of human resources for health [9]; and 3) government health expenditures [10].
Unequal allocation of medical physician resources is one of the well-known problems in the present management of medical resources and services in many countries, including China. In this study, the fairness of the current distribution of physicians in the 31 provincial administrative regions of China was analyzed using the Gini coefficient (G) and Theil index (T) methods. Findings from this study may provide a theoretical basis useful for the improved management of the allocation of physicians in the future.
Data collection
As of the end of 2018, the total number of practicing (assistant) physicians in China was approximately 3,607,156, and the average number of physicians per 1,000 population in the 31 provincial administrative regions was 2.62. The data for this study came from two sources: a) the China Health Statistics Yearbook 2019, and b) the official website of the China National Bureau of Statistics [11]. Based on the statistical description of Chinese practicing (assistant) physicians in 2018, the basic information of Chinese physicians was analyzed. Based on the allocation of PR in the 31 provincial regions of China, fairness analysis and evaluation were performed using the Lorenz curve (LC) and the Gini coefficient.
Fairness assessment
The Lorenz curve is used to evaluate the fairness of the allocation of medical and health resources in the field of public health. The basic principle is that income or resources are divided into several levels according to different populations or regions, accumulated according to their percentage from small to large, and represented on the vertical axis; the corresponding cumulative percentage of the population is expressed on the horizontal axis. The Lorenz curve is then generated by connecting the corresponding points. The closer the curve is to the ideal fair line, the smaller the income gap or the closer the distribution of resources is to equity. Conversely, the farther the curve is from the absolute fairness line, the worse the fairness.
The Gini coefficient (G), or Gini index, is a statistical index reflecting fairness, calculated based on the LC with various calculation methods. The general method for calculating the G of medical and health resource allocation in China is to directly apply the Lorenz curve formed by the cumulative statistical points, and then calculate G from the trapezoidal areas formed by each segment.
The specific calculation formula of the Gini coefficient is

G = 1 − Σ_{i=1}^{n} (X_i − X_{i−1})(Y_i + Y_{i−1}),

where n is the number of regions. This article divides China by provinces, autonomous regions, and municipalities, so n = 31. X is the cumulative percentage of the population in the corresponding area and Y is the cumulative percentage of medical resources serving the corresponding area. The value of the Gini coefficient is between 0 and 1. The closer the value is to 0, the fairer the distribution of income or resources; the closer it is to 1, the more concentrated the income or resources, indicating a more unfair distribution. According to international practice, 0.4 is usually used as a "guard line" for the gap in the allocation of medical and health resources. A Gini coefficient < 0.2 means that the distribution of medical and health resources is highly fair; between 0.2 and 0.3, it is relatively fair; between 0.3 and 0.4, it is reasonably acceptable; between 0.4 and 0.5 indicates a large gap; and > 0.5 indicates a high degree of unfairness [12][13][14]. The Theil index is mainly used to analyze the contribution of differences in resource allocation between regions, and it can also decompose the overall difference. In this study, the Theil index was used to compare the relative differences in the distribution across the 31 regions of China. The calculation formula for T is

T = Σ_i P_i ln(P_i / (R_i / R)),

where T is the Theil index, P_i is the proportion of the population in area i, R_i is the number of physicians allocated by population (area) in region i, and R is the total number of physicians in the 31 regions nationwide. The Theil index was calculated based on population (T-population) and area (T-area) in this study. The closer the T value is to 1, the worse the fairness; T < 0 indicates that the fairness in the province or city is higher than the overall fairness [15].
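A minimal numerical sketch of both indices as described above is given below; the three-region numbers are purely illustrative, and regions must be sorted by physicians-per-population ratio (ascending) before the trapezoid formula is applied.

```python
import numpy as np

def gini(pop_share, res_share):
    """Trapezoid-rule Gini over the Lorenz curve.

    Inputs are regional shares (summing to 1), sorted by
    resources-per-population ratio in ascending order.
    """
    X = np.concatenate(([0.0], np.cumsum(pop_share)))
    Y = np.concatenate(([0.0], np.cumsum(res_share)))
    return 1.0 - float(np.sum((X[1:] - X[:-1]) * (Y[1:] + Y[:-1])))

def theil(pop_share, res_share):
    """Theil index T = sum_i P_i * ln(P_i / (R_i / R))."""
    p = np.asarray(pop_share, dtype=float)
    r = np.asarray(res_share, dtype=float)
    return float(np.sum(p * np.log(p / r)))

# Toy example with three regions (illustrative numbers only).
pop = np.array([0.5, 0.3, 0.2])
res = np.array([0.4, 0.3, 0.3])
order = np.argsort(res / pop)              # sort for the Lorenz curve
print(gini(pop[order], res[order]), theil(pop, res))
```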
Physician distribution in China
As shown in Table 1, the average number of physicians per 1,000 population in the 31 regions of China is 2.62. Jiangxi has the lowest number of physicians per 1,000 population, with fewer than 2. Shanghai, Zhejiang, and Beijing have the highest numbers of physicians per 1,000 population, all exceeding 3 (Table 1). China's 31 regions have an average of 1.3 physicians per square kilometer. Tibet, Qinghai, Xinjiang and a few other places have fewer medical physicians, with less than 0.1 physician per square kilometer. Shanghai has the highest number of physicians per square kilometer, with more than 11 (Table 1).
Equity of physician distribution in China
The Lorenz curve of the distribution of physicians by population in the 31 regions of China was generated based on the number of physicians per thousand population, with the cumulative percentage of population in each region as the X axis and the cumulative percentage of physicians per 1,000 population as the Y axis. On the same axes, the Lorenz curve of the distribution of medical physicians in the selected provinces and cities of China was generated according to service area (the service range/area involved in the practice of physicians), with the cumulative percentage of each province and city as the X axis and the cumulative percentage of physicians per square kilometer as the Y axis (Fig. 1).
According to the Lorenz curves of the distribution of medical physicians in the provinces and cities of China based on population and service area, the Gini coefficients were calculated to be 0.0003 and 0.88, respectively. This means that, according to population, the distribution of Chinese medical physician resources is currently in a highly fair state. However, according to service area, the distribution of Chinese PR is highly unfair.
Fairness of physician distribution in China based on Theil index
As shown in Table 2, the overall distribution of physicians in the 31 regions of China is comparable based on the calculation by population. Hunan Province is the most reasonable (its Theil index is close to 0), while the distribution of physicians in Anhui, Jiangxi and Yunnan is poor.
The allocation of physicians by service area is higher in Shandong, Zhejiang, Guangdong, and Beijing than the national level. The distribution of physicians in Henan Province is the most reasonable, slightly better than the overall national level, while Tibet is very unfair, followed by Xinjiang and Inner Mongolia.
Overall, analysis of the distribution of physicians by both population and service area showed that Beijing, Zhejiang, Shanghai, Jiangsu, Shandong, and Tianjin are above the overall level of the 31 provincial administrative regions across the country (T-population < 0 and T-area < 0). The number of medical physicians is over-provisioned in Zhejiang, Shandong, Beijing and Jiangsu.
Discussion
As the data presented in this study show, China had 2.62 practicing (assistant) physicians per 1,000 population by the end of 2018. This means that China has completed, ahead of time, the goal of 2.5 physicians per 1,000 permanent residents by 2020 set by the Chinese central government, although some regions have not reached this level (2.5 physicians per 1,000 population) yet. This could be due to the long training period for medical students, as it is difficult to significantly increase the number of regional physicians in a short period of time. It is common for physicians in the U.S., Australia, and other high-income nations to practice medicine in both public hospitals and private clinics at the same time [16]. However, physicians in China are unit members of hospitals, and their salary, welfare, and career development are closely tied to the hospital. The lack of protection for multi-site practice greatly reduces doctors' enthusiasm for such practice. Therefore, it is important for policy makers in the health sector to formulate detailed implementation rules for physicians working at different sites, performance distribution systems, and insurance policies in accordance with the health needs of the people in the region and the wishes of the physician group. In addition, this study showed that although the distribution of medical physicians by service area is highly unfair (G = 0.88), the coefficient by served population was 0.003. Our results are consistent with reports from other related studies, showing that the fairness of physicians' distribution according to population is much higher than the distribution based on service area [17]. Since the allocation of medical resources should be guided by human needs, the accuracy and feasibility of the fairness of physicians' distribution should be given more consideration in future.
This study revealed that the fairness of medical physician distribution among the 31 provincial administrative regions is very high based on served population. Although fairness in quantity does not necessarily mean that the medical service level of physicians is the same, it can allow people in different regions to get similar opportunities for medical services. Since the number of physicians in Zhejiang, Shandong, Beijing, and Jiangsu is over-provisioned, more incentive policies may attract physicians from these provinces to needy regions such as Anhui, Jiangxi and Yunnan.
The distribution of physicians according to service area is highly unfair among the 31 regions of China. Tibet, Xinjiang, Inner Mongolia and other places are known to be vast and sparsely populated; the number of medical physicians per square kilometer in these areas is significantly lower than in Zhejiang and Beijing. In addition, China's high-quality medical resources are mainly concentrated in large hospitals of eastern cities. The physicians at community hospitals are obviously inferior to those at tertiary hospitals in terms of education and experience [18,19]. This makes it more difficult for residents in remote areas to seek medical services. Existing regional differences in physician allocation may be further exacerbated by evolving urbanization.
Today, the use of the internet to conduct video consultations or manage patients' health has been adopted to alleviate the shortage of medical resources in some high-income countries, including the United States, the United Kingdom, and Japan [20,21]. Internet medical services also started in China in the 1990s, and online services have been developed and used in the areas of chronic disease management, maternal and child health, medical imaging, and medical education [22]. At present, the industry has developed to the online diagnosis and treatment stage represented by Internet hospitals [23]. The importance of strengthening the internet medical infrastructure in remote areas such as Tibet and Xinjiang is apparent. The use of the Internet for diagnosis and treatment can not only address the highly unfair distribution of physicians by service area and the difficulties of ordinary people in seeking medical treatment, but also promote the sinking of physician resources and make it easier for residents to obtain quality medical services. At the same time, the provision of medical services is greatly affected by public financial policies. Many low- and middle-income countries lack equal access to basic public health services due to a lack of sustainable public financial support, and have high rates of maternal and child mortality [24]. Therefore, the government also needs to promote the rational allocation of Chinese physician resources through a financing strategy that continuously improves the equalization of basic public health services.
There are many statistical and quantitative research methods for the fairness evaluation of health resource allocation, such as the Lorenz curve, Gini coefficient, Theil index, range method, difference index, and concentration index method [25][26][27]. It is known that drawing the LC and calculating G to judge fairness is intuitive and easy to understand. However, the limitation is that it can only reflect the overall difference, not the fairness within a region. The Theil index can well reflect the contribution of intra-group and inter-group gaps to the total gap and is complementary to the Gini coefficient [28]. Given the limitations of individual methods, a more accurate assessment can be achieved by combining several analysis methods. In this study, our analysis of the fairness of physician resource allocation in the 31 provincial administrative regions of China was conducted by calculating both the Gini coefficient and the Theil index. Thus, the findings of this study are more robust.
The distribution of physician resources is affected by many factors, such as regional economic level and disease spectrum, and is subject to change over time. This study analyzed the fairness of the distribution of Chinese physicians based on statistical data from 2019, which can provide an up-to-date theoretical basis for the Chinese health department to formulate policies to allocate physician resources. In addition, fairness in the distribution of physician resources is an important issue not just for China but also for many other countries. It would be beneficial for residents of any country, especially areas with a shortage of physician resources, to increase access to physicians by encouraging physicians to practice at multiple sites and to use the internet for medical services.
This study has some limitations. First of all, the research objects of this article are the practicing (assistant) physicians in the 31 regions of China, and it cannot be distinguished whether the internal allocation of physicians and other resources within each of the 31 regions is fair. Future studies should focus on specialized physician groups such as surgeons, pediatricians, obstetricians and gynecologists, etc., for an improved assessment of the fairness of physician resource distribution. Secondly, the research results show that the distribution of Chinese physicians by service area is highly unfair; whether this is directly related to the population density of the various regions needs further analysis, for example with the Lorenz curve and other index methods.
Conclusion
More and more research has been devoted to the allocation of medical resources in recent years, which emphasizes that the rational allocation of physician resources is a focus of the medical industry. Our statistical analysis of the basic distribution of medical physicians revealed that the current number of physicians per 1,000 population in China has reached the staged development goals of China's health service system. The continuous growth in the number of physicians can further meet the medical and health needs of the people. This study showed that the fairness of Chinese physician resources is significantly higher by population than by service area, which substantiates the results of previous studies. The current development of physician practice at multiple sites and internet-based diagnosis and treatment is very encouraging and will greatly improve the current shortage of physicians in terms of population distribution and alleviate the problem of medical treatment in remote areas. In addition, future management of medical resources and services can be improved by strengthening the effective balance and coordination of high-quality medical resources and reducing the difference in physician quality among provinces and cities. The assessment of the fair distribution of medical physicians and other resources needs to be performed using multiple analysis methods. Authors' contributions HMY: literature search, data collection and analysis, and manuscript preparation; SYY: study design and guidance and financial support; DH: design of the study and study guidance; YL: data presentation, manuscript preparation and revision. The author(s) read and approved the final manuscript.
Funding
This study was supported by the Science and Technology Project of the Jiangxi Health and Family Planning Commission of Jiangxi Province (Grant No. 20185518). The funding body played no role in the design of the study, the collection, analysis, and interpretation of data, or in writing the manuscript.
Availability of data and materials
The data for this study came from: a) China Health Statistics Yearbook 2019 (https://data.cnki.net/area/Yearbook/Single/N2020020200?z=D09) and b) China National Bureau of Statistics official website (http://www.stats.gov.cn/tjsj/ndsj/)
Declarations
Ethics approval and consent to participate: Not applicable. The data used in this study is publicly available and no permission is required to access the data. | 5,068 | 2021-05-12T00:00:00.000 | [
"Medicine",
"Economics"
] |
Phenazin-5-ium hydrogen sulfate monohydrate
The crystal structure of the title salt, C12H9N2 +·HSO4 −·H2O, comprises inversion-related pairs of phenazinium ions linked by C—H⋯N hydrogen bonds. The phenazinium N—H atoms are hydrogen bonded to the bisulfate anions. The bisulfate anions and water molecules are linked by O—H⋯O hydrogen-bonding interactions into a structural ladder motif parallel to the a axis.
Perhaps the most interesting aspect of the structure results from the hydrogen bonding between the bisulfate anions and the solvent water molecule. This results in the formation of a ladder motif that runs parallel to the a-axis (see Figure 3).
Each bisulfate ion serves as a hydrogen-bond donor to one water molecule and as an acceptor from a second water molecule, forming the rails of the ladder, of graph-set form C₂²(6). The rungs are formed via a second water-donor/bisulfate-acceptor pair, which generates rings within the ladder structure (two rungs and two rail sections in each ring), R₄⁴(12). Two chemically different rings are formed in this case, since one involves rail sections with water molecules serving as the hydrogen-bond donor and the other involves the bisulfate ion serving as the hydrogen-bond donor.
Experimental
Phenazine was dissolved in methanol (90 ml) to which 40% aqueous sulfuric acid (2.5 ml) had been added. Small, prismatic, ruby-red crystals formed over the course of two months of slow evaporation at room temperature.
Refinement
All H atoms bound to carbon were refined using a riding model with d(C-H) = 0.93 Å and Uiso = 1.2Ueq(C). Hydrogen atoms bonded to oxygen or nitrogen atoms were located in a difference map and their positions refined using fixed isotropic U values. There are two Level-B warnings in the checkCIF file for short intermolecular H···H distances. These result from the very strong hydrogen bond between the bisulfate ion and the solvent water molecule (d(D···A) = 2.5223 (16) Å).
Computing details
Data collection: CrysAlis PRO (Agilent, 2011); cell refinement: CrysAlis PRO (Agilent, 2011); data reduction: CrysAlis PRO (Agilent, 2011); program(s) used to solve structure: SHELXTL (Sheldrick, 2008); program(s) used to refine structure: SHELXTL (Sheldrick, 2008); molecular graphics: SHELXTL (Sheldrick, 2008) and Mercury (Macrae et al., 2008); software used to prepare material for publication: SHELXL97 (Sheldrick, 2008).
Figure caption: Packing diagram showing the structure of the ladder motif formed by hydrogen bonding between the bisulfate ions and water molecules. Details of the hydrogen bonding may be found in Table 1.
Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression F² > σ(F²) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger. | 793.8 | 2013-03-02T00:00:00.000 | [
"Chemistry"
] |
A genetic algorithm for the project scheduling with the resource constraints
– The resource-constrained project scheduling problem (RCPSP) has received the attention of many researchers because it can be applied to a wide variety of real production and construction projects. This paper presents a genetic algorithm (GA) solving the RCPSP with the objective function of minimizing makespan. The standard genetic algorithm has to be adapted for project scheduling with precedence constraints. Therefore, an initial population was generated by a random procedure which produces feasible solutions (permutations of jobs fulfilling the precedence constraints). Besides, all implemented genetic operators take the sequential relationships in a project into consideration. Finally, we demonstrate the performance and accuracy of the proposed algorithm. Computational experiments were performed using a set of 960 standard problem instances from the Project Scheduling Problem LIBrary (PSPLIB) presented by Kolisch and Sprecher [1]: 480 problems consisting of 30 jobs and 480 90-activity instances. We tested the effectiveness of various combinations of parameters and genetic operators to find the best configuration of the GA. The computational results validate the effectiveness of our genetic algorithm. This paper presents a genetic algorithm for the Resource-Constrained Project Scheduling Problem with the objective of makespan minimization. The schedules were created using parallel or serial generation schemes with the activity-list representation of chromosomes. The genetic algorithm was tested with various settings of parameters and operators on a set of problem instances from the PSPLIB. The proposed GA with the best configuration of parameters found experimentally generates good solutions compared to other approaches. The key to success is an appropriate search of the solution space, adapting known genetic operators to the RCPSP. The simulation results show that the adapted operators are good for problems of all sizes.
Also, project scheduling is very important for companies which steer production by client request, so-called Make-To-Order (MTO) production. MTO is used for products made for an individual recipient. Every such production order should be treated as a project developed in consultation with the customer.
Project scheduling is one of the most intractable domains for researchers, as the theoretical models in this area are useful in practice but not easy to solve. It has been shown by Błażewicz et al. [2] that the considered resource-constrained project scheduling problem, as a generalization of the job shop problem, is strongly NP-hard. Therefore, exact solution procedures for the RCPSP can be used only for small problem instances. For large projects it is justified to use heuristic algorithms, in particular metaheuristics, e.g. genetic algorithms, simulated annealing (SA), tabu search (TS), etc. Metaheuristics are effective for many optimization problems because they sample promising areas of the space of possible solutions.
In this work, the effectiveness of applying one of the metaheuristics, the genetic algorithm, to the RCPSP is tested.
Problem description
Project scheduling, as a part of project management, is aimed at deciding the times to start and/or finish jobs (activities, tasks). All activities in a project have to be performed in accordance with a set of precedence and resource constraints, with the fulfilment of properly defined optimisation criteria. The considered resource-constrained project scheduling problem with the objective of minimizing the completion time of all jobs in the project (the makespan) can be formulated as follows [3]:

$$\min \; s_{n+1} \tag{1}$$

subject to

$$s_j \ge s_i + d_i \qquad \text{for all } (i,j) \in E, \tag{2}$$

$$\sum_{i \in A(t)} r_{ik} \le a_k \qquad \text{for } k = 1, 2, \ldots, K \text{ and every period } t, \tag{3}$$

where s_i is the planned starting time of activity i (a decision variable), d_i is the non-preemptable duration of activity i, a_k is the quantity of the available renewable resource of type k (k = 1, 2, ..., K, where K is the number of resource types) at any point in time, r_ik is the requirement of activity i for the resource of type k, and A(t) is the set of activities being processed (in progress) in time period t. The problem consists in finding a schedule which respects the precedence and resource constraints and minimizes the makespan. A schedule is represented by the vector S = (s_0, s_1, ..., s_{n+1}) of the starting times of the activities (the decision variables). The objective function (1) minimizes the start time of the dummy project end activity n + 1, which is equivalent to the considered objective of minimizing the makespan of the project.
Constraints (2) enforce the precedence relations between jobs. Finish-start, zero-lag precedence relationships hold between the activities: a subsequent operation may start immediately after the completion of the previous one (sequential constraints).
The renewable resource of type k has a constant availability a_k. Resource constraints (3) state that at each moment of time t the resource consumption does not exceed the available quantity a_k for every type of renewable resource k = 1, 2, ..., K.
The project schedules in the activity networks are represented by an acyclic, connected, simple directed graph G(V, E), in which V is the set of nodes corresponding to the activities and E is the set of arcs describing the sequential dependences between the activities. The set V is composed of n + 2 activities, numbered from 0 to n + 1, in a topological order, i.e. a predecessor always has a lower number than its successor. The two activities 0 and n + 1 are dummies: they have no duration (d_0 = d_{n+1} = 0) and require no resources (r_{0,k} = r_{n+1,k} = 0 for all k = 1, 2, ..., K). Activities 0 and n + 1 represent the "project start" and the "project end", respectively.
The project network is a graphical representation of the precedence relationships between activities. We use Activity-on-Node (AoN) representation, which is more often used than Activity-on-Arc (AoA) notation scheme for time optimisation problems. In AoN the set V represents activities and the set E denotes relationships between jobs.
The analysis of all job durations and their demands for resources shows that the minimum makespan of the project is 12 time periods:

$$\left\lceil \frac{\sum_{i=1}^{n} d_i r_i}{a} \right\rceil = \left\lceil \frac{45 + 8 + 12 + 6 + 24 + 10 + 6 + 2}{10} \right\rceil = 12$$

A feasible schedule (satisfying all the resource and precedence constraints) with an optimal makespan of 12 is presented in Fig. 2. In the case of the RCPSP, in order to find a schedule (the starting times of all activities), decoding procedures, the so-called Schedule Generation Schemes (SGSs), are used. An SGS generates the schedule from the activity list or the priority list, taking into account the availability of the resources and the precedence relationships. An SGS starts from an empty set of sequenced jobs and constructs a schedule by stepwise extension of a partial schedule. For the RCPSP two decoding procedures are used (introduced by Kelley [4]): the serial SGS and the parallel SGS.
The serial SGS performs activity incrementation [5]. It consists of n steps. In each step, one job (the first non-sequenced and eligible job from the activity list or the priority list) is selected and scheduled at the earliest possible start time that fulfils the sequential and resource constraints. Activities are called eligible if they can be started in the actual step because all of their predecessors have already been scheduled.
The parallel SGS performs time incrementation [5]. Iteratively, at subsequent moments of time t (the decision points), all the unscheduled and eligible activities (considered in the sequence arranged on the activity list or the priority list) which may be started upon fulfilment of the precedence and resource constraints are started.
Decoding procedures are the core of heuristics for the RCPSP. When we use an SGS, we can apply solution-search algorithms based on permutation coding (e.g. the metaheuristics GA, TS, SA), a representation popular for many optimization problems.
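To make the decoding step concrete, the following is a minimal sketch of the serial SGS in C# (the language of the authors' implementation); all identifiers (Decode, dur, req, cap, pred, horizon) are illustrative and not taken from the paper's code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class SerialSgs
{
    // Decodes a precedence-feasible activity list into a schedule.
    // dur[j]    - duration of activity j
    // req[j][k] - demand of activity j for renewable resource k
    // cap[k]    - constant availability of resource k
    // pred[j]   - direct predecessors of activity j
    // horizon   - safe upper bound on the makespan (e.g. the sum of all
    //             durations), so the earliest-start scan always terminates
    public static int[] Decode(int[] list, int[] dur, int[][] req,
                               int[] cap, List<int>[] pred, int horizon)
    {
        var free = new int[horizon][];
        for (int t = 0; t < horizon; t++) free[t] = (int[])cap.Clone();

        var start = new int[dur.Length];
        foreach (int j in list) // list is precedence feasible
        {
            // earliest start allowed by the finish-start precedence relations
            int s = pred[j].Count == 0 ? 0
                  : pred[j].Max(p => start[p] + dur[p]);
            // shift right until every resource suffices over the whole interval
            while (!Fits(j, s, dur[j], req, free)) s++;
            start[j] = s;
            for (int t = s; t < s + dur[j]; t++)
                for (int k = 0; k < cap.Length; k++)
                    free[t][k] -= req[j][k];
        }
        return start; // makespan = max over j of start[j] + dur[j]
    }

    static bool Fits(int j, int s, int d, int[][] req, int[][] free)
    {
        for (int t = s; t < s + d; t++)
            for (int k = 0; k < req[j].Length; k++)
                if (free[t][k] < req[j][k]) return false;
        return true;
    }
}
```

The parallel SGS differs in that it iterates over decision points in time and starts every eligible, resource-feasible activity at each point, rather than placing one activity per step.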
Genetic algorithm
The genetic algorithm was first specified in 1975 by Holland [6]. GAs are a kind of stochastic, multi-point, parallel search algorithm applied to many optimization problems. The genetic algorithm is a population-based technique inspired by biological evolution: the search for potential solutions mimics natural genetic inheritance and the phenomenon of natural selection. The mechanism of natural selection is applied as follows: only the strongest individuals survive and take part in reproduction (crossover, mutation). Solutions are represented by chromosomes, which are evaluated with the fitness function, a measure of the degree of adaptation of an individual to its environment.
In a GA the strongest individuals pass their genetic information to their descendants (in the crossover and mutation operations), so successive generations are better and better adapted to the conditions of the environment. A GA simultaneously considers a population of solutions instead of only one, in contrast to other popular metaheuristics such as simulated annealing or tabu search. The genetic algorithm is an easy-to-implement metaheuristic which produces results whose quality is uncertain, but a GA can be designed to execute in a given amount of time.
The many variants of the basic GA idea found in research works and applications may differ considerably. A pseudo-code of our genetic algorithm is given below in Listing 3. Every application of a genetic algorithm should specify the following elements, which distinguish the variants of GA [7]:
• coding and a method for decoding solutions of the problem,
• the way of generating the initial population,
• an evaluation (fitness) function which measures the quality of a solution,
• the methods of selecting individuals for the next generation,
• the crossover operators,
• the mutation operators.
Below we describe the key aspects of our genetic algorithm.
Coding
The chromosome representation of the RCPSP is an activity list ⟨j_1, j_2, ..., j_n⟩, a permutation of the non-dummy jobs [8]. The activity list consists of the numerals 1 to n, where each numeral corresponds to a job in the project. This list is precedence feasible: each activity has a higher index in the activity list than each of its predecessors in the project network.
Decoding the activity list (solution of GA) to a schedule (solution of the RCPSP) is realized by parallel or serial SGS.
Initial population
The initial chromosomes are generated by a procedure which creates precedence-feasible solutions. This is achieved at random, using the serial SGS decoding procedure on a random permutation list of jobs.
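The paper obtains initial chromosomes by running the serial SGS on random permutations; a common alternative that directly samples a precedence-feasible activity list is sketched below (identifiers are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class InitialPopulation
{
    // Builds a random precedence-feasible activity list by repeatedly
    // drawing one of the currently eligible activities (an activity is
    // eligible once all of its predecessors have been placed).
    public static int[] RandomList(List<int>[] pred, Random rng)
    {
        int n = pred.Length;
        var placed = new bool[n];
        var list = new List<int>(n);
        while (list.Count < n)
        {
            var eligible = Enumerable.Range(0, n)
                .Where(j => !placed[j] && pred[j].All(p => placed[p]))
                .ToList();
            int g = eligible[rng.Next(eligible.Count)];
            placed[g] = true;
            list.Add(g);
        }
        return list.ToArray();
    }
}
```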
Evaluation -fitness function
The fitness function is used for the evaluation of chromosomes; these evaluations drive the selection methods. The fitness function is a particular type of objective function that measures the quality of an individual (chromosome) in a genetic algorithm. For the RCPSP, the fitness before scaling is equal to the makespan of the project.
If we use the tournament selection operator, the only important thing is which of the compared individuals is better or worse. The magnitude of the difference between chromosomes matters, however, in roulette wheel selection. Our conception of fitness scaling for the minimization criterion makes use of the difference from the worst individual:

$$f'_i = f_{\max} - f_i,$$

where f_i is the fitness of individual i before scaling (equal to the makespan of the project) and f_max is the largest (worst) fitness in the current population.
Methods of selection
Selection is a genetic operator that chooses individuals from the current population for inclusion in the next generation. It is the element of the adaptive plan whose purpose is to produce an improved population of solutions from the current one. In this work, we implement the following selection operators:
• roulette wheel selection (fitness-proportionate selection), developed by Holland (1975): the method is constructed in such a way that the selection probability of each individual is proportional to its fitness value;
• tournament selection: a few chromosomes, chosen at random from the actual population, take part in a "tournament"; the one with the best fitness wins and goes to the next generation; the number of tournaments should be equal to the size of the population (after adding the elite solutions).
In selection we use elitism, which ensures that the best found solutions are preserved and passed on to the next population.
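A compact sketch of the two selection operators, assuming the raw fitness of an individual is its makespan and using the scaling described above; the "+ 1" in the roulette weights is only an implementation guard (not from the paper) so that a population of identical makespans still has a positive total weight:

```csharp
using System;
using System.Linq;

static class Selection
{
    // Roulette wheel on scaled fitness: weight_i = (f_worst - f_i) + 1,
    // so shorter makespans get proportionally larger selection probability.
    public static int Roulette(int[] makespan, Random rng)
    {
        int worst = makespan.Max();
        var weight = makespan.Select(f => worst - f + 1).ToArray();
        int r = rng.Next(weight.Sum());
        for (int i = 0; i < weight.Length; i++)
        {
            r -= weight[i];
            if (r < 0) return i;
        }
        return weight.Length - 1; // unreachable safeguard
    }

    // k-way tournament: only the relative order of makespans matters.
    public static int Tournament(int[] makespan, int k, Random rng)
    {
        int best = rng.Next(makespan.Length);
        for (int i = 1; i < k; i++)
        {
            int c = rng.Next(makespan.Length);
            if (makespan[c] < makespan[best]) best = c;
        }
        return best;
    }
}
```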
Crossover operators
Crossover is a genetic operator that combines two parent individuals to produce new child individuals (offspring). In our work, two parent chromosomes are replaced by two child chromosomes. The idea behind crossover is that the new individuals may be better than both parents if they take the best characteristics from each of them. The parents taking part in crossover are chosen at random according to a user-definable crossover probability.
We consider various genetic operators fulfilling the requirement of permutation problems that each job (1 to n) appears exactly once in the generated child chromosomes. The following crossover operators are used [9]:
• 1PX (one-point order crossover),
• 2PX (two-point order crossover),
• PPX (precedence preservative crossover).
1PX, 2PX and PPX are used in GAs for many optimization problems with permutation coding. These crossover operators are suitable for the RCPSP because they generate child chromosomes which satisfy all precedence constraints (provided, of course, that the parents respect the precedence relationships).
In 1PX, one crossover point (cut point) is randomly selected for dividing the parents. The set of genes on the left side of this crossover point is copied from the parent to the offspring (from parent 1 to child 1 and from parent 2 to child 2), and all the remaining jobs, on the right side of the cut point, are placed in the order of their appearance in the other parent.
In 2PX, two cut points are randomly selected for dividing the parents. The genes outside the two selected cut points are inherited (from parent 1 to child 1 and from parent 2 to child 2), and the other genes (the middle part of the chromosome) are placed in the order of their appearance in the other parent.
PPX was developed by Bierwirth et al. [10] specially for scheduling problems. In PPX, a mask of n elements, each randomly set to 0 or 1, is first created, indicating which parent each gene should be taken from. Child 1 is created by copying the next available gene according to the successive mask values: if the actual mask value equals 1, the gene is copied from parent 1, otherwise from parent 2. A gene copied to the child is removed from both parents. This procedure is repeated with a new mask for child 2.
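As an illustration, here is a sketch of 1PX for activity lists (2PX differs only in inheriting the genes outside two cut points); identifiers are illustrative:

```csharp
using System;
using System.Collections.Generic;

static class Crossover
{
    // One-point order crossover (1PX): genes left of the cut come from the
    // first parent; the missing jobs then follow in their order of
    // appearance in the second parent. If both parents are precedence
    // feasible, the child is too: predecessors of a prefix gene are
    // themselves in the prefix, and the tail preserves parent 2's order.
    public static int[] OnePoint(int[] p1, int[] p2, Random rng)
    {
        int n = p1.Length;
        int cut = rng.Next(1, n); // cut point in 1..n-1
        var child = new int[n];
        var used = new HashSet<int>();
        for (int i = 0; i < cut; i++) { child[i] = p1[i]; used.Add(p1[i]); }
        int pos = cut;
        foreach (int job in p2)
            if (!used.Contains(job)) child[pos++] = job;
        return child;
    }
}
```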
Mutation operators
Mutation is a genetic operator that changes one or more gene values in an individual. Using this operator prevents the population from stagnating at a local optimum. Chromosomes for mutation are chosen at random according to a user-definable mutation probability.
The traditional mutation operators are not well suited to the RCPSP and have to be modified, because all precedence relationships in the project must remain satisfied [9]. The modifications of the mutations for the resource-constrained project scheduling problem are presented in Table 1.
Table 1. Modifications of the mutation operators for the RCPSP.

| Mutation | Standard procedure | Modified procedure for the RCPSP |
|---|---|---|
| Invert | Reverse the order of the elements between two randomly selected positions. | Step 1: Select a gene g at random. Step 2: Find the set of genes in the chromosome on the left side of gene g which can be exchanged with g while satisfying the precedence relationships. Step 3: Exchange g with a randomly selected gene from the set determined in Step 2. |
| Swap | Two randomly selected jobs are exchanged. | Step 1: Select a gene g at random. Step 2: Find the set of genes in the chromosome on the left and right sides of gene g which can be exchanged with g while respecting the precedence relationships. Step 3: Exchange g with a randomly selected gene from the set determined in Step 2. |
| Swap adjacent | Two adjacent, randomly selected genes are exchanged. | Step 1: Select a gene g at random. Step 2: If the swap of the adjacent gene on the left side of g with g can be realized with respect to the sequential constraints, exchange these genes; otherwise go to Step 3. Step 3: If the swap of the adjacent gene on the right side of g with g can be realized while satisfying the precedence constraints, exchange these genes; otherwise end the procedure. |
| Insert | A gene at one random position is removed and put at another random position, maintaining the relative order of all other genes. | Step 1: Select a gene g at random. Step 2: Find the list of all positions in the chromosome where g can be inserted while satisfying the precedence relationships. Step 3: Insert g at a randomly selected position from the list found in Step 2, maintaining the relative order of all other genes. |
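A sketch of the modified Insert mutation from Table 1: the insertion window of a randomly chosen gene is restricted to positions after its last predecessor and before its first successor in the list (identifiers are illustrative; pred[j] holds the direct predecessors of activity j):

```csharp
using System;
using System.Collections.Generic;

static class Mutation
{
    // Precedence-feasible Insert: remove a random gene g, compute the legal
    // insertion window [lo, hi] and reinsert g at a random position in it.
    public static void Insert(List<int> list, List<int>[] pred, Random rng)
    {
        int idx = rng.Next(list.Count);
        int g = list[idx];
        list.RemoveAt(idx);

        int lo = 0, hi = list.Count;
        for (int i = 0; i < list.Count; i++)
        {
            if (pred[g].Contains(list[i])) lo = Math.Max(lo, i + 1); // after last predecessor
            if (pred[list[i]].Contains(g)) hi = Math.Min(hi, i);     // before first successor
        }
        list.Insert(lo + rng.Next(hi - lo + 1), g);
    }
}
```

The modified Swap operators follow the same idea, additionally checking that the exchange partner also remains within its own legal window.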
In this work, the author proposes hybridization of the GA with local search (LS). It is realized by a special construction of the moves Insert all, Swap all and Swap adjacent all, which are implemented similarly. First, all possible moves of the given type that respect the precedence constraints (Insert all: all possible Insert mutations; Swap all: all possible Swap mutations; Swap adjacent all: all possible Swap adjacent mutations) are performed and evaluated. Then the best solution found replaces the elite chromosome (in our approach LS is applied only to the actual best chromosome), provided that its fitness is better than the actual best fitness.
For example, in the move Insert all, a random gene g of the actual best chromosome (the parent) is inserted at all possible positions with respect to the precedence constraints. After each insertion the resulting chromosome is evaluated, and finally the best position for inserting gene g is chosen. If the fitness of the generated offspring is better than that of the parent, the offspring is included in the population.
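A sketch of the Insert all move along these lines; evaluate is a hypothetical hook (not from the paper) that decodes an activity list with an SGS and returns its makespan:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class LocalSearch
{
    // Tries every precedence-feasible insertion position for one random gene
    // of the incumbent best chromosome and returns the best list found
    // (the incumbent itself if no insertion improves it).
    public static int[] InsertAll(int[] best, List<int>[] pred,
                                  Func<int[], int> evaluate, Random rng)
    {
        var rest = best.ToList();
        int idx = rng.Next(rest.Count);
        int g = rest[idx];
        rest.RemoveAt(idx);

        int lo = 0, hi = rest.Count; // legal insertion window for g
        for (int i = 0; i < rest.Count; i++)
        {
            if (pred[g].Contains(rest[i])) lo = Math.Max(lo, i + 1);
            if (pred[rest[i]].Contains(g)) hi = Math.Min(hi, i);
        }

        int[] bestList = best;
        int bestFit = evaluate(best);
        for (int p = lo; p <= hi; p++)
        {
            var candidate = new List<int>(rest);
            candidate.Insert(p, g);
            int[] arr = candidate.ToArray();
            int fit = evaluate(arr);
            if (fit < bestFit) { bestFit = fit; bestList = arr; }
        }
        return bestList;
    }
}
```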
Computational results
All algorithms were implemented in C# in Microsoft Visual Studio 2005. The tests were performed on a computer with a 1.7 GHz Intel Pentium CPU. To assess the effectiveness of the GA, we considered 1080 instances of test problems from two classes of the PSPLIB [1]: J30 (480 instances with 30 jobs) and J120 (600 instances with 120 jobs). The experiments were performed using the following configuration of the genetic algorithm for all problem sizes:
• population size: 50 (the mating pool is also 50),
• mutation rate: 0.2,
• crossover rate: 0.7,
• elite size: 0 (no elitism) or 2 (two elite chromosomes),
• maximal number of generations: 100.
The stopping criterion of the GA was a maximal number of generated and evaluated schedules, equal to 5000 (this count includes the random solutions of the first generation), so the hybrid algorithm with LS can stop before the 100th generation. This stopping criterion was chosen in order to enable comparison of our algorithm with other heuristics from the literature.
We decoded the activity list (the chromosome representation) into a feasible schedule with the serial or the parallel SGS. Better schedules were created using the serial SGS (fitness values better by 0.2% on average). Moreover, the parallel SGS took on average 3.2 times more computational time than the serial SGS.
Each problem was solved employing two selection operators (roulette wheel, tournament), three crossover operators (1PX, 2PX, PPX) and four mutation operators (Invert, Swap adjacent, Swap, Insert). Local search in the elitist GA was performed with the three moves Swap adjacent all, Swap all and Insert all. The effectiveness of the GA using the serial SGS with different selection, crossover and mutation operators is presented in Table 2, where:
• a: the number of solutions (among 480 possible) whose makespan is equal to the optimal makespan of the project,
• b: the average deviation (%) from the optimal makespan,
• c: the number of solutions (among 480 possible) whose makespan is equal to the best known makespan,
• d: the average deviation (%) from the critical-path lower bound [8],
• *: the description of the algorithm settings has the following structure: crossover type, mutation type, elite size, local-search move type.
The abbreviations used in the operator descriptions are: SW (mutation Swap), AD (mutation Swap adjacent), IS (mutation Insert), IN (mutation Invert), SWA (mutation Swap all), ADA (mutation Swap adjacent all), ISA (mutation Insert all).
The analysis of the experimental results indicates that:
• tournament selection is more effective than roulette wheel selection in the majority of cases (a different fitness scaling could improve the roulette results),
• Swap adjacent is the worst mutation operator and Insert is the best one,
• local search in the elitist GA is not effective; better results are obtained without LS,
• for 30-job problems the elitist strategy is better than selection without elite chromosomes,
• for 120-job instances the elitist strategy is worse than selection without elite chromosomes,
• 2PX is on average the best crossover operator, and 1PX is more effective than PPX,
• the best settings of the GA for J30 are: tournament selection, crossover 2PX, mutation Insert, 2 elite chromosomes,
• the best settings of the GA for J120 are: tournament selection, crossover 2PX, mutation Insert, no elite chromosomes.
The results obtained by our genetic algorithm are good compared to other algorithms from the literature [8,11].
Conclusions
This paper presents a genetic algorithm for the Resource-Constrained Project Scheduling Problem with the objective of makespan minimization. The schedules were created using parallel or serial generation schemes with the activity list representation of chromosomes. Genetic algorithm was tested with various settings of parameters and operators on a set of problem instances from the PSPLIB.
The proposed GA with the best configuration of parameters, found experimentally, generates good solutions compared to other approaches. The key to success is an appropriate search of the solution space by adapting known genetic operators to the RCPSP. The simulation results show that the adapted operators perform well for all problem sizes. | 5,020.4 | 2010-01-01T00:00:00.000 | ["Engineering", "Computer Science"] |
Broad-Spectrum Antiviral Activity of the Amphibian Antimicrobial Peptide Temporin L and Its Analogs
The COVID-19 pandemic has evidenced the urgent need for the discovery of broad-spectrum antiviral therapies that could be deployed in the case of future emergence of novel viral threats, as well as to back up current therapeutic options in the case of drug resistance development. Most current antivirals are directed to inhibit specific viruses since these therapeutic molecules are designed to act on a specific viral target with the objective of interfering with a precise step in the replication cycle. Therefore, antimicrobial peptides (AMPs) have been identified as promising antiviral agents that could help to overcome this limitation and provide compounds able to act on more than a single viral family. We evaluated the antiviral activity of an amphibian peptide known for its strong antimicrobial activity against both Gram-positive and Gram-negative bacteria, namely Temporin L (TL). Previous studies have revealed that TL is endowed with widespread antimicrobial activity and possesses marked haemolytic activity. Therefore, we analyzed TL and a previously identified TL derivative (Pro3, DLeu9 TL, where glutamine at position 3 is replaced with proline, and the D-Leucine enantiomer is present at position 9) as well as its analogs, for their activity against a wide panel of viruses comprising enveloped, naked, DNA and RNA viruses. We report significant inhibition activity against herpesviruses, paramyxoviruses, influenza virus and coronaviruses, including SARS-CoV-2. Moreover, we further modified our best candidate by lipidation and demonstrated a highly reduced cytotoxicity with improved antiviral effect. Our results show a potent and selective antiviral activity of TL peptides, indicating that the novel lipidated temporin-based antiviral agents could prove to be useful additions to current drugs in combatting rising drug resistance and epidemic/pandemic emergencies.
Introduction
One of the biggest public health challenges is emerging viral infections due to possible epidemic and pandemic risks. Furthermore, widespread viral infections pose serious problems due to the possible onset of resistance to available antivirals. This is looming due to the limited number of therapeutic options available against many viruses. The current armamentarium available for antiviral drugs has significantly expanded in recent decades and currently encompasses several viral families [1][2][3]. The current COVID-19 pandemic, and also the previous emerging viral outbreaks (swine flu, Ebola, other coronaviruses, Nipah, Zika and others), highlight the urgent need to develop broad spectrum antivirals [4][5][6]. In practice, most antivirals are designed with the aim of blocking the function of a specific viral protein crucial for a precise mechanism in the replication cycle, so this target is likely unique to a specific virus or viral family. In fact, the specific characteristics and peculiarities of the replication of each viral family act as an obstacle to the realization of broad-spectrum antiviral agents. Moreover, since viruses use the functional apparatus of the host cell for most of their activity, the number of putative direct antiviral targets is further reduced. In consideration of the current COVID-19 pandemic, it is unlikely that the traditional virus-specific paradigm of antiviral drug development can be implemented for the immediate availability of drugs, but it is essential to act promptly and effectively against new pathogens that cause unexpected but lethal infections. Within this perspective, the one-drug-to-one-target paradigm for antiviral drug discovery has proved inadequate for responding to an increasing diversity of viruses deadly in humans.
This underlines the urgency of designing broad-spectrum antivirals that can act on multiple viruses by intercepting some common steps of their life cycle rather than specific viral proteins [7]. In this context, it is important to promote the development of novel antiviral agents with a broad spectrum of activity based on alternative mechanisms of action.
Therapeutic peptides have become interesting tools in drug discovery, with antimicrobial peptides (AMPs) widely studied for their potential antiviral properties, as evidenced by the wealth of data accumulated in recent years [8][9][10][11][12]. The study of the antiviral activity of AMPs has grown substantially in recent years, enough to allow the compilation of a specific online database, the antiviral peptide (AVP) database (AVPdb, http://crdd.osdd.net/servers/avpdb/, accessed on 31 December 2021) [13,14]. Even though the number of AMPs endowed with antiviral activity is still low, they have shown enormous potential for being translated into pharmaceutically available antiviral drugs. AVPs can derive from natural sources, such as those isolated from mammals and insects, or be obtained by artificial means using bioinformatic tools. Their principal mechanisms of action are conveyed by acting directly on the virus particle through a virucidal effect, or by competing for the binding site on the host cell membrane and interfering with attachment and entry. However, AVPs may also act at later stages of the viral cycle, such as viral replication or viral egress (for an updated comprehensive review on AVPs see reference [15]). Of importance are the interactions of AVPs with biological membranes, which lead to modification of the rigidity and curvature of the viral and/or cell membrane and a consequent reduction of cell susceptibility to infection.
Several AMPs have been reported to derive from skin secretions of different amphibian species, but relatively few frog-derived peptides with antiviral properties have been described in the literature. Frogs produce AMPs in dermal glands and release them onto the skin by a holocrine mechanism upon stress or physical injury [16][17][18]. Generally, these peptides have common features, such as a net positive charge (due to the presence of basic amino acids), at least 50% hydrophobic amino acids, and an amphipathic α-helical secondary structure, with a length from 10 to 50 amino acids [19][20][21]. The best known amphibian AMPs with antiviral activity are the magainins from the frog Xenopus laevis. Both magainin 1 and 2 showed an efficient virucidal effect on viruses belonging to the Herpesviridae family, probably by means of interaction with the viral envelope components and subsequent disruption of the envelope integrity [22,23]. The antiviral activity of dermaseptins (produced by frogs of the Phyllomedusa genus) and their derivatives has been described against herpes simplex virus types 1 and 2 (HSV-1, HSV-2) [24,25], human immunodeficiency virus type 1 (HIV-1) [26], and rabies virus [27], via a virucidal mechanism of action disrupting viral envelopes but also affecting the early stages of the intracellular infection. In vivo studies using mice showed a strong protective effect within the range of 100-200 µg, with a 75% increase in animal survival after a challenge with rabies virus [27]. A further amphibian skin-derived AVP is HS-1 from Hypsiboas semilineatus, which is active against Dengue virus 2 and 3. The Indian frog Hydrophylax bahuvistara produces a peptide, urumin, with strong inhibitory activity on influenza virus replication both in vivo and in vitro [28]. Interestingly, some influenza subtypes (H1N1, H1N2) were mainly affected compared to other subtypes (H3N1 and H3N2), pointing to a preferential interaction with hemagglutinin 1 [29].
One of the largest families of amphibian peptides is represented by the temporins, which were first isolated from the skin secretions of the European common red frog Rana temporaria [30,31]. Temporins are among the smallest AMPs known, with a length ranging from 10 to 14 amino acids, a weak cationic character due to the presence of few basic residues in their sequence, and an amphipathic α-helical conformation in hydrophobic environments. They are mainly active against Gram-positive bacteria [17,32]. Some temporins have proved their efficacy as AVPs, such as temporin B (TB), which has virucidal activity against HSV-1. In fact, preincubation of HSV-1 purified virions with 20 µg/mL of TB led to a 5-log reduction of virus titers. By transmission electron microscopy, a clear disruption of the viral envelope was observed. Moreover, TB could also alter other stages of the HSV-1 life cycle, including the attachment and the entry of the virus into the susceptible host cell [33]. Temporin isoform A has also shown the ability to inhibit virus infection by reducing the replication of the channel catfish virus and frog virus 3 [34]. Most recently, a further temporin has been investigated for its antiviral activity: temporin-SHa (SHa) and its (K3)SHa analog (with substitution of the serine in position 3 with a lysine [35] to increase its net positive charge while retaining the α-helical structure) have been described to significantly inhibit HSV-1 replication in human primary keratinocytes at micromolar concentrations [36]. Finally, temporin G (TG) has been shown to significantly inhibit the early life-cycle phases of several respiratory viruses [37].
While most temporins are specifically active against Gram-positive bacteria, the isoform L (Temporin L, TL) (Phe-Val-Gln-Trp-Phe-Ser-Lys-Phe-Leu-Gly-Arg-Ile-Leu-NH2) is highly effective not only against Gram-positive bacteria but also against Gram-negative bacteria and yeast strains [38]. On the other hand, while most temporins exert minor toxicity against human erythrocytes at microbicidal concentrations, TL has a higher level of cytotoxicity [39]. Previous studies identified a direct correlation between TL hemolytic activity and its α-helical content [40]. By reducing the helicity percentage, it was possible to increase the therapeutic index of the peptide while maintaining its antimicrobial effectiveness against both bacteria and yeasts [40,41].
Considering the broad-range activity of TL as an antibacterial, we hypothesized that TL could also exert antiviral activity. It is, in fact, unlikely that frogs and other vertebrates would have developed a conserved and finely tuned mechanism in which a single peptide either works alone or is toxic to only one type of microorganism. The antimicrobial activity by peptides from amphibian skin, as an innate mechanism of defense, is unlikely to be caused by a single peptide but a combination of various peptides working in concert. This needs to be paralleled by the possible activity of each peptide on various targets [42,43]. Since, to date, TL antiviral properties have not been evaluated, we analysed the antiviral potential of TL and a set of peptidomimetic analogues against a large set of viruses comprising enveloped, naked, DNA and RNA viruses. Our results showed potent and selective antiviral activity of peptides derived from TL. This is the basis for the development of novel temporin-based anti-infective drugs to be used in the context of the arising drug resistance and epidemic and pandemic emergencies.
Native Temporin L and Its Analog ([Pro3, DLeu9]TL) Antiviral Activities
The antiviral effect of the TL peptide was investigated in vitro against several viral pathogens that may severely threaten human health. Several enveloped viruses were used in this study, including the herpes simplex viruses (HSV-1 and HSV-2) as examples of DNA viruses, the human coronaviruses (HCoV-229E, HCoV-OC43 and SARS-CoV-2), MeV, HPIV-3 and influenza virus (subtype H1N1), as well as nonenveloped viruses such as poliovirus (Sb-1) and CV-B3. As described in the introduction, most temporins have been considered primarily effective in the inhibition of Gram-positive bacterial growth at concentrations between 2.5 and 50 µM [44]. The exceptions are represented by temporin DRa [45] and TL [38,39], which are also active against Gram-negative bacteria. Gram-negative bacteria present an outer membrane surrounding the peptidoglycan shell, and despite the peculiar differences between outer membranes (OMs) and other biological membranes, our reasoning was that these two particular temporins may be more effective against lipid membranes in general, and therefore able to affect enveloped viruses. Therefore, in the effort to identify a genuinely broad-spectrum lead compound that could be effective against viruses as well as microbes, we focused our interest on TL. In comparison to other temporins, TL is also known for its disruptive hemolytic activity, expressed in a high level of toxicity for eukaryotic membranes. Nevertheless, a strong correlation between the hemolytic activity and the α-helix propensity has been shown in recent studies evaluating the structure-activity relationships of TL and a set of synthetic analogues [40]. An interesting compound based on TL was identified, [Pro3, DLeu9]TL, showing conserved antibacterial activity and a highly reduced cytotoxic effect in vitro. In this compound, the concurrent substitution of glutamine in position 3 (Gln3) and leucine in position 9 (Leu9) with proline and a D-enantiomer (DLeu), respectively, maintained a considerable inhibitory effect against both Gram-positive and Gram-negative bacteria while markedly reducing toxicity, by disrupting the α-helical content of TL [46].
The two peptides selected for the present study were TL and its less toxic analogue [Pro3, DLeu9]TL (from now on named TL1, as described in Table 1) [47,48].
In our initial topical screening assays, we focused on the Herpesviridae family. Both HSV-1 and HSV-2 were mixed with different concentrations (from 0.1 µM to 50 µM) of TL and TL1 and were directly added to the Vero cells in the experiment named the co-treatment assay. Peptide-free controls (virus plus cell culture medium only) were inoculated in parallel on cell monolayers. After 2 h of incubation at 37 °C to allow virus adsorption and penetration, the mixture (virus/peptide) was removed by washing with PBS, and the plates were left at 37 °C in 5% CO2 after the addition of CMC for 48 h. After incubation, plaques were scored to measure the inhibition obtained compared to the peptide-free control. The results showed that both TL and TL1 were able to inhibit infectivity of the two members of the Alphaherpesvirinae subfamily in a dose-dependent manner (Figure 1).
To better understand the inhibitory effect of TL and TL1 on the propagation of HSV-1 and HSV-2, we examined whether the peptides directly damaged virus particles or indirectly interacted with the host cells preinfection or postinfection. In practice, we performed three different "time-of-addition" experiments, namely (1) virus pretreatment, (2) cell pretreatment, and (3) cell post treatment experimental conditions, to elucidate the antiviral mechanisms of action of these two peptides. To find out whether the TL and TL1 peptides exerted their inhibitory activity by interacting directly with HSV virions, we performed the virus pretreatment assay. Instead of simultaneously adding peptides and virus to the cells, viral particles were pretreated with peptides for 2 h at 37 °C. After this preincubation, the virus-peptide mixture was diluted so that the peptides were at a non-active concentration and the virus MOI was 0.01 pfu/cell. After incubation for 48 h, residual infectivity was measured by plaque scoring. The two peptides showed strong inhibition of viral infectivity and were active at similar relative concentrations against both viruses (Figure 1).
Subsequently, further experiments were conducted to determine whether the peptides could function at the post-entry (curative) or preinfection (prophylactic) stages (Figure 2).
Figure 2. Four experimental schemes to study the virus pretreatment and the cell pretreatment, co-treatment, and post treatment effects of the TL-based peptides on viral infectivity. For cell pretreatment, each peptide was incubated with cells for 2 h; the medium was then removed, virus inoculum was added and incubated for 60 min, then the inoculum was removed and replaced with fresh medium and the cells were incubated. For virus pretreatment, the virus was incubated with peptide for 2 h, then diluted to obtain ineffective peptide concentrations and added to cells. Co-treatment wells received compound and virus inoculum simultaneously and were incubated for 2 h, then the medium was replaced and the cells were incubated. Post treatment wells were infected with virus for 2 h, followed by inoculum removal and replacement with peptides in the medium. In each experiment cells were incubated with fresh medium containing CMC for 48 to 72 h (depending on the virus used) and plaques were scored.
Vero cell monolayers were treated with peptides for 2 h before virus addition (cell pretreatment assay) or after virus penetration into cells (post treatment assay) to assess the activity of the peptides at a post-entry stage of the virus replicative cycle. No or only a minor inhibitory effect of the two peptides on both HSV-1 and HSV-2 could be detected (the obtained data are shown in Figure 1). This implied that these peptides probably inhibited herpesvirus infection mainly by directly interacting with viral particles. The 50% inhibitory concentrations (IC 50) were 8.55 µM and 9.99 µM for HSV-1, and 8.28 µM and 8.86 µM for HSV-2, for TL and TL1, respectively (all IC 50 values were calculated for the virus pretreatment assay). The 90% inhibitory concentrations (IC 90) were 15.66 µM and 18.69 µM for HSV-1, and 16.04 µM and 16.71 µM for HSV-2, for TL and TL1, respectively.
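As a worked illustration of how IC 50/IC 90-type values can be read off plaque-reduction dose-response data, here is a simple log-linear interpolation sketch in C#; the paper does not state its exact fitting procedure (nonlinear regression is mentioned only for the CC 50 values), and the (dose, % inhibition) pairs below are invented for the example:

```csharp
using System;

static class DoseResponse
{
    // Interpolates the dose giving a target % inhibition (50 for IC50,
    // 90 for IC90) between the two bracketing points of a monotone
    // dose-response curve, working on log(dose) because dose-response
    // data are customarily plotted on a logarithmic concentration axis.
    public static double InterpolateIc(double targetPct,
                                       (double Dose, double Pct)[] curve)
    {
        for (int i = 1; i < curve.Length; i++)
        {
            if (curve[i - 1].Pct <= targetPct && targetPct <= curve[i].Pct)
            {
                double f = (targetPct - curve[i - 1].Pct)
                         / (curve[i].Pct - curve[i - 1].Pct);
                double logIc = Math.Log(curve[i - 1].Dose)
                             + f * (Math.Log(curve[i].Dose) - Math.Log(curve[i - 1].Dose));
                return Math.Exp(logIc);
            }
        }
        throw new ArgumentException("target inhibition not bracketed by the data");
    }

    static void Main()
    {
        // Illustrative (dose in uM, % inhibition) points, not data from the paper.
        var curve = new[] { (3.12, 10.0), (6.25, 35.0), (12.5, 70.0), (25.0, 95.0) };
        Console.WriteLine($"IC50 ~ {InterpolateIc(50, curve):F2} uM");
        Console.WriteLine($"IC90 ~ {InterpolateIc(90, curve):F2} uM");
    }
}
```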
To further characterize the inhibitory effect of TL and TL1, in vitro infection inhibition assays were performed with a collection of enveloped viruses that differ in genome content compared to herpesviruses (RNA versus DNA), namely: HCoV-229E, HCoV-OC43 and SARS-CoV-2, MeV, HPIV-3 and influenza virus (subtype H1N1) ( Figure 3).
Four different time-of-addition experiments were performed, and the results showed a clear inhibition of infectivity in the co-treatment assay and in the virus pretreatment assay, while a minor efficiency was reported for the remaining two time-of-addition assays. Surprisingly, this was not the case for the two paramyxoviruses analysed in the present study (Figure 3E,F); differences in the lipid composition of their envelopes can possibly explain such a different behavior. MeV and HPIV-3 were not affected by either TL or TL1 in the virus pretreatment assay, and only minor evidence of a reduction of infectivity was found in the co-treatment assay. On the contrary, when the cell pretreatment assay was performed, a reduction of infectivity of over 40% with both peptides at a concentration of 25 µM was observed. No activity was recorded in the post treatment assay against the tested paramyxoviruses. Next, the effect of TL and TL1 on the infection of two nonenveloped viruses was tested. The infectivity of poliovirus Sb-1 and CV-B3 was not altered in the presence of either peptide, as measured by plaque reduction assays following the four described experimental settings (Supplementary Figure S1). These results identify the TL and TL1 peptides as broad-spectrum antiviral peptides acting on enveloped viruses.
Cytotoxicity of Native Temporin L and Its Analog ([Pro3, DLeu9]TL)
We examined the effects of TL and TL1 on cell viability after incubating Vero cells with different concentrations of each peptide for 2 h, resembling the various treatments of cells used in our experimental protocols, and for 24 h as a long-exposure indicator. Cytotoxicity was evaluated by monitoring cell viability using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay and expressed as the percentage of inhibition of MTT reduction to its insoluble formazan crystals by mitochondrial dehydrogenases, compared to the untreated control cells (Supplementary Figure S2A,B). The nonlinear regression analysis indicated that the 50% cytotoxic concentration (CC 50) of TL was 19.61 µM and the CC 50 of TL1 was 42.18 µM (all CC 50 values were calculated after 2 h of peptide treatment). Because some direct-acting antivirals disturb the envelope structure, they may also damage the cell membrane at similar concentrations, making their use hazardous. Mature mammalian red blood cells have no nucleus and a low cell membrane repair ability.
To provide further evidence to support the development of TL and its analog as potential therapeutics, we evaluated their ability to cause hemolysis of red blood cells. The results of the hemolytic activity assay, shown in Supplementary Figure S2C, indicated that both TL and TL1 were practically devoid of hemolytic activity at their antiviral concentrations, showing residual hemolytic activity only at concentrations equal to or above 50 µM. At lower concentrations, consistently lower hemolysis was recorded for TL1 compared to the parent peptide TL. This result parallels the cytotoxicity analysed in Vero cells, where cell viability is higher with TL1.
When toxicity data were evaluated in conjunction with antiviral results, we observed that TL1 retained appreciable levels of inhibitory activity against the virus infections along with moderate cytotoxicity; therefore, TL1 represents a potentially useful lead compound for further exploitation.
Cytotoxicity and Antiviral Activities of Gly10-Replaced TL1 Analogues
With a potential lead peptide (TL1) endowed with a specific antiviral activity in hand, we decided to investigate, by single-point modification, the possibility of improving its antiviral efficacy and further containing its low cytolytic features. In particular, we synthesized a set of derivatives previously analysed in antibacterial assays [48]. The glycine (Gly) in position 10 was replaced with amino acids characterized by (i) a propensity to disrupt helicity (Pro, hydroxyproline (Hyp) and the unconventional amino acid 2-aminoindane-2-carboxylic acid (Aic)); (ii) a positive charge or an indole ring in the side chain (Lys and Trp, respectively); or (iii) a hydrophobic side chain (norleucine, Nle). For all these residues, both the L and D isomers were used (except for the non-chiral Aic), as described in Table 2.
Cell viability was measured for the TL1 analogues at both 2 h and 24 h using the MTT assay. The reported results (Supplementary Figure S3A,B) are consistent with data previously obtained [48]. Some minor divergences were probably due to the different cell lines used in the two studies; in the present experimental model we preferred to use Vero cells, since these are generally employed for most antiviral assays. As depicted in Supplementary Figure S3A, the MTT assay after 2 h of incubation showed that TL2, TL3, TL4, TL5, and TL6 had lower toxicity profiles compared to the parent peptide, especially at the higher concentrations of 100 and 50 µM. When considering the window of possible therapeutic concentrations (below 25 µM), TL9 and TL10 presented good toxicity profiles, with over 65% viable cells. These profiles were conserved when the cytotoxicity assay was extended to 24 h, and were confirmed by the hemolysis assay (Supplementary Figure S3C). Given that TL1 was only active against enveloped viruses, subsequent viral inhibition experiments were conducted only against enveloped viruses and not against CV-B3 and poliovirus Sb-1. We first analysed the inhibitory effect of each TL1 analog against HSV-1 and SARS-CoV-2 (in consideration of the present pandemic). Results are shown in Figure 4A,B for HSV-1, and Figure 4C,D for SARS-CoV-2.
As for the parent peptide TL1, these analogs were able to inhibit viral infectivity in a dose-dependent manner. In detail, the analogs were effective when they were added simultaneously with viruses on cells, or when the viruses were pretreated with peptides, showing a strong propensity for a virucidal effect. The antiviral activity was negligible in the case of cell pretreatment and post treatment assays (data not shown). Table 2. Names, codes, sequences, and some properties (molecular weight and helical percentage) of TL1 and Gly 10 -replaced TL1 analogues.
Peptide TL2, in which the residue of Gly 10 was substituted with a Pro residue with the intention of reducing the helicity of the C-terminal region of the parent peptide TL1, showed activity similar to that of TL1 against both HSV-1 and SARS-CoV-2. Replacing Pro 10 with the corresponding enantiomer DPro, generating TL3, caused a dramatic loss of antiviral activity against the two viruses, with only a negligible effect at the highest doses in the virus pretreatment assay. TL4, presenting a Hyp residue in position 10, also showed a pattern of antiviral activity generally conserved with respect to TL1, but again the use of the enantiomer DHyp abolished any antiviral activity. Replacing the residue of Gly 10 with Nle (TL6), characterized by an aliphatic side chain, led to a consistent improvement of the antiviral activity against both HSV-1 and SARS-CoV-2, which was especially notable for the latter virus. When Nle was replaced with its enantiomer DNle, the antiviral activity of the resulting peptide TL7 was not impaired; on the contrary, it was even more effective against HSV-1 and SARS-CoV-2 than TL6. In the other cases observed, TL3 and TL5, the insertion of D amino acids at position 10 had a profound disrupting effect. Replacement of the Gly 10 residue with a residue of Lys (TL8), which has a positively charged side chain, allowed a substantial improvement in the anti-SARS-CoV-2 activity and a conserved activity against HSV-1. As for the previous peptides (TL6-TL7), the switch to the D amino acid in TL9 did not produce a reduction in activity, suggesting that chirality is of minor importance compared to the characteristics of the side chain present in the key position 10 of the antiviral peptide. TL10 and TL11 were designed with a residue of Trp and DTrp in position 10, respectively, and showed good antiviral activity against the two viruses analysed. Finally, we analysed the behavior of TL12, which is characterized by the insertion in position 10 of the unconventional amino acid Aic, a dialkylglycine derivative devoid of chirality. TL12 displayed a strong antiviral effect against HSV-1 and SARS-CoV-2; however, it exhibited consistent cytotoxicity. At this point, the whole panel of enveloped viruses was tested, and the results (IC 90 and IC 50), reported in Table 3, clearly show a strong activity of the TL peptides against enveloped viruses, except for the viruses belonging to the Paramyxoviridae family. With the members of this family we could observe only a minor activity of the peptides as virucidal agents, but some antiviral activity was reported in the cell pretreatment assay with both MeV and HPIV-3, indicating a difference in the entry mechanism of paramyxoviruses compared to other enveloped viruses (data not shown). In the virus pretreatment of MeV, the peptides TL6, TL8, TL9, TL10, TL11, and TL12 at the highest concentration showed a consistent viral inhibition of 60 to 80% (data not shown). This inhibition dropped drastically as the concentrations of the respective peptides were lowered. Among the members of the Coronaviridae family, HCoV-229E showed minor susceptibility to the action of the TL analogs, probably reflecting the fact that HCoV-229E is an alpha-coronavirus while SARS-CoV-2 and HCoV-OC43 are beta-coronaviruses.
To identify the best analogs to be considered for further modification, it was important to establish that the observed antiviral activities could be achieved at concentrations that do not induce toxic effects in cells. Therefore, the relative effectiveness of the TL analogs in inhibiting viral replication was compared to cell viability (CC 50 value/EC 50 value) to obtain a therapeutic index (TI) (Table 3, Table 4, Tables S1 and S2).
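A worked sketch of the TI computation, using the CC 50 values reported above (TL: 19.61 µM; TL1: 42.18 µM) together with the HSV-1 virus-pretreatment IC 50 values (TL: 8.55 µM; TL1: 9.99 µM) purely as example inputs:

```csharp
using System;

static class TherapeuticIndex
{
    static void Main()
    {
        // (name, CC50 in uM, IC50 in uM) - values quoted from the text,
        // used here only as a worked example.
        var peptides = new (string Name, double Cc50, double Ic50)[]
        {
            ("TL",  19.61, 8.55),
            ("TL1", 42.18, 9.99),
        };
        foreach (var p in peptides)
            // TI = CC50 / IC50: a larger TI means a wider window between
            // the antiviral and the cytotoxic concentration.
            Console.WriteLine($"{p.Name}: TI = {p.Cc50 / p.Ic50:F2}");
    }
}
```

This yields a TI of about 2.3 for TL and about 4.2 for TL1, consistent with TL1 being the safer starting point.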
The TI clearly showed that the best peptides for therapeutic exploitation were ranked as follows: TL6, TL1, TL4, TL8, and TL9. To further analyse the most suitable analogue, we also performed a hemolytic assay using the concentration defined by the IC 50 for each peptide, as shown in Supplementary Figure S3B. Considering all the obtained results, TL6 was selected as the most interesting analogue for further modification by the addition of lipid tags, in an attempt to intensify the antiviral peptide concentration on membranes during inhibition experiments.
Cytotoxicity and Antiviral Activities of TL6 and Its Lipid-Conjugates
In order to exploit the potential and to enhance the peptides' antiviral activity, we chose the TL analog with the best TI, namely TL6, and added a cholesterol tag at its N- or C-terminal side (Table 5). Table 5. Names, codes, sequences, and molecular weight of TL6 and its lipid-conjugates.
Lipidation of AMPs has a documented impact on improving their antimicrobial and antiviral effectiveness. The lipid tail can facilitate peptide insertion into lipid membranes and/or induce self-organization into micelles that could provide a multimeric display of active peptides. As a starting strategy, we analysed the effect of cholesterol attached at either side (the N or C terminus) and with a small spacer (constituted by a PEG4 linker and one Cys, or a Cys linked to two additional Gly residues). The first notable result for the cholesterol-conjugated TL6 peptides was a considerable improvement of their safety profiles. CC 50 values are shown in Supplementary Figure S4 and are consistently higher than those of the parent peptide TL6 without lipidation. In particular, we observed that after 2 h of treatment almost no reduction of cell viability was induced. Only TL6.2, in which the PEG4 was attached to a double Gly and a Cys residue, showed a marginal toxicity of about 40% at concentrations above 50 µM. When moving to a 24 h treatment, the toxicity increased slightly for the peptides TL6.1, TL6.2 and TL6.4. Peptide TL6.3 was the least toxic, although toxicity was considered of minor importance for all of the cholesterol-conjugated TL6 peptides, since it appeared only well above the concentrations that proved effective as antivirals. In order to reduce the number of experiments performed, we decided to test our lipidated peptides against a subset of viruses that had proved to be of interest in the previous part of the present experimental work, namely HSV-1 and SARS-CoV-2 (Table 6), and MeV and influenza virus (Table 7). Virus suspensions were mixed with different concentrations (from 0.1 µM to 50 µM) of cholesterol-conjugated TL6 peptides and were directly incubated with target cells in the co-treatment assay as previously described. The results showed that all lipidated peptides were able to inhibit infectivity of the two viruses in a dose-dependent manner (data not shown) and with high efficiency in the low µM range, reaching almost 100% inhibition at concentrations of 12.5 µM or lower. The peptides also exerted their activity in the virus pretreatment assay, showing an increase of the inhibition of infectivity compared to TL6. In a similar fashion to the activity of TL6, the cholesterol-conjugated TL6 peptides showed good efficiency in the co-treatment and virus pretreatment experiments, but it was interesting to note that the cell pretreatment assay showed a strong inhibitory activity of the peptides, especially TL6.3 (data not shown). This is of interest, since the parent peptide was active to a minor extent in this assay only against MeV and HPIV-3 (both Paramyxoviridae members) but was ineffective against any other virus tested. The post treatment assay was ineffective (data not shown). The ability of the cholesterol-conjugated TL6 peptides to augment their antiviral efficiency in the co-treatment and cell pretreatment assays is probably due to the increased hydrophobicity of TL6 and its facilitated incorporation into lipid bilayers, creating areas of higher antiviral peptide density on the cell surface. On the other hand, a different mechanism could explain the increased activity of the lipidated peptides in the virus pretreatment assays: the peptides could self-organize into larger structured assemblies, such as micelles, and also adopt secondary structures that may interfere more efficiently with viral envelopes.
Of note were a marked increase of antiviral activity without the addition of further Gly residues at the site of attachment, and the stronger activity when cholesterol was added to the N-terminus of the TL6 peptide. The last objective of the present work was to investigate whether different-length fatty acids provide better modifications to enhance antiviral activity. Four fatty acids were selected: undecanoic acid (CH3(CH2)9CO), tridecanoic acid (CH3(CH2)11CO), pentadecanoic acid (CH3(CH2)13CO), and hexadecanoic acid (palmitic acid, CH3(CH2)14CO). Considering that the cholesterol-tagged peptides exerted the best antiviral activity when the cholesterol was attached to the N-terminus, these fatty acids were attached to the same side. The fatty acid-conjugated TL6 peptides were tested in cytotoxicity assays and in inhibition assays with the following viruses: HSV-1, SARS-CoV-2, MeV and influenza virus; the results showed an increase in activity of up to 50-fold compared to the parent peptide (Tables 6 and 7). The most active peptide was the one with the shortest fatty acid chain, namely undecanoic acid (TL6.5), while the antiviral activity decreased with elongation of the carbon chain, the lowest inhibitory activity being observed for palmitic acid. Fatty acids greatly reduced the level of toxicity, as observed with the cholesterol-conjugated TL6 peptides. Finally, in order to assess the feasibility of using our selected peptides as human antivirals, we performed additional cytotoxicity assays on peripheral blood mononuclear cells (PBMCs). Five peptides were chosen as representatives, selected according to their higher antiviral activity: TL, TL1, TL6, TL6.3 and TL6.5. Cytotoxicity was measured by MTT assays at two time points, 2 h and 24 h. The results (Supplementary Figure S4A,B) showed minor damage to PBMCs, comparable to the data obtained on Vero cells.
Discussion
Outbreaks of severe pathogenic viral infections (avian and swine influenza viruses, coronaviruses, Ebola virus, Zika virus, Lassa fever virus) have occurred in recent years, and many other viruses are still highly diffused and endemic in many places worldwide (HIV, herpesviruses, hepatitis viruses, diarrheal viruses, and many others). The most recent viral disease is the pneumonia caused by the coronavirus SARS-CoV-2, which is currently circulating globally, causing huge public health and economic problems. These outbreaks highlight the urgent need for new strategies and approaches to develop efficient antiviral drugs with broad-spectrum activity for prophylactic and therapeutic treatments. However, current antiviral efforts, usually based on biochemical principles, are mainly focused on one virus at a time, following the traditional "one bug-one drug" paradigm, and are very limited in the coverage of viruses they target. These single-virus, single-target strategies have also been hampered by the rapid mutagenic ability of many viruses, since changing viral antigenic specificity may easily create escape mutants resistant to the single-target antiviral. Since the risk of future viral outbreaks will continue to grow everywhere, a broad-spectrum antiviral strategy seems better suited to responding to an increasing diversity of highly pathogenic viruses in a timely and effective manner. Strikingly, over 85% of major viral epidemics and pandemics in the past decade have been caused by membrane-enveloped viruses, which share their mechanism of fusion with the host cell membrane. This fusion mechanism is generally triggered by virus-encoded glycoproteins present on the envelope of these viruses. Fusion proteins are classified into three different classes according to their structural and functional domains; among them, class 1 fusion proteins (present on viruses such as HIV, influenza, measles and parainfluenza viruses, Ebola, coronaviruses and some others) have been extensively studied. A powerful strategy using peptides that bind to the forming 6-helix bundle has proven very efficacious in stopping viral infectivity at the early stages of infection. This strategy has recently been widely used against SARS-CoV-2 [49,50], but again it is strictly dependent on the specific virus (or at least virus family) and on antigenic variability. On the other hand, several studies have also shown that some antiviral peptides derived from classical AMPs are active against a wide range of enveloped and nonenveloped viruses [51-58].
The putative mechanisms of action by which AVPs exert their antiviral activity are: (i) blocking early steps of viral entry through interaction with surface carbohydrates, (ii) blocking viral attachment or penetration into the host cells through interactions with specific cellular receptors, (iii) interaction with and inactivation of viral envelope glycoproteins, (iv) modulation of host cell antiviral responses, and (v) blocking intracellular expression of viral genes and/or production of viral proteins. However, no unequivocal correlation between AVP structures and viral inhibition has so far become obvious; in fact, striking differences from peptide to peptide are generally observed. Mechanistically, many AVPs exhibit their virucidal action by direct disruption of the outer surface membrane of the virus particle.
Due to this unique membrane-targeting activity, AVPs may have the potential to control viral species that are resistant to currently used antiviral agents [43].
Searching for pan-antivirals acting as multitarget inhibitors has not yet been explored in a satisfactory manner. We therefore aimed to identify a broad-spectrum antiviral peptide derived from TL with potent antiviral activity. A phenotypic-based study was performed: TL and TL1 showed a discrete potency in antiviral assays, and we confirmed the minor toxicity of TL1 on mammalian cells and human erythrocytes. The fact that both peptides showed greatly reduced or null activity against nonenveloped viruses, and also at prophylactic and curative stages, indicated that the peptides need to act directly on the viruses and that their main activity is exerted against the viral membrane. Putative mechanisms could involve interference with the proper lipid bilayer organization of the envelope, interaction with viral glycoproteins, or merely a steric hindrance mechanism interfering with the attachment and fusion steps. Building on the positive results on the antiviral activity of the TL1 analog, we proceeded to explore the role played by the Gly in position 10, as previously done when assessing antibacterial activity [48]. Eleven different analogues were produced and analysed. The amino acids chosen to substitute Gly10 were Pro, Hyp and Aic to disrupt helicity; Lys for the positive charge in its side chain; Nle, presenting a hydrophobic side chain; and Trp, with its indole ring. All substituted amino acids were inserted in both their L and D configurations, with the exception of the Aic residue (Table 2). Most of the TL analogs maintained their antiviral activity, and there was no significant difference across the different viruses used in the experimental models. In analogy with the results obtained with TL and TL1, the Gly10 analogs of TL1 were effective in the virus pretreatment and cotreatment assays, confirming a preference for targeting the viral envelope before or at the moment of encountering cell membranes. They exhibited minor or null activity when cells were first incubated with the peptides, with the exception of Paramyxoviridae members (MeV and HPIV-3). The most interesting result is represented by peptide TL6, in which Nle substitutes Gly10: the increased hydrophobicity proved to enhance antiviral activity and reduce cytotoxicity, and norleucine may be a key factor in driving peptide interactions with membranes. Other peptides with enhanced antiviral activity were TL8, TL10, TL11 and TL12, but the latter two also had highly accentuated toxicity both at 2 h and 24 h post-stimulation. Their hemolytic activity was considerable compared to their template TL1, and especially compared to TL4, TL6, TL7 and TL9, which left a clearly higher fraction of erythrocytes intact. Collectively, we observed that the D enantiomers were much less active in all the antiviral assays performed, even though they showed a better toxicity profile, except for TL11 with D-Trp in place of Gly10. The helicity of the peptide seems to be of utmost importance, since the three helix-disrupting substitutions (Pro, Hyp, Aic) rendered the peptides inactive against all the viruses tested, except for the Aic insertion, which however came at the detriment of the toxicity profile and TI. Assuming that the parent peptide TL1 exerts its antiviral activity by disrupting viral membranes, the poor antiviral activity of the Pro and Hyp substitutions may be attributed to their detrimental effect on the induction of an α-helical conformation upon interaction with viral membranes.
On the other hand, the insertion of Lys and Trp, with a positive charge in the side chain and an indole ring, respectively, produced peptides with enhanced antiviral capacity but substantially increased toxicity, and therefore poor TIs. Collectively, our results show that the effect of the peptide with the best TI, namely TL6, is mainly due to its direct interaction with and damage to viral membranes. Damage to viral membranes could in principle resemble damage to cellular membranes, but a series of considerations can explain the reduced toxicity. The lipids of enveloped viruses are derived from the host cell membrane, but the lipid composition often differs between cell membranes and viral envelopes; indeed, some viruses are enriched in specific lipids. For example, the influenza A membrane is cholesterol-rich, while the dengue membrane lacks cholesterol, and the HIV envelope contains negatively charged phosphatidylserine (PS) [59]. Lipid composition can play an important role in membrane/envelope properties, such as stiffness, fluidity, and line tension, leading to different membrane interactions with TL peptides. As a matter of fact, during the fusion step of enveloped virus penetration, a key structure is the lipid stalk [60-63], whose formation requires a shift of the outer leaflet of the bilayer from a positive to a negative curvature. Therefore, limiting these movements with TL peptides, which may stabilize the positive curvature, is expected to reduce the possibility of membrane fusion, pointing to putative broad-spectrum antivirals with low cytotoxicity and a reduced ability to select for resistance. Furthermore, the eukaryotic cell membrane is continuously recycling itself, with a high degree of self-renewal upon injury, while the lipid membrane loses this faculty once it surrounds the viral nucleocapsid, becoming particularly prone to membrane damage. Extensive membrane damage may also hamper virus fusion with the host cell membrane by impacting the fluidity and curvature of lipid membranes [64]. Finally, the curvature of the envelope itself differs from the curvature of the cell membrane, principally because of the relative sizes of viruses versus mammalian cells. TL peptides may be able to selectively induce pore formation in highly curved membrane structures (below ~250 nm in diameter), resulting in membrane lysis once a critical number of pores is formed, with a consequent reduction of viral infectivity [65].
To further exploit the results so far obtained, TL6 was modified by adding a cholesterol tag at its N- or C-terminal side. Several previous reports have described that attaching a cholesterol group to a peptide fusion inhibitor markedly augments antiviral potency [66]. In general, it has been shown that cholesterol-tagging of peptides derived from the coiled-coil C-terminal heptad repeats of class 1 viral fusion proteins [64,66-76] produces infectivity inhibitors 100 to 1000 times more efficient than their parent peptides. Nevertheless, lipidation of AVPs other than viral HR-mimicking peptides has not been deeply investigated yet. Our results are of interest since lipidation reduced toxicity by two logs and increased efficiency, leading to infectivity reduction in the cell pretreatment assay. A main difference from known HR-derived lipidated peptides is that the most active peptide was TL6.3, which has the cholesterol tag attached to its N-terminal side, showing that the mechanisms of action of the two classes of entry inhibitors are profoundly different. Since HR peptides interfere with the formation of the 6-helix bundle, their orientation relative to the cell membrane in which the lipidated peptides are inserted is of utmost importance, and C-terminal HR peptides function better when the lipid tag is attached to the C-terminus of the peptide. On the other hand, AVPs such as temporins may work in both orientations, probably lying in an almost planar position on the membrane's surface; accordingly, a preference for N-terminal attachment of the lipid tag was observed. Moreover, the length of the lipidic tail and of the linker seem to be of minor importance: as opposed to HR-derived peptides, lipidated TL peptides preferred a shorter hydrocarbon tail and a shorter linker.
Materials
Fmoc-Aic was acquired from Chem-Impex International (Wood Dale, IL, USA). Undecanoic, tridecanoic, pentadecanoic and palmitic acids were purchased from Sigma-Aldrich. Cholesterol-PEG4 was obtained following synthetic procedures reported elsewhere [66]. Coupling reagents such as N,N,N',N'-tetramethyl-O-(1H-benzotriazol-1-yl)uronium hexafluorophosphate (HBTU) and 1-hydroxybenzotriazole (HOBt), as well as the Rink amide resin used, were all obtained commercially from GL Biochem Ltd. (Shanghai, China). N,N-diisopropylethylamine (DIEA), piperidine, and trifluoroacetic acid (TFA) were purchased from Iris-Biotech GMBH. Peptide synthesis solvents and reagents, such as N,N-dimethylformamide (DMF), dichloromethane (DCM), diethyl ether (Et2O), and water and acetonitrile (MeCN) for HPLC, were reagent grade, acquired from commercial sources (Sigma-Aldrich and VWR), and used without further purification.
Peptide Synthesis
The synthesis of peptides TL1-TL12 was performed using ultrasound-assisted solid-phase peptide synthesis (US-SPPS) integrated with the Fmoc/tBu orthogonal protection strategy [77]. Each peptide was assembled on a Rink amide resin (0.1 mmol; loading 0.72 mmol/g) as the solid support to obtain amidated C-termini. Peptide assembly was performed by repeated cycles of Fmoc deprotection (20% piperidine in DMF, 0.5 + 1 min) and coupling reactions using Fmoc-amino acid (2 equiv), COMU (2 equiv), Oxyma (2 equiv) and DIPEA (4 equiv) in DMF for 5 min under ultrasonic irradiation. The addition of cholesterol-PEG4 was performed by introducing a cysteine residue at the C-terminus for TL6.1 and TL6.2 or at the N-terminus for TL6.3 and TL6.4. The reaction between the bromoacetyl derivative of cholesterol-PEG4 and the thiol group of cysteine was carried out as described elsewhere [78]. In particular, the peptide was dissolved in DMSO, and the cholesterol-PEG4 reagent dissolved in THF was added via syringe pump in the presence of DIPEA at room temperature overnight. The equivalents used were peptide/cholesterol-PEG4/DIPEA in a 1/1/2 molar ratio, and the product was purified by preparative HPLC using a linear gradient of MeCN (0.1% TFA) in water (0.1% TFA), from 10 to 90% over 30 min.
Conjugation of Fatty Acids to TL6.5-TL6.8
The introduction of fatty acids at the N-terminus of peptides TL6.5-TL6.8 was accomplished as described elsewhere [79]. Briefly, the conjugation was carried out by adding three equivalents of undecanoic (TL6.5), tridecanoic (TL6.6), pentadecanoic (TL6.7), and palmitic (TL6.8) acid, respectively, preactivated with HBTU/HOBt (three equivalents) as coupling/additive reagents in the presence of DIEA (six equivalents) in DMF/DCM (1:1 v/v), under ultrasonic irradiation for 15 min. Upon filtering and washing of the resin with DMF and DCM, the addition of the fatty acid was ascertained by Kaiser test and LC-MS analysis. Peptides TL6.5-TL6.8 were then released from the resin with simultaneous removal of their protecting groups using a cocktail of TFA/TIS/H2O (95:2.5:2.5, v/v/v) at room temperature for 3 h. Finally, the resins were removed by filtration and the crude peptides were recovered by precipitation with cold anhydrous diethyl ether as amorphous solids. The peptides were purified by RP-HPLC (Shimadzu Preparative Liquid Chromatograph LC-8A) equipped with a preparative column (Phenomenex Kinetex C18, 5 µm, 100 Å, 150 × 21.2 mm) using linear gradients of MeCN (0.1% TFA) in water (0.1% TFA), from 10 to 90% over 30 min, with a flow rate of 10 mL/min and UV detection at 220 nm.
Cytotoxicity
Vero cells were seeded in 96-well microtiter tissue culture plates (5 × 10³ cells/well) and incubated for 24 h at 37 °C in 5% CO2. PBMCs were isolated and then cultured at a concentration of 1 × 10⁵ cells/well.
The cytotoxicity for both cell types was evaluated by the MTT (Sigma-Aldrich) assay, based on the reduction of the yellowish MTT to the insoluble, dark blue formazan by viable and metabolically active cells [81]. Temporin-based peptides were tested at seven different concentrations (0.1, 1, 6.25, 12.5, 25, 50 and 100 µM) after 2 and 24 h. At the end of incubation, 100 µL of an MTT solution (5 mg/mL) was added to each well and incubated for 3 h at 37 °C. The supernatant was discarded and 100 µL of 100% DMSO (Sigma-Aldrich) was added (to dissolve the formazan salts) for 10 min with vigorous agitation at room temperature. Cytotoxicity was evaluated by spectrophotometric reading at 540 nm. The viability of the cells in each well is presented as a percentage of control cells. All experiments were performed in triplicate, and means ± standard deviations are reported. Nonlinear regression analysis was performed using GraphPad Prism software (GraphPad Software, San Diego, CA, USA) to determine the CC50.
Hemolytic Assays
The hemolytic activity of the peptides was determined using fresh human erythrocytes from healthy donors, as reported previously [81]. Briefly, the blood was centrifuged and the erythrocytes were washed three times with 0.9% NaCl. Peptides were added to the erythrocyte suspension (5% v/v) at final concentrations ranging from 0.1 to 100 µM in a final volume of 100 µL. The samples were incubated with agitation at 37 °C for 60 min. The release of hemoglobin was monitored by measuring the absorbance (Abs) of the supernatant at 540 nm. The control for zero hemolysis (blank) consisted of erythrocytes suspended in the presence of the peptide solvent. Hypotonically lysed erythrocytes were used as a standard for 100% hemolysis. The percentage of hemolysis was calculated using the equation:
% hemolysis = [(Abs_sample − Abs_blank) / (Abs_total lysis − Abs_blank)] × 100
All experiments were performed in triplicate, and the standard deviations are reported.
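As a minimal illustration, the short Python sketch below applies the equation above to hypothetical A540 readings (all numbers are invented for the example, not measured values):

```python
def percent_hemolysis(abs_sample, abs_blank, abs_total_lysis):
    """% hemolysis = [(Abs_sample - Abs_blank) / (Abs_total_lysis - Abs_blank)] * 100."""
    return (abs_sample - abs_blank) / (abs_total_lysis - abs_blank) * 100.0

# Hypothetical triplicate supernatant readings for one peptide concentration
readings = [0.21, 0.19, 0.23]
blank, total_lysis = 0.05, 1.80          # solvent-only and hypotonic-lysis controls
values = [percent_hemolysis(a, blank, total_lysis) for a in readings]
mean = sum(values) / len(values)
sd = (sum((v - mean) ** 2 for v in values) / (len(values) - 1)) ** 0.5
print(f"hemolysis = {mean:.1f} ± {sd:.1f} %")
```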
Antiviral Activity
The antiviral activity of the peptides was evaluated through four different assays: cotreatment, virus pretreatment, cell pretreatment and post-treatment assays, where the main difference is the timing of addition of TL and its analogues. Cells were plated in 12-well plates (2.5 × 10⁵ cells/well) and incubated for 24 h at 37 °C. In all assays, peptides were added to medium without FBS and tested at noncytotoxic concentrations. All experiments were performed in triplicate. The inhibition of infectivity was evaluated by plaque assay, comparing the number of plaques obtained in the wells treated with the peptides to the plaques counted in the positive control (cells infected with virus, without peptide).
Cotreatment assay: Cells were treated simultaneously with virus at a multiplicity of infection (MOI) of 0.1 pfu/cell and with the peptides at the concentrations described above for 2 h at 37 °C. The mixture (virus/compound) was then removed, and complete medium (10% FBS) supplemented with 5% carboxymethylcellulose (CMC; Sigma, C5678, C5013) was added and incubated for 48 h at 37 °C in 5% CO2. The cells were fixed with 4% formaldehyde (Sigma, F1635), stained with 0.5% crystal violet, and the number of plaques was scored.
Virus pretreatment assay: Peptides were added to the virus (1 × 10⁴ pfu/mL) and incubated for 2 h at 37 °C. After incubation, each mixture (virus/peptide) was diluted so that the peptides were at a non-active concentration and the virus was at an MOI of 0.01 pfu/cell. The dilutions were added to cell monolayers for 1 h, and the cells were then incubated with CMC for 48 h. At the end, the cells were fixed, stained, and the number of plaques was scored.
Cell pretreatment: Cells were pre-cooled at 4 °C for 30 min; subsequently, the peptides were added and incubated for 2 h at 4 °C. Each virus was then added at an MOI of 0.1 for 1 h at 37 °C. Finally, the cells were incubated with CMC for 48 h at 37 °C, fixed, stained, and the number of plaques was scored.
Post-treatment: Cells were incubated with viruses (MOI 0.1) for 2 h at 37 °C, after which the peptides were added and the cells incubated with CMC for 48 h at 37 °C. The cells were then fixed, stained, and the number of plaques was scored.
The assay recording the highest antiviral activity for most of the viral models used was chosen for non-linear regression analysis using GraphPad Prism software to determine the IC50 and IC90.
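To make the computation concrete, here is a rough Python sketch of the inhibition calculation and dose-response fit, using scipy as an assumed stand-in for the GraphPad Prism fits actually used in the study; the plaque counts, concentrations and Hill-curve form are hypothetical placeholders (the same fit applied to viability data would yield a CC50):

```python
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.1, 1.0, 6.25, 12.5, 25.0, 50.0])   # peptide concentrations, µM
plaques_treated = np.array([95, 80, 45, 12, 3, 1])     # hypothetical plaque counts
plaques_control = 100.0                                # virus-only positive control

# Inhibition rate relative to the positive control
inhibition = (1.0 - plaques_treated / plaques_control) * 100.0

def hill(c, ic50, h):
    """Simple dose-response curve bounded between 0 and 100% inhibition."""
    return 100.0 / (1.0 + (ic50 / c) ** h)

(ic50, h), _ = curve_fit(hill, conc, inhibition, p0=[10.0, 1.0])
ic90 = ic50 * 9.0 ** (1.0 / h)   # solving hill(c) = 90 for c gives c = IC50 * 9^(1/h)
print(f"IC50 ≈ {ic50:.2f} µM, IC90 ≈ {ic90:.2f} µM")
```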
Calculation of Therapeutic Index
The therapeutic index (TI) is a widely accepted parameter representing the specificity of antimicrobial agents. To evaluate the margin of safety between the dose needed for antiviral effects and the dose that produces unwanted and possibly dangerous side effects (cytotoxicity), the TI for each peptide was calculated from the efficacy and cytotoxicity data (CC50/IC50). Larger TI values indicate greater antiviral efficiency.
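In code the index is a one-line ratio; the CC50/IC50 values below are hypothetical examples, not measured data:

```python
cc50, ic50 = 90.0, 1.5      # µM, hypothetical values from the fits above
ti = cc50 / ic50            # larger TI -> wider margin between efficacy and toxicity
print(f"TI = {ti:.0f}")
```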
Statistical Analysis
All tests were performed in triplicate and results are expressed as mean ± standard deviation (SD), calculated with GraphPad Prism (version 8.0.1). One-way ANOVA followed by Dunnett's multiple comparisons test was performed; a value of p ≤ 0.05 was considered significant.
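A sketch of the same analysis in Python (SciPy ≥ 1.11 provides Dunnett's test); the data arrays are hypothetical placeholders for triplicate measurements, and Prism remains the tool the study actually used:

```python
import numpy as np
from scipy import stats

control = np.array([100.0, 98.0, 101.0])    # untreated wells, hypothetical
treated_a = np.array([80.0, 83.0, 79.0])    # peptide A, hypothetical
treated_b = np.array([55.0, 58.0, 52.0])    # peptide B, hypothetical

# One-way ANOVA across all groups, then Dunnett's test versus the control
f_stat, p_anova = stats.f_oneway(control, treated_a, treated_b)
dunnett = stats.dunnett(treated_a, treated_b, control=control)
print(p_anova, dunnett.pvalue)              # significant if p <= 0.05
```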
Conclusions
In summary, we have described a set of modifications of TL peptides aimed at maximizing the ability of the new compounds to mitigate the infectivity of enveloped viruses. We found that these TL-modified peptides are able to inhibit a wide range of enveloped viruses through direct, nonspecific interactions with viral surface components. Emerging and re-emerging virus outbreaks remind us of the urgent need for broad-spectrum antivirals. Further studies, together with the rational optimization of lead compounds able to inhibit both DNA and RNA enveloped viruses by preventing early virus-host cell interactions with minimal induction of drug resistance, will provide us with a stronger armamentarium to face emerging virus outbreaks in the future.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The authors confirm that all relevant data are included in the article.
Conflicts of Interest:
The authors declare no conflict of interest.
"Medicine",
"Biology",
"Chemistry"
] |
Environmental Science and Sustainable Development
The built environment has recently been shown to have a direct causal effect on well-being, influencing both the quality of life and the performance of the urban environment. While this relationship is often difficult to establish, the urban built environment plays a major role in shaping the way people behave inside it (Thwaites et al., 2016). As urbanization takes place in a transforming society, societal development leaves its impact on urban spaces. When industries and development plans decline in some parts of the city, especially those with archaeological value, these parts become abandoned due to migration. Urban pockets or gaps inside the built environment are left behind to suffer from informality, deterioration, and increasing crime and unemployment rates. The aim of this research is to find possible solutions for improving abandoned public spaces that accommodate dangerous buildings, high unemployment rates and unsafe urban areas in the regional range of Meet-Ghamr, Dakahlia governorate (Lisowska, 2017). The targeted area of the study is the minaret of El Amir Hamad in Meet-Ghamr, Dakahlia, Egypt, one of the most important archaeological sites of the Islamic heritage of Dakahlia governorate, which is well known for its unique mosques and mosque minarets.
Introduction
Without a doubt, accurately identifying and improving historical sites and buildings is extremely important, as it allows communities to continue to exist while preventing history from fading away and being ruined. Abandonment may cause the deterioration of the assets and the surrounding urban area (Shane, 2012).
The complex problems of abandoned historical sites pose a threat to the surrounding structures and directly affect the type and rate of crime, in a process known as "house stripping," "scavenging," or "urban mining," in which the offender strips the asset and sells off its materials. Abandoned urban areas are strongly linked to naturally increasing and diversifying crime, which further erodes the value of the asset (Shane, 2012).
This empirical research demonstrates the importance of linking the preservation of archaeological sites to a long-term conservation plan that protects them from being abandoned and demolished. In addition, it discusses the term "heritage crime" and the relation between crime and abandoned heritage areas.
The importance of preserving archaeological sites is discussed through a study of the local area of "El Amir Hamad Corner," one of the extensive archaeological areas of Meet-Ghamr, Dakahlia, Egypt. Additionally, the study highlights the problems affecting the area, such as negligence, demolition, abandonment, and the loss of amenities and services following the arrival of new dwellers, whether criminals or displaced people.
Research aims and objectives
The study aims to prepare a development plan for the archaeological site of El Amir Hamad Corner, taking into consideration the area's built environment on an urban scale, through: -Examining the current situation of El Amir Hamad Corner and investigating the physical and social conditions of the local area.
-Studying crime rates, types and distribution inside the historic area to evaluate the situation and establish an operational plan.
-Preparing several planning alternatives and achieving the target of conserving the historic site by exploiting the area's extensive archaeological history.
Research methodology
A quantitative and qualitative field study of El Amir Hamad Corner is conducted to determine the number of residents, building conditions, and types of economic activities in the area. From both practical and theoretical perspectives, an analytical study of the existing state of the historic buildings and street network is conducted to investigate the behavior of active users and the area's strengths and critical weaknesses. A sustainable development plan is then drawn up for the historic area within a strategic framework that addresses its social dimension and its complex local and regional context.
Finally, a conclusion is drawn based on the analytical data and research results to create an optimum solution for the El Amir Hamad archaeological site.
El Amir Hamad Corner, Dakahlia, Egypt (case study)
Tourism is a major phenomenon in Egypt, a country that encompasses about one-third of the world's historical sites. Hence the necessity of developing and conserving these sites in a way that improves the social and psychological aspects of the district has become evident. It is also highly significant not to abandon them, so that they can be perceived as an explicit and healthy image of the space (Ghodya, 2021).
Tourism temporarily brings increased numbers of people from outside the local district into a relatively compact area. It can change a hazardous site into a safe one by introducing tourists ("allowed strangers") to the local district, which may transform the area's character into a better one, especially if it already suffers from certain types of crime such as drug dealing, theft, prostitution, or vandalism (Postma and Schmuecker, 2017).
Unused urban spaces and abandoned urban gaps between buildings that receive no community attention, especially in a city with many archaeological and historical sites, can cause many social, psychological and physical problems. These spaces have high potential for reconstruction and conservation. Therefore, gradually integrating them into the surrounding community through well-designed conservation plans can help develop the area over time.
El Amir Hamad Corner is selected as the case study for an economic and social development plan to improve regional security and reduce the local crime rate, as it contains several historical buildings (El Ghamry Mosque, El Amir Hamad Minaret and El Arwam Church). In addition, it suffers from neglect and a low level of conservation and maintenance, and it accommodates certain types of criminal acts that make it an unsafe area for local citizens.
The site was constructed by El Amir Hamad Ben Mekled Beik in the year 1615 and was declared an Islamic archaeological site on 21 November 1951 by official declaration No. 10357. It consists of the El Amir Hamad minaret, which stands close to the grand El Ghamry Mosque, founded in the Mamluk period. The historic mosque occupies a significant position with a Nile view in Meet-Ghamr, Egypt.
Problem Definition
The gradual deterioration of certain historical areas leaves a negative impression of the cultural character and safety of a social space that carries Islamic and Coptic history. In the study area of El Amir Hamad Minaret, gradual deterioration and low conservation rates progressively separate the area from the surrounding fabric, leaving the inner space with no definite building shapes, scattered archaeological remains, high unemployment and crime rates, and a congested fabric in the middle of the local area. The ruined mosque and the graceful minaret are left abandoned and exposed to destruction. Thus, the area becomes a convenient space for potential criminals and disorderly behavior.
One of the major problems of the proposed site is that the surrounding buildings constitute a slum informal area with no specific shape or social structure. It was declared an unsafe urban area by the Dakahlia governorate's unit of informal settlements.
The study area of El Amir Hamad Corner suffers from a condensed fabric of crumbling but still occupied buildings, reflecting the gradual deterioration of the historic site around the graceful minaret and the ruined Ghamry mosque; most of the iconic buildings are owned by the Endowment ("Awqaf"). The area is also neglected by private owners, which leaves the abandoned area a suitable host for criminals, disorderly acts and urban crime. The low-rise buildings in the middle of the local area markedly reduce the possibility of natural surveillance and decrease direct contact with the surrounding fabric. Meanwhile, the visible outline of the area includes the tallest buildings, which leaves the middle severely deteriorated and poorly connected to the outer district. According to the most recent surveys of the area, which suffers from drug dealing, theft, prostitution, and vandalism, the two great mosques have been left to decay with no maintenance; the area has thus suffered from neglect and developed a high crime rate.
El Amir Hamad Corner and safety
The area lies on the shore of the Nile River in the city of Meet-Ghamr, at the corner of El Amir Hamad, which is considered a unique piece of archaeological architecture. The area suffers from harsh environmental conditions: the surrounding area, which includes 104 old dilapidated houses, is classified as a slum area and entered the government development program several years ago without progress or achievement, until it became a dumping ground and turned into an unsafe zone (Government slum development unit).
The site acquired great importance as a religious, spiritual, and commercial symbol. It was a port for transporting goods, supported by its proximity to the Nile River. Besides, it is one of the singular models of archaeological corners in Lower Egypt and needs to be highlighted for development and inclusion in tourism programs (Archaeological Awareness and Cultural Development, Dakahlia Archeology).
Among the most important Islamic shrines: Al-Mowafi Mosque
One of the most famous mosques in Mansoura, founded by King Al-Saleh "Nagm El-Din Ayoub" in 583 AH / 1187 AD.
Al-Ghamry Archaeological Mosque and Minaret
It is located in Meet Ghamr city. This unique minaret dates back to the Mamluk era.
Muhammad Ibn Abi Bakr Al-Sedik Mosque
It is located in Meet Demsis at Markaz Aga. It is the Mosque of Mohammad Ibn Abi Bakr Al-Sedik, son of Abu Bakr Al-Sedik, the Companion (Sahabi) of Prophet Mohammad (PBUH) and first Caliph of the Muslims. His tomb was discovered in 1950.
Prince Hamad
It is located next to Al-Ghamry Mosque and dates back to the Mamluk era. El Amir Hamad mosque is one of the historical monuments of Meet-Ghamr. It is in severe danger of demolition, which would erase the cultural history of a significant period: a number of tenants of the surrounding endowment-owned buildings demolished their rented houses and started constructing modern, nondescript buildings. Consequently, they damaged the local area's history and architectural heritage in order to build their own homes outside the regulations and government rules. Most of the demolished buildings are extremely damaged; they are owned by the endowment, which neglected them severely for years, and many acute problems stem from that exact reason.
The mosque is considered one of the rare suspended mosques of the Ottoman period in the eastern Delta region, thanks to the "wedoa" (ablution) places established at the bottom of the corner, as well as an external corridor under the southern area and lower rooms with intersecting domes. (https://gate.ahram.org.eg/daily/News/808123.aspx)
Site analysis
The area's condition is objectively analysed to depict the positive and negative points inside the area, starting from the solid and void maps. These show the dense, occupied fabric of privately owned buildings in appalling condition near El Ghamry Mosque and El Amir Hamad Minaret, a situation that led to the demolition of the remaining dilapidated surrounding buildings and left the area to potential criminals and prostitution.
A detailed study of building types and occupation is carried out to inform the design decisions for the area. The analysis shows that most of the land belongs to the government endowment and the rest is citizen-owned, "which caused the problem in the first place". By calculating the ratio, it is found that 65% belongs to the endowment and 35% is citizen-owned.
Among the most important Coptic shrines: Saint Mar-Gerges Church
It lies in Meet Demsis at Markaz Aga. It is composed of two buildings, one of which dates back more than 1,600 years.
Monastery of St. Damiana:
It is located in Damiana village at Markaz Belqas. It includes five churches, one of which is an archaeological church in Gothic style that was discovered in late 1947.
Understanding Heritage Crime for the Study Area
Heritage crime is a concept attached to any heritage site. It means "any offence which harms the value of heritage assets and their settings to present and future generations or which impairs their enjoyment." In other words, it is a punishable offence in which an individual intentionally harms the moral or physical value of heritage assets and their historical settings (Bryant, 2020).
According to Yaron Gottlieb, there are many different types of crimes against our heritage, including theft of lead and other metals from churches and other historic buildings; architectural theft; illegal metal detecting; unlawful alteration of and damage to listed buildings; unlawful demolition of buildings and structures in conservation areas; damage to monuments; arson; graffiti; and other forms of antisocial behavior in proximity to heritage assets (Gottlieb, 2020).
Most heritage assets are protected by specific legislation to prevent harm caused by damage or unlicensed alteration at the international level. However, other crimes, such as theft, criminal damage, aggravated arson, and anti-social behavior offences, can also damage heritage assets and interfere with the public's enjoyment and knowledge of their cultural heritage (Korsell et al., 2006). In Egypt, touristic sites are carefully secured, and most of them are protected by established rules and local legislation. However, in some smaller cities with a great deal of local historical heritage and valued assets, touristic sites lie outside the reach of regulations and political attention and are left to gradual deterioration, as in the proposed area of Meet Ghamr, Dakahlia, Egypt, which could easily be affected by heritage crime (Tavakoli and Marzbali, 2021).
The proposed study area contains the famous and valued minaret of El Amir Hamad in addition to the ruined Ghamry mosque, whose unique design is similar to that of "Al-Azhar Al-Sharif". However, they suffer from several types of heritage crime, including architectural theft of ornaments from the mosque and the church (particularly of wood and stone); criminal damage (e.g. graffiti on a scheduled monument); illegal metal detecting; anti-social behavior (most notably occupying and living inside historic sites, along with other activities); unauthorized changes to historic buildings; and the illegal trade in cultural objects (Bryant, 2020).
Abandoning this historic site without any sort of maintenance results in an increase in violent crime rates over time, with specific types of punishable crimes that intentionally harm the abandoned properties and local inhabitants. Consequently, these crimes take hold and increase with time (Han and Helm, 2010).
Tourism Industry and Safety for the Abandoned Historic Sites
The potential tourist and resident users of a space bring an economic diversity of options and further economic improvements that increase incomes for both stakeholders and local inhabitants of the area. On the economic scale, a local tourism industry involves many complementary activities and accordingly offers and sustains employment in many related areas (WTO, 2002). By encouraging local tourism through a suitable development plan, the local economic value of the urban space increases, which helps bring investment to the area; local hotels, cafes, food courts, pedestrian gathering points, tour guide agencies, retail and souvenir shops can deliver many diverse levels of gainful employment for local people in the neighborhood (OECD, 2020). According to a study conducted in Philadelphia, a heritage city that suffers from vacant and abandoned buildings, the city is using an abandoned-building remediation strategy to reduce blight and crime, stabilize real estate value, and encourage economic development. The results indicate that this low-cost method of renovation could be an effective means to reduce crime. Moreover, the study provides useful evidence of the potential effect of abandoned-building remediation policies on decreasing crime rates in cities (Kondo et al., 2015).
Proposed Development Plan for the Study Area
Cultural tourism brings massive benefits to any archaeological site through its vast impact in increasing pedestrian safety, social balance and economic balance, and in positively changing criminal attitudes. It is therefore proposed to conserve the historic site with all the great archaeological buildings inside it (El Amir Hamad Minaret, El Ghamry Mosque, and El Arwam Church) and to establish a path for tourists to enter the site and enjoy the magnificent scenes offered by the historic buildings and mosques, after carefully restoring them to their past state. From the outside, tourists can enjoy the broad view of the Nile and El Arwam Church. The proposed solution comprises the following steps of improvement. The touristic path is proposed according to the arrangement of the significant archaeological buildings inside the site, with respect to the main streets, secondary streets, car pathways, pedestrian walkways, and the distribution of empty land.
First step: explore all the vacant land to manage the problem of available space.
Second step: identify the archaeological lots that can be included in the touristic path.
Third step: begin rearranging the local inhabitants at the visible edge of the historical site in order to conserve all the historic buildings.
Fourth step: use all empty lands inside the area to provide amenities and services for tourists, and manage the sites belonging to the endowment to revive the history of those buildings.
Fifth step: plan a touristic path for visitors entering the site, as indicated in figures 8 and 9. Three possible pathways follow from the proposed plan, as described below.
First Proposed Path:
The first route starts from one of the main streets of the area overlooking the River Nile, "El Horeya Road", leading directly to the middle of the historic site towards El Arwam Church and then through "Ez El Dein Road", which faces El Ghamry Mosque with its fantastic architecture. The tour then leads to the proposed pedestrian area that starts from "Zawiya Street" and ends at the main gathering point, where the official trip ends, as illustrated in the following diagram.
Second Proposed Path:
It starts from the main street "El Horeya road" and the proposed main waiting and gathering point, and then inside the historic site towards the Arwan church, then inside the site from "Eez el Dein road" which faces El Ghamry mosque, and then to a pedestrian area starting from "Hamad street" and ending at the proposed main gathering point to end the official trip as illustrated in the following diagram.
Third Proposed Path:
It starts from the main street "El Dakhakny road", reaching to El Ghamry Mosque and then to Prince Hamad Minaret, passing through the other conserved buildings and finally to the nearest gathering point proposed, which includes all the amenities that any tourist may need.
According to previous research, the rate of potential crime decreases gradually over time with proper maintenance and management. The development of the area and the gradual introduction of tourism effectively increase activity inside the area and bring a new economic driver to it.
This effort can help develop the surrounding area and stimulate the local governorate to integrate the development of abandoned historical sites into its management plan. Furthermore, it can help other Middle Eastern government agencies to propose integrated management tools for the revitalization of historic cities.
Conclusion
The problem with abandoned buildings and vacant lots lies in their being devastated structures seen daily by urban residents. They may create physical opportunities for violence by sheltering illegal activity, while the neglect of historical assets and areas exposes them to destruction, abandonment and further decay. This can lead to severe changes in the fine grain of the urban fabric as well as in crime typology and rates.
A complete management plan should be applied by the government to improve and gradually restore those sites, rather than merely keeping them as they are. An essential part of the problem is the endowment policies and how improperly they treat the old assets of the local area.
By introducing historical tourist resources to the site, many problems can be rectified over time. It also contributes to reducing crime and the fear of crime, which sustains the proposed development plan for the population of those areas. The economic aspect of the site is a crucial factor in its development: urban planning studies for heritage areas should rely on tourism as a main source of income, benefiting from development and from the creation of new economic value that supports the well-being and lifestyle of the area's inhabitants and users.
Improving the behavioral, social and psychological conditions and the well-being of the local community represents a significant challenge in minimizing the crime rate and the fear of crime.
Recommendations
-The proposed management plan strongly supports the successful completion of the work and regulates cultural tourism at such archaeological sites. The relationship between crime rates and cultural tourism will undoubtedly remain a sensitive point of debate for a long time. Some researchers suggest that tourist destinations can clearly affect visitors' perception of safety. On the other hand, incorporating active users from different cultures into a space can reduce crime rates over time. As the local area improves, new sources of work will emerge and a new lifestyle will develop under government supervision, affecting both the economic and the social level.
-Taking into consideration the image of the local area, which makes it a unique space to explore beyond the heritage site itself (culture, hospitality, infrastructure, and local attractions), the security of the site should be managed uniformly, and introducing the site to local tourism will make it safer over time.
-The authorized public and private sectors should undertake the civic responsibility to maintain and improve the historic site against all practical risks of crime and demolition.
-Introduce alternative activities and historical events related to the most important heritage buildings, to be held regularly in the local area, such as celebrations of Islamic and Christian religious occasions.
-Carry out public meetings with local users and stakeholders to inform them about the historical importance of improving the standard of living inside the area and to identify safety and security problems.
-Introduce a long-term plan for the possible threats that cultural tourism could bring into the protected area, linked to the emergency plans of the site.
-An authorized public or private force should be deployed inside the local area to intervene and resolve problems whenever necessary.
Empirical crime data for the area, in sufficient quantity and detail, should be reviewed from time to time to identify hot spots and initiate the right action at the right time. Additionally, crime data should be made available by the local government to the general public to ensure the key role of local awareness and to encourage inhabitants to help solve crime problems.
Finally, it is highly recommended to strengthen the inhabitants' sense of ownership, which enhances the perception of safety and reduces crime rates.
"Engineering"
] |
Accelerated knowledge discovery from omics data by optimal experimental design
How to design experiments that accelerate knowledge discovery on complex biological landscapes remains a tantalizing question. We present an optimal experimental design method (coined OPEX) to identify informative omics experiments using machine learning models for both experimental space exploration and model training. OPEX-guided exploration of Escherichia coli populations exposed to biocide and antibiotic combinations leads to more accurate predictive models of gene expression with 44% less data. Analysis of the proposed experiments shows that broad exploration of the experimental space followed by fine-tuning emerges as the optimal strategy. Additionally, analysis of the experimental data reveals 29 cases of cross-stress protection and 4 cases of cross-stress vulnerability. Further validation reveals the central role of chaperones, stress response proteins and transport pumps in cross-stress exposure. This work demonstrates how active learning can be used to guide omics data collection for training predictive models, making evidence-driven decisions and accelerating knowledge discovery in life sciences.
Overview
We have designed the Optimal Experimental Design Framework (OPEX) to identify the optimal set of transcriptomic experiments for maximizing prediction power in unobserved culture conditions in three steps (Fig. 1, steps 1-3). In the first step, we use the available transcriptomic data to Build Predictive Model of gene expression using culture condition as the model input. In the second step, we Calculate Utility Scores for unobserved culture conditions using the predictive model from the first step. In the third step, we Select Optimal Conditions amongst all unobserved culture conditions given their utility scores from the second step. In its general form, OPEX is the following optimization problem:
X_s = ArgMax_{X_s, |X_s| = b} h_utility(X_s | X_o, Y_o)    (1)
where the matrix X_s denotes the culture conditions for the next batch of experiments, the matrix X_o denotes the culture conditions for the observed experiments (with each row of the matrix being an experiment), the matrix Y_o contains the gene expression profiles that map to the corresponding experiments of X_o, and the scalar b denotes the batch size (i.e. the number of conditions to run in the next batch). The optimality of a batch of candidate conditions in matrix X_s is determined using the utility function h_utility, and the optimal batch is returned by ArgMax.
General Mathematical Formulation
The following describes the three-step OPEX algorithm for finding X_s. The modular design of the OPEX algorithm (Algorithm 1) allows different methods to be used in each step. The vector u contains m real-valued utility scores, one for each unobserved condition, each encoded by a corresponding row of the matrix X_u of unobserved conditions. Next, we describe the methods used in our implementation and results.
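To make the loop structure concrete, here is a schematic Python sketch of one OPEX iteration under our own simplifying assumptions (a scikit-learn GP as the model, predictive uncertainty as the utility); it illustrates the shape of Algorithm 1 rather than the authors' code:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def opex_round(X_obs, Y_obs, X_unobs, batch_size=1):
    """One OPEX iteration for a single gene."""
    gp = GaussianProcessRegressor().fit(X_obs, Y_obs)   # step 1: build predictive model
    _, std = gp.predict(X_unobs, return_std=True)       # step 2: utility scores (here: uncertainty)
    return np.argsort(std)[::-1][:batch_size]           # step 3: select optimal conditions

# Toy usage: 14-bit condition vectors and one gene's expression values
rng = np.random.default_rng(0)
X_all = rng.integers(0, 2, size=(40, 14)).astype(float)
X_obs, X_unobs = X_all[:8], X_all[8:]
Y_obs = rng.normal(size=8)
print(opex_round(X_obs, Y_obs, X_unobs, batch_size=3))  # indices into X_unobs
```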
Build Predictive Model
For each gene of E. coli, we used Gaussian Processes (GP) to build a predictive model, i.e. to predict the expression level of the gene under a culture condition characterized by a row vector x. In our real-data results, x is a 14-bit binary vector representing the presence/absence of the ten biocides and four antibiotics which characterize a given culture condition; in our synthetic-data results, x is a real-valued vector. The covariance matrix K, whose entries are given by the squared-exponential (SE) kernel, represents all pairwise correlations for the given gene. The parameter σ_f represents the amplitude of overall correlation along all dimensions in the SE kernel, while the length-scale parameters ℓ_i are used for automatic relevance determination [1]. A larger value of ℓ_i represents a smaller influence of the i-th independent variable of a culture condition on the gene expression. These parameters are learned by maximizing the marginal likelihood of the observed data given the parameters. For a detailed derivation of the equations related to GP, see [1].
Given the selection of GP as our model in this work (equations (1-4)), for each gene the trained predictive distribution at a new condition x_new is fully defined by its posterior mean μ(x_new) and variance σ²(x_new), which are used to predict the gene expression using μ(x_new) from equation 2 and to calculate utility scores as described next.
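A minimal sketch of such a per-gene GP, assuming scikit-learn as a stand-in for the authors' implementation: the ConstantKernel plays the role of the amplitude σ_f, and the anisotropic RBF length scales correspond to the ℓ_i used for automatic relevance determination; fitting maximizes the marginal likelihood.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

n_dims = 14                                                        # ten biocides + four antibiotics
kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(n_dims))   # amplitude * ARD squared-exponential
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

rng = np.random.default_rng(1)
X_obs = rng.integers(0, 2, size=(12, n_dims)).astype(float)        # toy observed conditions
y_obs = rng.normal(size=12)                                        # toy expression of one gene
gp.fit(X_obs, y_obs)                    # hyperparameters fit by maximizing marginal likelihood

x_new = rng.integers(0, 2, size=(1, n_dims)).astype(float)
mu, sigma = gp.predict(x_new, return_std=True)   # posterior mean/std at a new condition
print(gp.kernel_.k2.length_scale[:4])            # fitted per-dimension ARD length scales
```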
Calculate Utility Scores
We evaluated OPEX using three different utility score calculation methods, described here. The utility score u_new for a new unobserved condition x_new is calculated using the utility function. The utility scores of all unobserved conditions are represented by the vector u = [u_1, u_2, …, u_m], where u_i is the utility score of the i-th unobserved condition for a given gene. Mutual Information (MI). In the setting of MI, the idea is to select the most representative culture condition amongst all possible unobserved culture conditions. The representativeness of the n observed culture conditions is quantified by the mutual information (MI) between the observed and the unobserved ones. One sequential design implementation is to select the culture condition which provides the highest increase in mutual information between the observed and unobserved datapoints [3]. The covariance matrices used in this calculation are obtained from the GP kernel function for a given gene.
Entropy (EN). In the entropy setting, the utility of an unobserved condition is the entropy of the GP predictive distribution at that condition which, for a Gaussian, grows monotonically with the predictive variance; the most uncertain condition is the most useful one.
Covariance (COV). Σ_u,o is a covariance matrix composed of the pairwise correlations between the unobserved conditions and the observed conditions; each entry in the matrix is calculated by the kernel function of the GP. Σ_o,o is a covariance matrix composed of the pairwise correlations between the observed conditions. The covariance utility function is equal to the increment in the trace of the covariance matrix, where Σ_{u−new},{o+new} is the same as Σ_u,o except that a given unobserved condition x_new is removed from the set of unobserved conditions and added to the observed ones for a given gene.
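The sketch below gives one plausible Python reading of these utilities on top of a fitted scikit-learn GP (the `gp` object above, with fitted kernel `gp.kernel_`): entropy via the predictive variance, an MI-style score as the variance ratio of Krause et al., and a covariance-style score measured as the reduction in total posterior variance over the remaining unobserved conditions. This is our interpretation, not necessarily the authors' exact formulation.

```python
import numpy as np

def entropy_utility(gp, X_unobs):
    """EN: for a Gaussian predictive distribution, entropy grows with variance."""
    _, std = gp.predict(X_unobs, return_std=True)
    return std ** 2

def _cond_var(k, x, X, noise=1e-6):
    """Posterior variance of x given conditioning set X under kernel k."""
    K_xX = k(x[None, :], X)
    K_XX = k(X) + noise * np.eye(len(X))
    return k(x[None, :])[0, 0] - (K_xX @ np.linalg.solve(K_XX, K_xX.T))[0, 0]

def mi_utility(gp, X_obs, X_unobs):
    """MI-style score: variance given observed over variance given the other
    unobserved conditions; high when informative yet non-redundant."""
    k = gp.kernel_
    return np.array([_cond_var(k, x, X_obs) /
                     _cond_var(k, x, np.delete(X_unobs, i, axis=0))
                     for i, x in enumerate(X_unobs)])

def covariance_utility(gp, X_obs, X_unobs, noise=1e-6):
    """COV-style score: drop in total posterior variance over the remaining
    unobserved conditions when x_new moves to the observed set."""
    k = gp.kernel_

    def post_trace(X_u, X_o):
        K_uo = k(X_u, X_o)
        K_oo = k(X_o) + noise * np.eye(len(X_o))
        return np.trace(k(X_u) - K_uo @ np.linalg.solve(K_oo, K_uo.T))

    scores = np.empty(len(X_unobs))
    for i in range(len(X_unobs)):
        rest = np.delete(X_unobs, i, axis=0)
        scores[i] = (post_trace(rest, X_obs) -
                     post_trace(rest, np.vstack([X_obs, X_unobs[i:i + 1]])))
    return scores
```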
Select Optimal Conditions for each gene
Following the general optimization equation (1) and using the terminology above, the next condition to select for the j-th gene is the one that has the maximum utility score, where u_new is the utility score corresponding to a given unobserved condition x_new, calculated by one of equations 8, 10 and 12, depending on the utility function used. For example, when we use mutual information (i.e. equation (10)) as the utility function, the most informative condition for the j-th gene is selected by solving the corresponding OPEX optimization problem, where x_new is a 14-bit binary vector representing the presence/absence of the ten biocides and four antibiotics for a culture condition.
For a batch size larger than one (i.e. b > 1), the next condition in the batch was selected by greedy, constrained or adaptive sampling, as described in the respective results. In greedy sampling, the conditions are ranked based on their utility scores, and the top b conditions with the highest utility scores are selected for the next batch. In constrained sampling, the condition with the highest utility score is selected and added to the batch; we then iterate through the remaining conditions, ordered by their utility scores, and calculate their Euclidean distance to the items already in the batch. Conditions with at least a minimum distance (based on a predefined threshold) are added until the batch-size limit is reached. Finally, in adaptive sampling, the condition with the highest utility score is selected and added to the batch; the predicted gene expression profile of the newly selected condition is then treated as observed, the model is retrained and the utility scores are updated. The condition with the highest updated utility score is added to the batch, and this process is repeated until the batch-size limit is reached.
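A sketch of the three batch strategies in Python, assuming a utility vector `scores` aligned with the rows of `X_unobs`; the distance threshold and the variance-based utility in the adaptive variant are placeholders for the paper's exact choices:

```python
import numpy as np

def greedy_batch(scores, b):
    return list(np.argsort(scores)[::-1][:b])

def constrained_batch(X_unobs, scores, b, min_dist=2.0):
    order = np.argsort(scores)[::-1]
    batch = [int(order[0])]
    for i in order[1:]:
        if len(batch) == b:
            break
        dists = np.linalg.norm(X_unobs[int(i)] - X_unobs[batch], axis=1)
        if dists.min() >= min_dist:                  # keep the batch spread out
            batch.append(int(i))
    return batch

def adaptive_batch(gp, X_obs, Y_obs, X_unobs, b):
    batch, Xo, Yo = [], X_obs.copy(), Y_obs.copy()
    for _ in range(b):
        gp.fit(Xo, Yo)                               # retrain after each pick
        _, std = gp.predict(X_unobs, return_std=True)
        std[batch] = -np.inf                         # never re-pick a condition
        i = int(np.argmax(std))
        batch.append(i)
        Xo = np.vstack([Xo, X_unobs[i:i + 1]])       # treat prediction as observed
        Yo = np.append(Yo, gp.predict(X_unobs[i:i + 1]))
    return batch
```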
Select Optimal Conditions for Most Genes
With the optimal unobserved conditions selected for each gene, we count the frequency of each selected condition (or batch) and select the most frequent one from $[x^{(1)}, x^{(2)}, \ldots, x^{(G)}]$, where $G$ is the number of genes.
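A minimal sketch of this majority vote; `per_gene_choice`, a list holding the condition selected for each gene, is an assumed data layout.

```python
from collections import Counter

def most_frequent_choice(per_gene_choice):
    """Pick the condition chosen by the largest number of genes."""
    return Counter(per_gene_choice).most_common(1)[0][0]

# Example: three genes voted for condition 7, one for condition 2.
assert most_frequent_choice([7, 2, 7, 7]) == 7
```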
Alternative Optimal Experimental Design Methods
For benchmarking, we used three other optimal experimental design approaches: query by committee using different types of models [4][5], query by committee using bootstrapping [6], and D-optimal experimental design [7]. Compared to OPEX, all approaches differ in the utility function used, while the last approach also employs a different predictive model.
Query by Committee Using Different Types of Models.
We used a feedforward neural network (FNN), linear regression, a Gaussian process and Support Vector Regression (SVR). For training the FNN and SVR, we used the R packages neuralnet and e1071, respectively. The number of hidden nodes of the FNN and the two hyperparameters of SVR were optimized by grid search. In each iteration, the condition with the highest disagreement amongst the different models (i.e. the highest variance) was selected for the next iteration. When generating a learning curve, the GP model was used.
Query by Committee Using Bootstrapping. Here we used one type of model (GP) but changed the training set using bootstrapping to build a committee of four GP models. Likewise, the condition with the highest disagreement amongst the models (i.e. the highest variance) was selected for the next iteration. When generating the learning curve, the GP model was trained without bootstrapping.
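A minimal sketch of the bootstrapped committee, using scikit-learn's GaussianProcessRegressor as a stand-in for the GP implementation actually used; the committee size of four follows the text.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def qbc_bootstrap_scores(X_train, y_train, X_pool, n_models=4, seed=0):
    """Disagreement (variance across a bootstrapped GP committee) for each
    pool condition; the highest-variance condition is queried next."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X_train), len(X_train))  # bootstrap resample
        gp = GaussianProcessRegressor().fit(X_train[idx], y_train[idx])
        preds.append(gp.predict(X_pool))
    return np.var(np.stack(preds), axis=0)
```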
D-optimal Experimental Design.
Here, we used a linear model to predict gene expression (linear models were trained with the built-in implementation of linear regression in R). The condition that increased the determinant of the information matrix $X^TX$ the most was selected at each iteration, where $X$ is a matrix whose rows are the vectors representing the culture conditions.
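A minimal sketch of this criterion; note that det(XᵀX) remains zero until the design matrix has at least as many rows as the 14 condition bits, so the criterion only starts to discriminate once enough conditions have been observed.

```python
import numpy as np

def d_optimal_pick(X_obs, X_pool):
    """Return the index of the pool condition that maximizes det(X^T X)
    of the augmented design matrix (the D-optimality criterion)."""
    dets = [np.linalg.det(np.vstack([X_obs, x]).T @ np.vstack([X_obs, x]))
            for x in X_pool]
    return int(np.argmax(dets))
```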
Expert Sampling
We designed three strategies for expert sampling by consulting one chemist and three biologists, to evaluate the effectiveness of OPEX compared to human experts. These are: First strategy: Structural similarity. The first strategy relied on comparing the pairwise structural similarity among the 10 biocides and 4 antibiotics; the least similar culture condition was selected in each iteration. Specifically, a 1024-bit topological fingerprint was generated for each chemical using the Python package rdkit, and the pairwise Tanimoto similarity among the biocides and antibiotics was calculated [8]. The similarity between two culture conditions was defined as the sum of the similarity between the two biocides and that between the two antibiotics. When exploring the space defined by the biocides and antibiotics, we looked up the similarity of each unobserved culture condition to all the observed culture conditions and picked the least similar one.
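A minimal sketch of this computation with rdkit; the fingerprint and Tanimoto calls are standard rdkit APIs, while the SMILES inputs and the representation of a condition as a (biocide, antibiotic) pair are assumptions about the data layout.

```python
from rdkit import Chem, DataStructs

def tanimoto(smiles_a, smiles_b, n_bits=1024):
    """Tanimoto similarity between 1024-bit RDKit topological fingerprints."""
    fp_a = Chem.RDKFingerprint(Chem.MolFromSmiles(smiles_a), fpSize=n_bits)
    fp_b = Chem.RDKFingerprint(Chem.MolFromSmiles(smiles_b), fpSize=n_bits)
    return DataStructs.TanimotoSimilarity(fp_a, fp_b)

def condition_similarity(cond_a, cond_b):
    """Similarity of two (biocide, antibiotic) culture conditions, defined in
    the text as the sum of the biocide and the antibiotic similarities."""
    return tanimoto(cond_a[0], cond_b[0]) + tanimoto(cond_a[1], cond_b[1])
```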
Second strategy: Mechanism of action. The second strategy was based on similarity in the mechanism of action of each antibiotic and biocide. In each iteration, we sampled the antibiotic and biocide that were most different from the observed ones. For the mechanism of action of each antibiotic and biocide, see Supplementary Table 1.
Third strategy: Effect size. For the third strategy, three experts first ordered the four antibiotics based on their expected dominant impact on transcription in the central dogma of molecular biology: Ampicillin < Norfloxacin < Kanamycin < Rifampicin. Rifampicin is known to inhibit RNA polymerase, hence directly impacting transcription [9]. Kanamycin interferes with translation, hence indirectly impacting transcription through transcription factors [10]. Norfloxacin and Ampicillin are known to impact DNA replication and the cell wall, respectively, and hence were ordered last with respect to their impact on the overall transcription profile [11][12]. If an antibiotic is dominant, the choice of biocide would be expected to have a smaller impact on the gene expression of E. coli. We rationalized that if we have the gene expression under a culture condition with a more dominant antibiotic, we are likely to make good predictions for the culture conditions that share that antibiotic but differ in biocide. Based on this reasoning, we grouped the unobserved culture conditions into four groups based on the antibiotic; the group with a less dominant antibiotic was sampled before the group with a more dominant antibiotic. Within each group, we split the culture conditions into 5 buckets based on the mechanism of each biocide. Among the 5 buckets, we randomly selected one in each iteration while making sure that two consecutive datapoints were not from the same bucket.
Random Sampling
For random sampling, we randomly selected a datapoint (an experimental condition in our setting) from all the unobserved datapoints as the next datapoint to collect. The default random function in R was used.
Exploration-Exploitation Tradeoff
The calculated utility scores have a potential myopic bias; therefore, relying on them for selecting the next batch of experiments (i.e. exploitation of the model) can lead to overfitting. To avoid this, a portion of the conditions for the next batch can be selected randomly (i.e. exploration of the sample space). The exploration-exploitation trade-off is fundamental in optimal experimental design [2].
Exploration refers to switching to a strategy different from the predefined strategy based on one of the utility functions; its role is similar to that of simulated annealing [14]. Exploitation means exploiting the information learned from the collected data and selecting the next datapoint based on the predictions of a model trained on those data. We used the exploration frequency to control the tradeoff.
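One simple way to realize such a schedule is sketched below; the fixed every-k-th-iteration rule is an assumption, intended only to illustrate how an exploration frequency can interleave random picks with model-driven picks.

```python
import random

def pick_next(iteration, n_pool, scores, explore_every=3):
    """Exploit the utility scores most of the time, but pick uniformly at
    random on every `explore_every`-th iteration (exploration)."""
    if iteration % explore_every == 0:
        return random.randrange(n_pool)                  # exploration
    return max(range(n_pool), key=lambda i: scores[i])   # exploitation
```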
The effect of the exploration frequency parameter was evaluated on synthetic datasets and the RNA-Seq dataset.
Synthetic and Real RNA-Seq Datasets
OPEX was evaluated on synthetic datasets and a real RNA-Seq dataset.
RNA-Seq Dataset
We measured the gene expression profiles of E. coli under 45 culture conditions: the 4 antibiotics alone, 40 combinations of the 10 biocides and 4 antibiotics, and an untreated control. Out of all the genes of E. coli, 1,123 genes had a count per million (CPM) larger than 100 in at least half of the samples. The fold changes of genes with a CPM less than 100 in at least half of the samples are expected to be sensitive to the sequencing depth [13]. To exclude the effect of those genes, we tested OPEX using only the 1,123 genes. We also tested OPEX and its variations on all 4,391 genes, which yielded similar results (Section 2.5).
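As a concrete sketch of this filter, assuming a genes × samples matrix of raw read counts (the exact CPM implementation used in the study may differ):

```python
import numpy as np

def cpm_filter(counts, threshold=100, min_fraction=0.5):
    """Indices of genes whose counts-per-million exceed `threshold` in at
    least `min_fraction` of the samples. `counts` is genes x samples."""
    cpm = counts / counts.sum(axis=0, keepdims=True) * 1e6
    keep = (cpm > threshold).mean(axis=1) >= min_fraction
    return np.flatnonzero(keep)
```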
Validating OPEX on Synthetic Datasets
When validating the performance of OPEX on the synthetic datasets, we first randomly split each synthetic dataset into three parts: a training dataset, a pool of candidate conditions, and a benchmark set for evaluating the prediction performance of the trained predictive model. We then evaluated OPEX over 30 iterations. In each iteration, we trained a GP model on the training dataset, calculated the utility score of each candidate condition remaining in the pool, and selected a batch of conditions to add to the training set. Finally, we evaluated the predictive performance by the mean absolute error (MAE) of the predictions on the benchmark set. After running OPEX for 30 iterations, we visualized the prediction accuracy at each iteration as a learning curve and compared the learning curve of OPEX with that of the baseline. When selecting a batch of conditions, we tested the three approaches outlined in Section 1.2.3. Random sampling was used as the baseline for evaluating the performance of OPEX.
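A condensed sketch of one such validation run, using scikit-learn's GaussianProcessRegressor as a stand-in for the GP implementation actually used and a greedy batch rule; `utility_fn` stands for any of the utility functions described above.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def opex_loop(X_tr, y_tr, X_pool, y_pool, X_test, y_test,
              utility_fn, n_iters=30, batch=3):
    """One OPEX run: train a GP, score the pool, move the best batch into
    the training set, and record the benchmark MAE (the learning curve)."""
    curve = []
    for _ in range(n_iters):
        gp = GaussianProcessRegressor().fit(X_tr, y_tr)
        scores = utility_fn(X_tr, X_pool)
        top = np.argsort(scores)[::-1][:batch]           # greedy batch
        X_tr = np.vstack([X_tr, X_pool[top]])
        y_tr = np.concatenate([y_tr, y_pool[top]])
        X_pool = np.delete(X_pool, top, axis=0)
        y_pool = np.delete(y_pool, top, axis=0)
        curve.append(np.mean(np.abs(gp.predict(X_test) - y_test)))
    return curve
```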
Validating OPEX on an RNA-Seq Dataset
Since the RNA-Seq dataset does not contain as many records as the synthetic datasets, we slightly changed our validation method for it. We first split the whole dataset into two parts instead of three: one part served as the starting point of the training set, and the other served both as the pool of culture conditions for selection and as the benchmark dataset. The initial training set consisted of 15 randomly selected culture conditions in which each antibiotic and biocide was represented at least once. In each iteration, we trained a GP on the current training set, selected a candidate culture condition from the pool, moved it to the training dataset, and then evaluated the prediction performance of the retrained GP on the benchmark set. Note that the size of the benchmark set was reduced at each iteration. We ran the whole process 50 times, with a different random seed each time.
Two types of methods were used for comparison: random sampling and expert sampling (for details, see the section entitled Expert Sampling). Each sampling method was evaluated against random sampling using the MAE of the gene expression predictions at a given iteration.
Cluster Analysis on 40 Culture Conditions
We ran Principal Component Analysis (PCA) [15] and t-Distributed Stochastic Neighbor Embedding (t-SNE) [16] on the gene expression profiles of all 40 conditions using prcomp and Rtsne respectively in the R programming language, and projected the first two dimensions.
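The same projections in Python (the analysis above used R's prcomp and Rtsne; scikit-learn is a stand-in here, the random matrix is a placeholder for the real 40 profiles, and the perplexity value is an assumption):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

profiles = np.random.rand(40, 1123)  # placeholder for the 40 expression profiles

pca_2d = PCA(n_components=2).fit_transform(profiles)                   # first two PCs
tsne_2d = TSNE(n_components=2, perplexity=10).fit_transform(profiles)  # 2-D embedding
```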
Hierarchical clustering was also performed on all 40 gene expression profiles (Fig. 3B).
OPEX Accelerates Knowledge Discovery
To test whether OPEX can accelerate knowledge discovery (Fig. 3C)
Antimicrobials
Ten biocides and four antibiotics were used in this study. Biocides were selected based on their widespread use in hospitals and households [17], and antibiotics were selected based on their unique cellular targets (Supplementary Table 1).
Strains and Culture Conditions
Escherichia coli MG1655 was used in all experiments, except those performed to validate the genes involved in cross-protection and cross-vulnerability, where the wild-type Keio strain BW25113 and its derivative single-gene knock-out (KO) strains [18] were used. Since the KO strains carried a kanamycin resistance gene, which might influence the validation experiments, it was removed by a method described elsewhere [19].
Synthetic Datasets
Seven datasets whose skewness in the distribution of the output varies from 1.17 to 7.86 were generated.
In a highly skewed dataset, the output is close to zero over most of the space and only a few sharp peaks exist. See Supplementary Figure 1 for a visualization of the datasets, the distribution of the output of each dataset, and their summary statistics. The data for the synthetic datasets are in Supplementary Data 2.
The Performance of OPEX on Synthetic Datasets
We evaluated the performance of OPEX using the seven synthetic datasets with respect to five factors: skewness in the distribution of the output, the noise level in the measured output, the frequency of exploration, the initial dataset size, and the batch size. We are not aware of a comparably systematic analysis of these factors in other studies.
The Effect of Skewness and Noise. Interestingly, the advantage of OPEX over the baseline was inversely proportional to the skewness of the dataset (p-value < 10^-3 by t-test; Supplementary Figure 2A). OPEX was found to be robust to noise in the training set, outperforming the baseline even at very high noise levels (for the entropy utility function, 16% better than the baseline at 90% white noise on the 1st synthetic dataset, p-value < 10^-6 by t-test; Supplementary Figure 2D), and the window of benefit varied with the dataset skewness (Supplementary Figure 5). The Effect of Initial Dataset Size. When the initial dataset size was too small, the benefit of the OPEX methods was generally limited until more samples were collected; when not enough information is initially available, we do not expect OPEX to effectively drive experimentation. Similarly, when the initial dataset size was large relative to the size of the experimental space, the space had already been largely explored.
In that regime, increasing the dataset size with more experiments does not substantially change the information content of the dataset, regardless of the underlying sampling method (e.g. OPEX vs. random sampling).
OPEX Performance on the Biocide-Antibiotic Transcriptional Profiling
OPEX with entropy as the utility function significantly outperformed expert sampling and random sampling in exploring the interaction between biocides and antibiotics (Supplementary Figure 9).
The gap between the learning curves of OPEX and random sampling kept expanding until 23 more datapoints had been collected, at which point the MAE achieved by OPEX was 22% smaller than that of random sampling. To reach the same prediction accuracy as random sampling, OPEX needed 50% fewer datapoints.
Surprisingly, the performance of the three expert sampling strategies was worse than that of random sampling. Among the three expert sampling approaches, the one based on the chemical structure of the antibiotics and biocides was slightly better than the other two (Supplementary Figure 10). As more and more exploration was added, the performance of the expert sampling strategies approached that of random sampling but never surpassed it (Supplementary Figure 10).
Retrospective Analysis of the OPEX Strategy
To analyze the effectiveness of OPEX in exploring the space of unexplored culture conditions, we plotted the distance between the gene expression profiles of consecutively selected datapoints (the consecutive distance) over the 30 iterations. Not surprisingly, the consecutive distance fluctuated and no pattern was observed in the case of random sampling (Fig. 2D). However, the consecutive distance for OPEX with an even tradeoff between exploration and exploitation increased gradually in the first 10 iterations (p-value = 0.05) and then kept decreasing (p-value < 10^-6, Fig. 2E), indicating that OPEX can capture the similarity of gene expression profiles under different culture conditions. This reveals the underlying strategy of OPEX of progressively exploring the condition space, first at a coarse granularity and then at a finer granularity, which was confirmed by the fact that the distance in the first 15 iterations was above the median distance of all 30 iterations, while the distance in the latter 15 iterations fell below the median.
In other words, the distance between adjacent points in the gene expression space increased in the first 10 iterations and decreased afterwards, showing that OPEX first explored the space broadly and then refined its sampling at a finer granularity. The impact of the exploration percentage used by OPEX on the sampling strategy is illustrated in Supplementary Figure 11.
In the case of expert sampling based on structural similarity, the consecutive distance first fluctuated and then increased sharply at the end (Supplementary Figure 12A). For the other two expert sampling approaches, the consecutive distance was flat in the first 10 iterations, then increased slightly and finally decreased slightly (Supplementary Figure 12B-C). Not surprisingly, the curves get closer to the random sampling curve as the exploration percentage is increased (Supplementary Figure 12D-I).
Sensitivity Analysis of the OPEX Method
Here, we investigated the impact of the exploration frequency, skewness and noise level on the performance of OPEX, evaluated using the RNA-Seq dataset.
The Effect of Exploration. When the space to explore is of low complexity (e.g. convex fitness functions, few parameters/dimensions), following a single sampling strategy with a zero percentage of exploration is sufficient for OPEX, as was the case with the synthetic datasets (Supplementary Figure 4). However, for a complex space, as in the case of the RNA-Seq data with 14 independent variables and thousands of genes to predict, OPEX with zero exploration can overfit (Supplementary Figure 11A). We analyzed the diversity of the condition selected at each iteration among all the OPEX runs, using the Shannon index to quantify the diversity of the sampled conditions at each iteration (Supplementary Figure 15A; sketched below). This diversity was very low for OPEX compared to that of random sampling, indicating a tendency of OPEX to select particular conditions at each iteration; in particular, OPEX tended to sample the outlier condition discussed below (i.e. peracetic acid + kanamycin) regardless of the starting training datapoints. We confirmed this by visualizing the distribution of the culture conditions selected by OPEX at the last two iterations.
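For completeness, a minimal sketch of the Shannon index as used here, applied to the list of conditions selected at one iteration across the 50 runs; the input format is an assumption.

```python
import numpy as np
from collections import Counter

def shannon_index(selections):
    """Shannon diversity of the conditions selected at one iteration across
    all runs (low values mean the runs keep picking the same condition)."""
    counts = np.array(list(Counter(selections).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())
```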
At the 27th and 28th iterations, OPEX chose the peracetic acid + kanamycin condition 33 times among the 50 OPEX runs (Supplementary Figure 15B-C). Note that the peracetic acid + kanamycin condition was part of the initial dataset (i.e. the 15 randomly selected conditions) in only 11 of the 50 OPEX runs; the expected number of runs whose initial dataset includes a specific condition is 11.2.
Thus, OPEX chose to sample the peracetic acid + kanamycin condition toward the end in 84% of the remaining 39 OPEX runs (39 = 50 − 11). When 50% exploration was added, the diversity increased (Supplementary Figure 15D-E) and the performance of OPEX was optimal (Supplementary Figure 14). Similarly, the diversity can be increased by adding more exploration in the case of expert sampling (Supplementary Figure 15F), but since the sampling strategy of expert sampling was not effective, its performance could not surpass that of random sampling (Supplementary Figure 10). We also performed gene ontology (GO) enrichment analysis on the list of 969 genes (86%) for which OPEX outperformed expert sampling, using DAVID [20]. The resulting enriched biological process GO terms were related to translation, glycolytic processes, cell division, peptidoglycan biosynthetic processes, and regulation of cell shape (with a threshold for the adjusted p-value of 0.01). No enriched biological process GO terms were observed for the 154 genes (14%) for which OPEX did not outperform expert sampling.
Peracetic Acid and Kanamycin Condition as an Outlier
We investigated further why OPEX deprioritized the selection of the peracetic acid + kanamycin condition until later iterations, despite having poor performance in predicting its gene expression profile. We hypothesized that the GP model predicted a gene expression profile for this condition that was similar to the ones in the training dataset. We therefore visualized the predicted gene expression profile of this condition together with the measured gene expression profiles of the other conditions in 2D space by t-SNE (Supplementary Figure 17). The predicted gene expression was close to other conditions in which peracetic acid or kanamycin was present. Since the gene expression was predicted based on the culture conditions, it is reasonable that the model made such a prediction. However, the true gene expression under the peracetic acid + kanamycin condition was close to that of conditions containing a different antibiotic (the cluster in the top right of Fig. 3A; the green cluster in Supplementary Figure 18), which suggests that the model could not determine the gene expression of this condition from the antibiotic and biocide used.
Validating OPEX on all 4,391 Genes
We have shown the superior performance of OPEX when evaluated using the 1,123 genes that meet the minimum sequencing coverage criterion (count per million > 100, as described under RNA-Seq data analysis in the Methods section of the main manuscript), following recommended guidelines for RNA-Seq analysis. This raises the question of whether OPEX also performs well when the set of inactive genes is unknown beforehand. Therefore, we also assessed the performance gain of OPEX compared to random sampling without removing any genes (i.e. using all 4,391 genes), and obtained similar results.
OPEX needed approximately 40% fewer datapoints than random sampling to reach the same accuracy (see Supplementary Table 2 and Supplementary Figure 19), similar to the results of Fig. 2B where only the 1,123 genes were used. We compared OPEX runs that consider all 4,391 genes with OPEX runs that rely on the 1,123 active genes and found that they provide a similar improvement of performance relative to random sampling (Supplementary Figure 19A versus Fig. 2B). The residual difference can be explained by the higher variance in the measurements among replicates when non-active genes are included (Supplementary Figure 21).
OPEX Comparison to Other OED Approaches
We compared the performance of OPEX with three alternative approaches: query by committee using different types of models, query by committee using bootstrapping, and D-optimal experimental design (see Section 2.1 of this file for more information). The performance of each approach was evaluated based on the average MAE of the predictions for the expression of all 4,391 genes (Supplementary Table 1 and Supplementary Figure 19). OPEX with entropy as the utility function reached a better performance than query by committee using different types of models (QBC-Mixed-Models) when compared on the maximum percentage of datapoints saved relative to random sampling (13 versus 14 iterations to reach the MAE that random sampling achieves at iteration 27). With respect to the overall improvement of the MAE relative to random sampling across all iterations, OPEX with entropy achieved 12.7% while QBC-Mixed-Models achieved 11.0%, showing a slight advantage for OPEX with entropy (p-value = 9 × 10^-9). OPEX with mutual information as the utility function performed similarly to QBC-Mixed-Models. The query by committee using bootstrapping (QBC-Bootstrap) and D-Optimal methods did not show a consistent advantage over random sampling. The prediction performance of the four types of models from query by committee was ranked in this order: Support Vector Regression, Gaussian process, linear regression and feed-forward neural network (Supplementary Figure 22).
Exposure to Biocides and Cross-protection to Antibiotics
Our fitness measurements demonstrated that biocide-treated E. coli cells, in the majority of cases, exhibited cross-protection against antibiotics, with a few cases of cross-vulnerability. In 29 out of 40 treatment conditions, biocide treatment increased fitness under antibiotic exposure, while in 4 cases treatment reduced fitness. Cross-protection between biocides and antibiotics has previously been brought to attention by researchers [21][22][23] and regulatory agencies [24][25][26]. Biocides are regularly used as sanitizers in hospitals, households and the food industry, and studies such as this one could help guide regulations to reduce the emergence of antimicrobial resistance.
Interestingly, pre-exposure to all biocides conferred protection against the antibiotic rifampicin (Fig. 3B), which was also the group that formed a cluster in the t-SNE/PCA analysis of the transcriptomics of biocide/antibiotic pairs (Fig. 3A and Supplementary Figure 16). The highest fitness value was observed for the povidone iodine/kanamycin combination, and the lowest for chlorophene/norfloxacin. These two extreme cases were selected for further investigation.
Three Distinct Clusters for All Conditions
We examined cross-resistance of wild-type E. coli to each of the four antibiotics after pre-exposure to one of the ten biocides. Although there are 40 pairs of biocides and antibiotics, the gene expression (GE) profile was often dominated by only one factor (biocide or antibiotic). The dominating factor can be explained by three rules, as evident from the three clusters in Fig. 3A. First, the alcohol biocides (Ethanol, Isopropanol and Chlorhexidine) had a dominating effect on the GE profile regardless of the antibiotic they were paired with. Second, apart from the alcohols, in the majority of cases Rifampicin had a dominating effect on the GE profile regardless of the biocide it was paired with. Third, the choice of biocide determined the GE profile except when Rifampicin was used, as evident from the proximity of the points related to each of the biocides Benzalkonium chloride, Chlorhexidine, Chlorophene, Ethanol, Glutaraldehyde, H2O2, Isopropanol and Peracetic acid on the t-SNE plot (Fig. 3A). The same clusters were also detected in the PCA plot (Supplementary Figure 17). The kanamycin/peracetic acid pair is particularly interesting since it did not follow the general pattern. We further asked whether these clustering patterns can be explained
Supplementary Figures
Supplementary Figure 1: Visualization of the seven synthetic datasets, the distribution of the output of each dataset, and their summary statistics.
Supplementary Figure 3: The effect of noise on the performance of the OED methods compared to random sampling on synthetic datasets 2-7 (A-F), whose skewness values are 2.07, 3.05, 4.13, 5.29, 6.55 and 7.06, respectively. The settings for the other hyper-parameters are as follows: starting size = 300, exploration frequency = 1/6, batch size = 3, number of iterations = 50. The error bar denotes the standard deviation (number of datapoints = 50). The bar represents the mean of 50 runs.
Supplementary Figure 4 (A-F): The effect of exploration frequency on the performance of the OED methods compared to random sampling on synthetic datasets 2-7, whose skewness values are 2.07, 3.05, 4.13, 5.29, 6.55 and 7.06, respectively. The y-axis is (MAE of random sampling − MAE of OPEX) / MAE of random sampling; a positive value means OPEX is more effective. The settings for the other hyper-parameters are as follows: starting size = 300, noise level = 20, batch size = 3, number of iterations = 50. The error bar denotes the standard deviation (number of datapoints = 50). The bar represents the mean of 50 runs.
Supplementary Figure 5:
The effect of starting size on the performance of the OED methods compared to random sampling on datasets 2-7 (A-F), whose skewness values are 2.07, 3.05, 4.13, 5.29, 6.55 and 7.06, respectively. The settings for the other hyper-parameters are as follows: noise level = 20, batch size = 3, minimum distance = 0.2, exploration frequency = 1/6, number of iterations = 50. The error bar denotes the standard deviation (number of datapoints = 50). The bar represents the mean of 50 runs.
Supplementary Figure 6: Performance of adaptive sampling, constrained sampling and greedy sampling on the data whose skewness is 1.1. The settings for the other hyper-parameters are as follows: starting size = 300, noise level = 20, exploration frequency = 1/6, total number of additional datapoints sampled = 160. For Panel B, the minimum distance between datapoints in a batch is 0.2; as the batch size k goes beyond 16, we cannot select k points with pairwise distances greater than 0.2, hence the x-axis runs from 2 to 16 in Panel B. The number of datapoints for each box in the boxplot is 50. Each box represents the interquartile range, consisting of the datapoints between the 25th and 75th percentiles. The whiskers extend to the maximum and minimum values but no further than 1.5 times the interquartile range. The horizontal line within each box represents the median.
Supplementary Figure 10: The performance of the three expert sampling approaches. We used expert sampling to sample one datapoint every k iterations and used random sampling otherwise to introduce exploration. The percentage of exploration in the legends is equal to k/(k+1), e.g. 67% = 2/(2+1). (A) Structural similarity: the culture condition most dissimilar to all the observed conditions was selected in each iteration; the similarity between two culture conditions is quantified by the structural similarity between the biocides used in the two conditions and between the antibiotics used. (B) Mechanism similarity: the mechanism of action of each antibiotic and biocide was considered when selecting the most dissimilar culture condition. (C) Dominance of antibiotic: the more dominant an antibiotic is, the later a culture condition containing that antibiotic was selected.
Supplementary Figure 12:
The distance between the datapoints selected in every two adjacent iterations by expert sampling with various percentages of exploration. The number on the right in each row of panels represents the percentage of exploration used. We sampled one datapoint every k iterations based on expert sampling and used random sampling otherwise to introduce exploration. The percentage of exploration in the legend is equal to k/(k+1), e.g. 66% = 2/(2+1). The number of datapoints for each box in the boxplot is 50. The box plots are defined in the same way as in Supplementary Figure 11.
Supplementary Figure 13: The performance of OPEX using mutual information (A) or entropy (B) as the utility function. The effect of the tradeoff between exploration and exploitation on the performance of OPEX is visualized. We sampled one datapoint every k iterations based on the entropy or mutual information and used random sampling otherwise to introduce exploration. The percentage of exploration in the legends is equal to k/(k+1), e.g. 67% = 2/(2+1).
"Biology"
] |
Optimization of formulation for enhanced intranasal delivery of insulin with translationally controlled tumor protein-derived protein transduction domain
Abstract Intranasal delivery of insulin is an alternative approach to treating diabetes, as it enables higher patient compliance than conventional therapy with subcutaneously injected insulin. However, the use of intranasal delivery of insulin is limited by insulin's hydrophilicity and vulnerability to enzymatic degradation. This limitation makes optimization of intranasal insulin formulations for commercial purposes indispensable. This study evaluated the bioavailability (BA) of various formulations of insulin intranasally delivered with a protein transduction domain (PTD) derived from the translationally controlled tumor protein. The therapeutic efficacy of the newly formulated intranasal insulin + PTD was compared, in in vivo studies with normal and alloxan-induced diabetic rats, to that of free insulin and subcutaneously injected insulin. The BA of insulin in the two new formulations was, respectively, 60.71% and 45.81% of that of subcutaneously injected insulin, while the BA of free insulin was only 3.34%. Histological analysis of tissues, lactate dehydrogenase activity in nasal fluid, and biochemical analysis of sera revealed no detectable topical or systemic toxicity in rats and mice. Furthermore, stability analysis of the newly formulated insulin + PTD to determine the optimal storage conditions revealed that, when stored at 4 °C, the delivery capacity of insulin was maintained for up to 7 d. These results suggest that the new formulations of intranasal insulin are suitable for use in diabetes therapy and are easier to administer.
Introduction
Intranasal administration is a non-invasive route which has been applied to the effective delivery of a broad range of drugs, including small molecule drugs and macromolecular drugs (Hussain, 1998; Fortuna et al., 2014; Avgerinos et al., 2018). A number of small molecule drugs (e.g. butorphanol, estradiol, naloxone, and sumatriptan) have already been marketed as nasal formulations (Mathias & Hussain, 2010). Nasal small molecule drugs can reach the general circulation a few minutes after administration and are thus effective when needed urgently. Nasal peptide- or protein-based drugs (e.g. desmopressin, oxytocin, and nafarelin) have also been marketed or are under development. Most biomacromolecule drugs are extremely susceptible to enzymatic degradation in the gastrointestinal tract; thus, the main administration route remains invasive subcutaneous injection. However, recent studies attempting to find formulations for the effective delivery of those drugs have focused on nasal administration as an alternative route for biomacromolecules. The advantages of intranasal administration center on the anatomical features of the nasal mucosa. When drugs are administered by the intranasal route, they mainly enter through the respiratory region around the inferior turbinate. The respiratory nasal mucosa is highly permeable and vascularized and is lined with columnar epithelium consisting of various cell types, which confers a large surface area for systemic drug absorption (Pires et al., 2009; Grassin-Delyle et al., 2012). The intranasal route avoids the proteolytic digestion that occurs in the gastrointestinal tract as well as first-pass hepatic metabolism, thus avoiding problems which limit drug absorption. Among the various drugs under development for intranasal administration, insulin ranks high (el-Etr et al., 1987; Frauman et al., 1987b; Chandler et al., 1994; Dyer et al., 2002; D'Souza et al., 2005). The current insulin formulation for subcutaneous injection suffers from the inconvenience of injection, which frequently leads to non-compliance. Both the increased patient compliance and pharmacokinetics identical to those of subcutaneously delivered insulin (Frauman et al., 1987a) make this route of administration very attractive in the treatment of diabetes. However, several issues remain that should be addressed before accepting the suitability of intranasal delivery of insulin in diabetes therapy. These include the need for absorption enhancers and for increasing the residence time of the drug in the nasal cavity (Hinchcliffe & Illum, 1999; Arora et al., 2002; Owens et al., 2003). Thus, the use of intranasal insulin still awaits the development of optimized formulations that overcome the noted limitations.
Protein transduction domains (PTDs), also known as cell penetrating peptides, are short peptides that transduce cellular membranes without the intervention of specific receptors (Vives et al., 1997; Schwarze et al., 2000; Jarver & Langel 2004; Gupta et al., 2005). PTDs translocate into cellular compartments without damaging the cellular membrane, which makes them promising carriers for delivering macromolecules. It has been reported that a broad range of molecules, such as DNA, small interfering RNA, proteins, and nanoparticles, translocate into cells mediated by PTDs (Guidotti et al., 2017). Furthermore, the delivered molecules maintain their biological effects both in vitro and in vivo (Morris et al., 2001; Meade & Dowdy 2007; Kim et al., 2011b; Bae et al., 2016). Recently, hydrophilic peptide drugs whose intranasal administration is otherwise limited were effectively delivered into the systemic circulation without damage to the nasal mucosal membrane (Choi et al., 2006; Khafagy et al., 2009; Sakuma et al., 2010). In particular, effective intranasal delivery of insulin was reported to have been achieved by mixing it with PTDs (Khafagy et al., 2013). That study showed that the L-form of octaarginine significantly increased delivery of insulin across the nasal membrane, pointing to the potential of PTDs as absorption promoters in the development of intranasal insulin.
We previously reported that the 10 amino acid sequence (MIIYRDKLISH) at the N-terminus of human translationally controlled tumor protein (TCTP) acts as a PTD (Kim et al., 2011b). Further studies have been conducted to develop or identify other peptides derived from TCTP-PTD with improved transduction activity (Kim et al., 2011a; Bae & Lee, 2013; Bae et al., 2018). Thus, we identified L-TCTP-PTD 13 (MIIFRALISHKK) and L-TCTP-PTD 13M2 (MIIFRLLASHKK) as modified TCTP-PTDs with enhanced delivery capability for insulin when administered by the nasal route. Although we confirmed the therapeutic effectiveness of the intranasally delivered insulin with PTD, several needs remained, including sustaining the drug effect and improving storage conditions to avoid protein aggregation. In this study, we tried to address these needs and to design and optimize TCTP-PTD-based formulations for nasal delivery of insulin.
Materials and animals
Modified TCTP-PTD with N-terminal acetylation and C-terminal amidation was synthesized by Peptron Co., Ltd. (Daejeon, Korea). All other chemicals were purchased from Sigma-Aldrich (St. Louis, MO). Male Wistar rats and ICR mice (5 weeks old) were purchased from Young Bio Co., Ltd. (Seongnam, Korea). They were housed under 12 h light/dark conditions with controlled humidity and temperature and free access to food and water. For intranasal administration, animals were anesthetized by intraperitoneal injection of sodium pentobarbital and the formulated insulin + PTDs were administered into the right nostril using a pipette. All animal experiments were approved by Ewha Womans University's Institutional Animal Care and Use Committee.
Measurement of blood glucose in rats
Male Wistar rats were fasted overnight with free access to water. Those with a fasting glucose level in the range of 90-120 mg/dL were selected for further experiments. Formulated insulin + PTDs were intranasally administered to the anesthetized rats. Blood samples were collected from the rat tails and their glucose levels were measured using an Accu-Chek glucose meter (Roche Diagnostics, Seoul, Korea). To generate the rat model of diabetes, alloxan (100 mg/kg, dissolved in 10 mM sodium citrate buffer, pH 3.2) was injected intraperitoneally into normal rats. After 5 d, rats with a fasting blood glucose level in the range of 230-300 mg/dL were fasted overnight with free access to water, then anesthetized, followed by intranasal administration. Blood glucose levels were measured in the same way.
Lactate dehydrogenase (LDH) activity analysis
Fifteen minutes after intranasal administration of each insulin formulation or 5% sodium taurodeoxycholate (NaTDC), the nasal cavities were flushed out with 1 mL PBS. LDH activity in the washings was measured using a CytoTox 96 assay kit (Promega, Madison, WI) according to the manufacturer's protocol.
Plasma insulin measurement
After intranasal administration, blood was collected from the rat tails at the desired time points and centrifuged at 5000 g for 25 min to obtain plasma. Plasma insulin concentrations were measured by enzyme-linked immunosorbent assay (ELISA; Mercodia, Uppsala, Sweden) according to the manufacturer's protocol.
Toxicity test using mice
Male ICR mice were intranasally administered the insulin + PTD formulation once a day for 10 d. LDH activity was measured as described above. The collected blood was centrifuged for 25 min at 5000 g to obtain plasma. Blood urea nitrogen (BUN), creatinine (CRE), aspartate aminotransferase (AST), and alanine aminotransferase (ALT) levels were determined using a biochemical analyzer (AU680; Beckman Coulter, Tokyo, Japan). After sacrifice, tissues were dissected and fixed in 4% formaldehyde overnight. Paraffin-embedded tissues were sectioned with a microtome and stained with hematoxylin and eosin (H&E) solution, and histological analysis was performed under the microscope.
Statistical analysis
Data were analyzed using Prism 5 software (GraphPad Inc., La Jolla, CA) and presented as mean ± standard deviation (s.d.). p < .05 by Student's t-test or one-way analysis of variance (ANOVA) followed by the Newman-Keuls multiple comparison test was considered statistically significant.
Selection of formulation for intranasal delivery of insulin with PTD
Modified peptides derived from TCTP (L-TCTP-PTD 13, MIIFRALISHKK) have been reported to be effective carriers for intranasal delivery of insulin. Double modification of the residues at positions 6 and 8 of TCTP-PTD 13 (L-TCTP-PTD 13M2, MIIFRLLASHKK) improved intranasal delivery without toxicity in mice (Bae & Lee, 2013; Bae et al., 2018). In addition, an optimized formulation for TCTP-PTD-based nasal delivery of insulin has been identified; this formulation, which used arginine hydrochloride (ArgHCl) as an aggregation suppressor and sucrose as an osmolyte, showed improved bioavailability (BA) and significant blood glucose-lowering effects in mice (Kim et al., 2019). However, ArgHCl has been reported to exert dual effects on protein aggregation (Smirnova et al., 2015; Borzova et al., 2017); thus, a further optimization study considering the concentrations of sucrose and PTD at various pH values was conducted (Table 1).
To evaluate the pharmacodynamics of each formulation, blood glucose levels in normal rats after intranasal administration were monitored for 180 min (Figure 1(a)). To compare the blood glucose-lowering effect of each formulation, blood glucose at 0 min was set to 100% and relative blood glucose levels were calculated (Figure 1(b)). Among the eight formulations, 3-3 and 3-5 were the two that showed blood glucose-lowering effects in rats. Both formulations exerted up to a 60% reduction, which is comparable to the efficacy of subcutaneously injected insulin, the conventional route for insulin therapy. To determine whether these formulations induce nasal membrane damage, LDH activity in the nasal fluid was measured (Figure 1(c)); this intracellular enzyme activity has been used as an indicator of leakage of cytosolic constituents (Shao et al., 1992). While 5% NaTDC, the positive control for nasal membrane damage, showed significantly increased LDH activity, the 3-3 and 3-5 formulations did not show any detectable increase in LDH activity, indicating no toxicity from either formulation. Thus, we selected the two formulations (3-3 and 3-5) which showed efficacy comparable to subcutaneously injected insulin with marginal toxicity for intranasal delivery of insulin and performed further analysis.
In vivo BA of insulin in formulated insulin + PTDs
To evaluate whether the formulations improve intranasal delivery of insulin, we measured plasma insulin levels after intranasal administration in normal rats (Figure 2(a) and Table 2). When insulin (5 IU/kg) alone was administered intranasally, insulin was hardly detectable in plasma, but when delivery was mediated by PTD, plasma insulin markedly increased in normal rats even at a low insulin dose (1 IU/kg). Furthermore, both the 3-3 and 3-5 formulations delivered insulin through the nasal epithelium more effectively than free insulin + PTD. The insulin BA in each group was calculated relative to subcutaneously injected free insulin (taken as 100%). The BA of insulin from the 3-3 and 3-5 formulations was 60.71 ± 7.48% and 45.96 ± 6.12%, respectively, while the value for free insulin + PTD without formulation was 38.66 ± 4.51%. Next, we analyzed the hypoglycemic effect of each formulation in the rat model of alloxan-induced diabetes to confirm the therapeutic effect of formulated insulin + PTD (Figure 2(b) and Table 3). The intranasally administered insulin dose (2 IU/kg) was based on our previous study, which showed a measurable hypoglycemic effect (Bae et al., 2018). Intranasally administered free insulin likewise showed no blood glucose-lowering effect, but formulated insulin + PTD remarkably lowered blood glucose after intranasal administration; the hypoglycemic effect of each formulation was maintained for 240 min. The pharmacological availability (PA) value in each group was calculated relative to subcutaneously injected free insulin (taken as 100%). The PA values of the 3-3 and 3-5 formulations were 49.33 ± 2.71% and 37.52 ± 5.64%, respectively, while the value for free insulin + PTD was only 4.01 ± 1.99%. Collectively, these results indicate that the 3-3 and 3-5 formulations improve the intranasal delivery and therapeutic efficacy of insulin + PTD.
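For reference, relative BA (and, analogously, PA) of this kind is conventionally computed as a dose-normalized AUC ratio. The sketch below uses the trapezoidal rule and assumes both curves are sampled at the same time points; whether the authors applied exactly this dose correction is an assumption.

```python
import numpy as np

def relative_availability(t, c_nasal, dose_nasal, c_sc, dose_sc):
    """Dose-corrected availability of the nasal formulation relative to
    subcutaneous insulin (taken as 100%), from concentration-time curves.
    For PA, the curves would be blood glucose reductions instead."""
    auc_nasal = np.trapz(c_nasal, t)   # trapezoidal AUC, nasal route
    auc_sc = np.trapz(c_sc, t)         # trapezoidal AUC, s.c. route
    return (auc_nasal / dose_nasal) / (auc_sc / dose_sc) * 100
```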
In vivo toxicity analysis of formulated insulin + PTDs
The use of intranasal delivery has also been limited by toxicity to the nasal membrane caused by enzyme inhibitors and nasal permeation enhancers (Fortuna et al., 2014). To evaluate damage to the nasal membrane, normal mice were intranasally administered insulin once a day for 10 d and sacrificed, and histological analysis of each nasal mucosa was conducted after H&E staining (Figure 3(a)). While the NaTDC-treated group showed destruction of the nasal mucosa, both the 3-3 and 3-5 formulations showed histology similar to that of the 0.9% NaCl-treated group, indicating that these formulations did not induce topical toxicity. To evaluate systemic toxicity in mice, the other major organs were examined and no unusual pathological changes were observed in any experimental group (Figure 3(b)). Moreover, the low LDH activities in the groups treated with either formulation also confirmed minimal toxicity to the nasal membrane (Figure 3(c)). Serum biochemical analysis was also performed on blood samples from the same experimental sets (Figure 3(d)). AST and ALT, indices of hepatocyte damage, were not altered by formulated insulin + PTD. BUN and CRE, indicators of kidney damage, also did not change in the case of the 3-3 and 3-5 formulations, suggesting that the mice maintained normal health during the intranasal administration. All these results confirm the safety of both formulations in mice, indicating their suitability for use in therapy.
Figure 1. (a) Blood glucose in normal rats after intranasal administration; the insulin s.c. data in both graphs originated from the same experiment. (b) Blood glucose levels relative to 0 min; as in (a), the insulin s.c. data in both graphs originated from the same experiment. (c) OD 490 nm indicates LDH activity in the nasal wash solution; 5% NaTDC, known to be toxic, was used as the positive control. ***p < .001 by Student's t-test compared to the untreated group. Data are presented as mean ± s.d.
Figure 2. Pharmacokinetic and pharmacodynamic analysis of intranasal insulin + PTD formulations. (a) Normal rats were administered as indicated, followed by plasma insulin measurement. The insulin dose was 5 IU/kg for the nasal route, 0.25 IU/kg for s.c. injection, and 1 IU/kg for administration with PTD (n = 5-8). (b) Blood glucose in alloxan-induced diabetic rats following each administration. The insulin dose was 2 IU/kg for the nasal route, 1 IU/kg for s.c. injection, and 2 IU/kg for administration with PTD (n = 5-7). Data are presented as mean ± s.d.
Stability analysis with in vivo study
We also assessed the stability of the formulations and the requirements for their storage until use. We focused the stability studies on the 3-3 formulation because the pharmacological availability and BA of insulin in alloxan-induced diabetic rats treated with the 3-3 formulation were higher than in those treated with 3-5 (Tables 2 and 3). Since the length and temperature of storage are important factors that affect the viability of the formulation, we evaluated the stability of insulin + PTD in the 3-3 formulation at two temperatures: room temperature and 4 °C. We intranasally administered to normal rats insulin + PTD in the 3-3 formulation stored under the two conditions, determined plasma insulin levels by ELISA and evaluated the amounts of insulin delivered (Figure 4(a)). At both temperatures, prolonged storage resulted in decreased delivery of insulin in rats. Storage at room temperature maintained the levels of insulin delivered for up to 48 h, while storage at 4 °C extended this period to 7 d. As a pharmacokinetic parameter, the area under the curve (AUC) for each group of rats was calculated (Figure 4(b)). Consistently, the AUC value for the plasma insulin level-time curve remained similar for up to 48 h at room temperature; however, 72- and 96-h storage decreased the AUC value by 45% and 77%, respectively. When stored at 4 °C, the AUC value was maintained at more than 85% for up to 7 d, but storage for more than 28 d led to a drastic decrease. Thus, stable insulin delivery can be expected when insulin + PTD in the 3-3 formulation is stored for up to 48 h at room temperature or up to 7 d at 4 °C. We also confirmed the time-dependent decrease in the amount of insulin in the 3-3 formulation with PTD by Coomassie brilliant blue staining of supernatants from the stored formulation (Figure 4(c)). Unlike insulin, the amount of PTD did not change for up to 96 h at room temperature and 42 d at 4 °C. At the longest storage time at each temperature, we observed the formation of an opaque gel on the surface of the container. To identify the constituents of the gel, we performed Coomassie brilliant blue staining on sonicated gel samples and found them to be mostly insulin. This suggests that insulin aggregates on prolonged storage, which results in reduced delivery of insulin. Based on these findings, insulin + PTD in the 3-3 formulation should be stored at 4 °C rather than at room temperature.
Conclusions
In this study, we identified optimized formulations of insulin + PTD for enhanced intranasal delivery of insulin to rats. Two formulations (3-3 and 3-5) proved most suitable based on pharmacokinetic, pharmacodynamic and toxicity studies, including measurement of plasma insulin levels in normal rats. We also confirmed that both formulations improved the delivery of insulin and its blood glucose-lowering effects with minimal toxicity in diabetic rat models, and we established optimal storage conditions for the formulations. These findings should promote nasal insulin delivery for the treatment of diabetes and support the use of TCTP-PTDs as a suitable carrier for the delivery of a variety of other macromolecules.
Disclosure statement
No potential conflict of interest was reported by the authors.
"Medicine",
"Engineering"
] |
Short review on aggressive behaviour: genetic and biological aspects and oxytocin relevance
In this mini-review we describe the main genetic, biological and mechanistic aspects of aggressive behaviour in human patients and animal models. Violent behaviour and impulsive traits appear to have a multifactorial substrate, determined by genetic and non-genetic factors. Aggressivity is regulated by brain regions such as the amygdala, which controls neural circuits for triggering defensive, aggressive or avoidant behaviour; other brain structures, such as the anterior cingulate cortex and prefrontal cortex regions, can modulate circuits involved in aggression. Regarding the genetic aspects, we could mention mutations in monoamine oxidase or polymorphisms of genes involved in the metabolism of serotonin, such as tryptophan hydroxylase. Also, besides the low levels of serotonin metabolites, which seem to be associated with impulsive and aggressive traits, there is good evidence that deficiencies in glutamate transmission, as well as changes in testosterone, vasopressin, hypocholesterolemia or oxytocin, could be related to aggressive behaviour. Regarding oxytocin, in the last chapter we present the controversial results from the current literature on the various effects of oxytocin administration on aggressive behaviour, considering the increased interest in understanding the role of oxytocin in the main neuropsychiatric disorders.
Moreover, the ethological view of the concept of aggression is also interesting. From this perspective, aggression refers to the interaction and evolution of animals in natural environments. Ethologists analyze human aggression from the perspective of animal aggression; biologically, humans are a highly evolved animal species. In the animal world, aggression serves three functions: the equal distribution of the species, the selection of the fittest, and the protection of the young and defenceless [4].
In animals, aggression can involve physical violence, such as hitting, biting or pushing, but most conflicts are resolved through menacing and intimidating stances or blows that do not cause physical damage. Stereotyped signals may include threatening and hostile facial expressions, vocalizations such as birdsong, and the release of chemicals [5]. In animals, aggressive behaviour might confer biological advantages: aggression can secure territories and resources of food and water, provide opportunities for mating and for self-defense or the protection of offspring, and lead to the natural selection of more vigorous animals [6]. Also, an individual in a group is more likely to become aggressive when the other members exhibit similar aggressive behaviours [7].
By analogy with aggressive behaviour in animals, human aggressiveness retains an important role in the survival instinct. Still, many of the roles of aggressive behaviour are no longer valid today. Also, people can channel aggressive energy by sublimating it into work, play, sports competition or art. Given the specific energy of this drive, aggression seems to be unavoidable, as it may manifest spontaneously regardless of situational particularities. Aside from its obvious internal nature, aggression can be decreased or maintained over the course of development in the context of culture and environment [8,9].
In this way, aggression is a characteristic of all living beings throughout the evolutionary scale, having its origin in primitive structures of the central nervous system. It is a form of destructive behaviour intended to cause damage, whether material, psychological, moral or mixed. The act of aggression may be directed against objects, such as a house, furniture or kitchen utensils, against human beings, or against one's own self, as autolytic behaviour. Aggression can thus take several forms. Aggression against other persons is referred to as hetero-aggression, which can be physical or verbal; its milder variant is hostility and its extreme variant is murder. Classic aggression is directed against objects and things in the environment and aims to destroy them, while autolytic behaviour directed against one's own self is called self-aggressiveness, which presents a broad spectrum of manifestations, from self-hostility and self-sabotage to self-mutilation, suicidal tendencies and/or completed suicide. When it occurs as an impulsive reaction, the manifestation of aggressiveness serves to release tension, to psychologically defuse an individual with an excessively high psychoenergetic burden. In some cases, planned hetero-aggressiveness can indicate the presence of psychopathic personality traits, and self-aggression can be an indicator of depression [10].
Also, certain sources of aggression are related to the personality traits of the individual, while others are related to external conditions, such as frustration, which is one of the most common triggers of aggression, along with direct attack or provocation (verbal or physical), pain (in both its physical and moral forms), heat and crowding. The most common and well-known forms of aggressive behaviour with social and community impact are delinquency and crime. Thus, aggression is behaviour oriented toward producing damage, injury, prejudice or physical harm to objects, other people or one's own self. Aggressiveness can be manifested in different forms and intensities, from simple ideas and thoughts, physical arousal, anger, competitive traits and dominance to verbal aggression and serious acts of violence.
In this way, research on adult aggressive behaviour has demonstrated two types of aggression: proactive aggression, which is usually calculated, instrumental and aimed at obtaining rewards, and reactive aggression, which is generally impulsive, arises as a stress-adaptation response to unexpected events, and can be potentially hazardous [11].
Epidemiology
There are relatively few studies in the literature regarding general aggressive behaviour. In a study from 2000 performed on 1269 patients with various psychiatric disorders, an overall rate of aggressive behaviour of 13.7% was reported. Aggressive behaviour occurred most often in patients with bipolar disorders (2.81%) and schizophrenia (1.96%). Moreover, the patients at increased risk were those under 32 years of age, with episodes of psychosis or substance abuse [12].
Regarding the way people manifest aggressive behaviour, it is known that this behaviour is exhibited to various degrees, varying by gender, age and cultural aspects, as well as by biological and genetic peculiarities or the presence of certain disorders.
Thus, in children there is an almost constant degree of aggressiveness, manifested either as healthy assertive, competitive behaviour or as a pathological trait frequently involving violent behaviour, delinquency and criminality. It is known that in boys, and in males in general, the level of aggressivity is higher than in females and is primarily directed towards persons of the same age. The predisposing and precipitating factors for aggressive behaviour in children differ depending on their age. In young children, lack of attention and physical discomfort can cause violent explosions of anger. Later, insults, criticism or social comparison become triggers for aggressive behaviour, while in adolescents frustration may be hidden in a masked form, such as breaking rules, stealing, lying, cheating or the need for social dominance [13].
Genetic aspects
The genetic substrate also has a particularly important role in the expression of aggressive behaviour and in the presence or absence of the personality traits associated with aggression. Twin and adoption studies suggest that heredity is involved in aggressiveness in varying proportions (e.g. from 44% to 72%) [14].
However, no single gene has been clearly associated with this type of behaviour; rather, a polygenic substrate has been proposed, formed from multiple genes that regulate the activity of neurotransmitters such as serotonin, or genes responsible for the structural components of brain areas critical for aggression. This genetic polymorphism may contribute to individual differences and susceptibility to aggressive behaviour. Mutations in the monoamine oxidase (MAO) gene, which are associated with altered catecholamine metabolism, and polymorphisms of genes involved in serotonin metabolism, such as tryptophan hydroxylase and the 5-HT1B, 5-HT2A and 5-HT1A receptors, have been identified [15]. One allele of the tryptophan hydroxylase gene has been associated with suicide attempts in violent delinquents and with aggressive behaviour in some patients with personality disorders [16].
The genetic predisposition for aggressiveness also appears to be deeply affected by polymorphic variants of the serotonergic system that affect the level of serotonin in the central and peripheral nervous system, its biological effects, its production rate, and its synaptic release and degradation. Functional polymorphisms of monoamine oxidase A (MAOA) and of the serotonin transporter (5-HTT) are of particular importance, considering the connections between these polymorphic variants and anatomical changes in the limbic system of aggressive persons. Furthermore, functional variants of the 5-HTT and MAOA genes can modulate how environmental factors influence aggressive traits [17].
Biological mechanisms for aggressiveness
The neurobiological basis of aggressive behaviour consists of a complex of molecules and neural circuitry designed to convert motivation into action. Exposure to frustrating stimuli such as abuse, frustration or hostility can stimulate brain regions that process emotional and cognitive stimuli and increase psychic excitability. It has been shown that impulsiveness and violence are associated with specific brain regions, such as the limbic system. The brain structures considered essential in triggering aggressive behaviour are the amygdala, the ventromedial hypothalamus, the limbic system, the motor cortex and the orbitofrontal cortex [18]. In patients with dementia, the level of agitation and aggression is directly proportional to the degree of atrophy in brain areas key for aggressive behaviour, such as the frontal lobe, amygdala, cingulate gyrus or hippocampus. The amygdala responds to threats and provocative stimuli by stimulating the motor cortex, which then initiates the motor component of the aggressive act [18].
The emotional component is also associated with the cingulate cortex, which analyzes negative emotional stimuli. The amygdala has connections with the hippocampus and is involved in releasing factors that change the homeostasis of the body (e.g. to prepare it for action). The system limiting aggressive behaviour originates in the prefrontal cortex, in particular the orbital prefrontal cortex, which inhibits the limbic regions involved in generating aggressive behaviour [19]. In experimental animals, stimulating the ventromedial hypothalamus causes aggressiveness and inhibits the structures responsible for the natural inhibition of aggression [20].
Further evidence is provided by studies of borderline personality disorder, which is manifested through aggressive behaviour, impulsivity, physical aggression directed towards others, acts of self-mutilation and family violence, and which shows changes in the serotonergic system of these patients. A link between temporal lobe epilepsy and violent and impulsive behaviour has also been described, as has an association between aggressive behaviour and organic brain changes in patients with a history of head injury [21].
The way in which aggressive behaviour is expressed also depends on specific neurotransmitter systems, of which the most studied in relation to aggression is the serotonin system. A decrease in serotonergic transmission, which can be induced by inhibiting serotonin production or by antagonizing its effects, reduces the negative consequences, or the perceived relevance, of punishment for a certain type of behaviour. Conversely, restoring serotonergic activity through the administration of L-tryptophan (a serotonin precursor) or of drugs that increase serotonin levels could strengthen the behavioural effects of punishment and could perhaps help recover control over violent tendencies [22]. As serotonin seems to facilitate the inhibitory function of the prefrontal cortex, insufficient serotonergic activity can lead to increased aggression. Decreased serotonin levels, as demonstrated by low levels of a serotonin metabolite, have been associated with impulsiveness and violent behaviour: studies of the serotonergic neurotransmitter system show that the serotonin metabolite 5-hydroxyindoleacetic acid (5-HIAA) is found in low concentrations in the cerebrospinal fluid in depression and may be accompanied by violence and suicidal behaviour [23].
Other authors have demonstrated a correlation between the level of 5-hydroxyindoleacetic acid in the cerebrospinal fluid and impulsive and violent behaviour: a low concentration of this serotonin metabolite is found in people with aggressive behaviour, and low levels of 5-HIAA also appear in delinquents and in people with a history of violence [24].
Low levels of serotonin metabolites thus appear to be associated with impulsive and aggressive traits, and serotonin depletion with increased aggressiveness and impulsivity. A 2013 study on transgenic mice showed that a chronic reduction in serotonin levels is associated with increased aggressiveness, and pharmacological intervention on serotonergic neurons aimed at suppressing neurotransmitter discharge also resulted in increased levels of aggression [25]. These data confirm that low serotonin activity lowers the threshold for aggressive behaviour and support the idea of a direct association between low serotonin levels and increased aggressiveness.
Another relevant aspect could be certain childhood experiences, such as trauma or abuse, in relation to the emergence of serotonergic system abnormalities. In this way,
some studies have shown that sexually abused women experience genetic changes associated with a low-activity allele of monoamine oxidase A, a gene involved in serotonin metabolism [26]. Moreover, these women subsequently show increased rates of antisocial traits. Changes in the serotonin system may thus actively contribute to strengthening hostile, aggressive and impulsive personality traits, especially upon exposure to negative experiences [26]. Some researchers have also reported an interaction between genetic, environmental and gender factors, especially during the critical early stages of development, which causes pathological manifestations reflecting changes in serotonin homeostasis. The involvement of the serotonin system in aggressive behaviour could thus be the outcome of various homeostatic imbalances of the 5-HT system [27].
Some clinical studies have suggested that increased reactivity of the noradrenergic and dopaminergic systems may facilitate aggression. Reduced levels of norepinephrine may be responsible for triggering excessive irritability in response to stressful, unpredictable factors. Biological, biochemical and genetic investigations of MAO-A, the enzyme responsible for the metabolism of catecholamines, have also shown that low levels of MAO-A activity are associated with a susceptibility to react violently and with impulsive behaviour [28,29]. In males, antisocial characteristics appear to be negatively correlated with MAO activity [30].
The involvement of glutamate in aggressive behaviour has also been investigated in several studies, as some theoretical models indirectly link impulsivity and aggression to glutamate. Glutamate is the most abundant excitatory neurotransmitter in the vertebrate nervous system; it is released from presynaptic vesicles after stimulation of presynaptic neurons and acts on specific receptors, the N-methyl-D-aspartate (NMDA) and α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors [31]. Besides preclinical studies suggesting that stimulation of central glutamate receptors could increase aggressive behaviour [32], in some experimental animals the administration of glutamate directly into the central gray matter induces defensive hostility, while treatment with a glutamate antagonist such as kynurenic acid suppresses this defensive-aggressive response [33]. In fact, a considerable number of glutamatergic neurons lie within the projections between the anteromedial hypothalamus and the central gray matter, which could represent the structural support for the link between glutamate and aggressiveness [32]. These data are supported by a study published in 2013 showing a positive correlation between CSF glutamate levels and levels of impulsive aggressiveness in human patients [34]. Regarding the involvement of the endocrine system in aggressive behaviour, testosterone, and mainly its highly active metabolite dehydroepiandrosterone (DHEA), is believed to be implicated. Testosterone levels have been shown to be higher in people with aggressive behaviour, as in convicts who have committed violent crimes [35]. High levels of testosterone also occur in sports teams with an aggressive, dominant component and in various confrontations [36]. Testosterone acts centrally, for example through activation of the amygdala, triggering aggressiveness, while peripherally it increases muscle mass to support specific motor behaviour. A large number of receptors for androgens and estradiol are found in neurons of the prefrontal area, the hypothalamus and especially the amygdala. The effect of testosterone on the brain begins in early embryonic life, leading to anatomical and organizational changes that produce the masculinization of the brain. Antiandrogenic agents appear to reduce the level of aggressiveness [37].
In this context we should also mention the relevance of the hypothalamic-pituitary-adrenal (HPA) axis, as well as the importance of cortisol and its relation to the serotonergic system, which antagonize the effects of testosterone [38,39]. A major role in increasing or decreasing impulsivity seems to be played by imbalances between testosterone and serotonin, or between testosterone and cortisol (e.g. high levels of testosterone and low concentrations of cortisol), which is explicable considering the reduced activity of the neural circuitry for control and emotional self-regulation.
Hypocholesterolemia has also been associated with aggressive behaviour and with violent suicide attempts, while the administration of lipid-lowering drugs correlates with increased irritability [40]. Experiments on primates have shown that reducing cholesterol through diet can lead to increased aggressiveness and reduced central serotonin activity, which is associated with the risk of violence in both humans and animals [41].
In addition, there are studies demonstrating that vasopressin increases aggressive behaviour in most mammals, including humans. This effect could be linked to the aforementioned serotonin system, through a modulating effect on aggressive behaviour [42]. Some data also support a correlation between CSF vasopressin levels and a personal history of aggressiveness [43]. As already mentioned, vasopressin exerts disinhibitory actions via the serotonin system, an aspect somewhat supported by the fact that vasopressin antagonists can reduce aggressive behaviour [44].
Oxytocin and aggression
Another peptide related to vasopressin is oxytocin; the vasopressin/oxytocin signalling systems are involved in a variety of functions such as reproduction, immunity and thermoregulation, with a focus on social manifestations connected with affiliation and aggression [42]. Lately, there has been increasing interest in understanding the role of (especially intranasal) oxytocin in major neuropsychiatric disorders such as autism [45], schizophrenia [46], anxiety [47], depression, Prader-Willi syndrome [48] and even psychopathy [49], and in the variety of behaviours mediated by the relevant central areas, including aggression [50][51][52].
It has been demonstrated several times, for instance by Bosch et al. in 2005, that oxytocin is critically involved in the regulation of maternal aggression [53]. These authors found a highly significant correlation between the aggressive behaviour displayed as part of maternal defence in specific behavioural testing and oxytocin release from both the paraventricular nucleus and the central nucleus of the amygdala. Mechanistically, the most important aspect of oxytocin modulation of maternal aggressive behaviour appears to be differences in the central release patterns of oxytocin [53]. In a subsequent publication in 2013, the same group stressed the aforementioned connection between oxytocin and arginine vasopressin in modulating maternal aggression in rats, developing a further hypothesis on the common role of these neuropeptides in anxiety perception and how this can be correlated to maternal aggression [54], also considering recent reports on the significant effects of intranasal oxytocin administration in anxiety patients [47].
Interestingly, it was also recently shown that in highly trait-aggressive people the administration of oxytocin can result in increased aggression towards a close person (e.g. an intimate or romantic partner), possibly as a way of maintaining the current status/relationship [55]. In 2014, the group of Nathan DeWall showed, in a double-blind, placebo-controlled study of highly trait-aggressive subjects, that oxytocin (24 International Units) actually stimulates aggression only in subjects prone to physical aggression (for example, those exhibiting behaviours such as hitting or throwing objects) [55].
Similar aspects have been shown in various other species, such as dogs, fish and piglets. The Topal group showed in 2015 that dogs receiving intranasal oxytocin displayed a less friendly first reaction, and individual differences in aggression, towards an unfamiliar experimenter, as compared with placebo, in a behavioural task designed for dogs called the Threatening Approach Test [50].
In piglets, which are considered much closer to humans in brain anatomy, growth and development than classical rodent models [56], the group of Rault et al. in Australia showed that neonatally oxytocin-treated piglets received and performed more aggressive behaviours than controls, possibly through reduced HPA axis activity [51].
In fish such as Neolamprologus pulcher, a cooperatively breeding cichlid, the Hellmann group showed in 2015 that, after a subordinate individual was temporarily removed, it
was more likely to receive increased aggression when returned to the group if it had been treated with isotocin, a fish analogue of oxytocin, rather than with saline [52]. However, there are still many controversial results in this area of research; other groups, for example, failed to find any significant effects of intranasal oxytocin administration on aggressive behaviour in subjects with antisocial personality disorder. In a recent study from September 2015, intranasal oxytocin was found to generate only small effects on aggression, unrelated to the administered dose, as judged by a well-validated laboratory task of human aggression, the point subtraction aggression paradigm [57].
Furthermore, it has been shown that in non-lactating female rats (thus outside the classical and well-known perspective that oxytocin is implicated in defensive maternal aggression [53,54]) there is a surprising potential for an anti-aggressive effect of synthetic oxytocin administration, as determined through an original behavioural approach, the novel female resident-intruder test for spontaneous female aggressive behaviour [58]. In fact, previous correlative studies showed a significant correlation between reduced oxytocin concentrations in the cerebrospinal fluid of some patients and aggressive behaviour [59], while some authors strongly believe that intranasal administration of oxytocin promotes pro-social behaviours [60].
Some authors state that these differing effects of oxytocin on aggression, and on other related higher behaviours, could be explained by the different routes of oxytocin administration (peripheral vs. intranasal), by different dosages, or by different experimental setups (e.g. whether out-group or in-group members are examined, or what the individual's baseline level of aggressive responding is) [50][61][62][63].
Conclusions
Violent behaviour and impulsive traits thus appear to have a multifactorial substrate determined by genetic and non-genetic factors. Aggressiveness is regulated by brain regions such as the amygdala, which controls the neural circuits triggering defensive, aggressive or avoidant behaviour, while dysfunction of certain neural circuits responsible for emotional control seems to induce violent behaviours. Besides the amygdala, other brain structures, such as the anterior cingulate cortex and prefrontal cortex regions, seem to modulate circuits involved in aggressive behaviour. Regarding the genetic aspects, we can mention mutations in monoamine oxidase and polymorphisms of genes involved in serotonin metabolism, such as tryptophan hydroxylase. Finally, besides the low levels of serotonin metabolites that appear to be associated with impulsive and aggressive traits, alterations in glutamate, testosterone, vasopressin, cholesterol (hypocholesterolemia) and oxytocin could be related to aggressive behaviour. | 5,492.2 | 2016-03-01T00:00:00.000 | [
"Biology",
"Psychology"
] |
Mobility Management of Unmanned Aerial Vehicles in Ultra–Dense Heterogeneous Networks
The rapid growth of mobile data traffic will lead to the deployment of Ultra-Dense Networks (UDN) in the near future. Various networks must overlap to meet the massive demands of mobile data traffic, causing an increase in the number of handover scenarios. This will subsequently affect the connectivity, stability, and reliability of communication between mobile devices and serving networks. The inclusion of Unmanned Aerial Vehicle (UAV)-based networks will create more complex challenges due to their different mobility characterizations. For example, UAVs move in three dimensions (3D), with dominant line-of-sight communication links and faster mobility speeds. Assuring steady, stable, and reliable communication during UAV mobility will be a major problem in future mobile networks. Therefore, this study provides an overview of mobility (handover) management for connected UAVs in future mobile networks, including 5G, 6G, and satellite networks. It provides a brief overview of the most recent solutions that have focused on addressing mobility management problems for UAVs. At the same time, this paper extracts, highlights, and discusses the mobility management difficulties and future research directions for UAVs and UAV mobility. This study serves as part of the foundation for upcoming research related to mobility management for UAVs, since it reviews the relevant knowledge, defines existing problems, and presents the latest research outcomes. It further clarifies handover management of UAVs and highlights the concerns that must be solved in future networks.
Introduction
The rapid growth of wireless technology has caused a dramatic shift in people's daily lives. Mobile-connected devices, connected applications, Machine to Machine (M2M), Internet of Things (IoT), and other services are steadily increasing. IoT connects almost everything throughout numerous environments. With its evolution, it will be the most utilized technology and the largest telecom market. IoT marks a new era of total automation and offers efficient solutions for several fields. Since it has become extremely simple to connect several devices in different locations, its impact on daily life has been tremendous. Various industries are currently demanding wide-area communication, especially for numerous operations that are performed indoors [1][2][3][4][5]. These factors will further lead to the massive growth of mobile data traffic.
The transmission and reception of signals by antenna systems are critical components of wireless technology (such as IoT, autonomous aerial vehicles, and wireless communication systems). In conclusion, previous research [28][29][30][31][32][33][34] did not thoroughly examine the issue of UAV organization in cellular networks.
UAVs are aircraft that can autonomously fly without human guidance. This type of aircraft employs radio waves to navigate and present a route map. UAVs range in size, weight, shape, and engine. They are employed for specific purposes such as surveillance, gaming, spying, warfare, and presentations. As a result, they are furnished with technical gadgets such as cameras and Global Positioning System (GPS) sensors, both of which are necessary for monitoring and tracking. UAVs have a significant advantage in this area since they can immediately register and monitor any region or item without requiring additional infrastructure.
Based on the 3rd Generation Partnership Project (3GPP) TS 22.261, governments and corporate sectors are expected to use UAVs in a wide range of applications. The key issues of the future 6G network will be latency and dependability. UAVs will require more precise position information as well as protection against theft and fraud. The information transferred between UAVs and their control units must be secure. The next-generation mobile network must also be resistant to spoofing and support non-repudiation to fully integrate UAVs. Unmanned Aerial System Traffic Management (UTM) is a centralized system for identifying, tracking, and authorizing UAVs and controllers. The UTM stores all identifications and metadata for UAVs and UAV controllers. The data interchange protocols used by the UTM and mobile network centers, particularly the Access and Mobility Management Function (AMF), have permitted the confirmation and authorization of UAVs within a zone. Including UAVs in this flexible network will increase the AMF's computational load. The use of a UAV-mounted BS (UxNB) to extend coverage is specified in 3GPP references. The UxNB may connect to a 5G core as a BS on the ground via a wireless backhaul link. The UxNB can be used in various situations (such as emergencies, temporary coverage for UEs, and hotspot events) due to its quick setup and vast range of capabilities. When acting as a BS, UxNBs must be validated by the central setup. Since UAVs have limited power, one condition for utilizing them as a BS is to consume as little energy as possible. The use of UAVs is limited by their flying time and energy requirements. In conveyance services, for instance, using a single UAV results in a waiting period while the vehicle returns to base. As a result, UAVs should be used in swarm mode. Group management is the most basic requirement for a swarm of UAVs; it entails collective authentication and guaranteeing secure communication within a group.
This research focuses on the HO of UAV communication over wireless networks. A smooth HO is difficult to achieve with traditional wireless networks. When compared to cellular networks, UAV wireless communications have smaller communication coverage and a longer HO procedure. The conventional HO technique further assumes that the coverage area for different cells is the same, which is not the case with UAVs due to their varying heights. The HO of UAVs should be more closely and efficiently monitored than that of terrestrial UEs. The use of traditional HO methods and strategies may not be suitable for UAVs. Although numerous relevant arrangements have been discussed throughout the literature, the problem remains unaddressed. Since future mobile networks are expected to be self-sufficient, node mobility forecasting may be a critical technique for optimizing the benefits of UAV systems. A large number of contemporary arrangements follow distance-based assumptions [2,35].
The objective of this study is to highlight the mobility management of connected UAVs in future mobile networks (5G and 6G). The article covers current research efforts devoted to addressing the inherent difficulties of using UAVs. The main research goal is to answer the most important questions in wireless communications. For example, why is HO difficult for UAVs, especially when they can move freely in 3D? What are the current solutions to this problem? What are the future research directions in this field? This paper includes an assessment of the most significant practical solutions for resolving these problems. The central issues are outlined, and recent research is highlighted and discussed. This paper extensively reviews the necessity of integrating UAVs into modern wireless communication networks, providing scholars with abundant knowledge in this field.
The remainder of this work is organized as follows. Section 2 provides an overview of the relevant literature. Section 3 highlights important achievements in the field and presents background research information. Section 4 focuses on the research challenges. Section 5 reviews the published works related to this research. Section 6 provides the proposed solutions. Section 7 discusses future research directions. Finally, Section 8 concludes this paper.
UAV Technology in Wireless Communication
Connected UAVs will be a revolutionary invention that will provide a wide range of services throughout various settings. The requirement for constant connection while on the move is a key issue that must be addressed. Defining the concept of UAVs and HO management is essential. This section provides an overview of UAVs, UAV communication networks, the HO concept, and 3D parameters. The following subsections present an extensive summary of the various subtopics.
Overview of UAVs
The use of UAVs has skyrocketed in recent years and continues to do so across multiple industries and services. UAVs present low-cost solutions in several industries, such as healthcare and marketing, and can provide a wide range of solutions for different scenarios. At this stage, it is crucial to employ cutting-edge technology to ensure the safe functioning and administration of this developing innovation. For decades, billions of devices have been linked together on the ground; now, they are ready to be linked in the sky. Currently, UAVs can serve as wireless communication BSs to connect mobile users. However, several challenges will arise with connected UAVs before reduced latency, enhanced connection dependability, real-time data transfer, and remote installations can be achieved. The widespread adoption of contemporary developments, such as IoT and machine-to-machine communication (MTC), has significantly increased the number of UEs and MTC devices that interact with mobile systems. As the number of UEs inside a BS's scope increases, the quality of service (QoS) decreases. The UxNB can be a viable solution in regions with a high concentration of UEs, such as stadiums. The UxNB is a promising technology that can be applied in the future for capacity injection due to its fast deployment. However, this new technology also poses several security risks. When using the UxNB for capacity injection, mutual verification, the establishment of a communication link between the terrestrial BS and the UxNB, and quick HO procedures may all raise security problems. This new protocol also suggests that the UE transition from terrestrial BSs to the UxNB should be accomplished in groups.
UAV operations are primarily conducted at low altitudes in uncontrolled airspace. This airspace, which is regularly used for a range of existing flying exercises, contains critical infrastructure and is susceptible to changing conditions. In 5G, the AMF, the radio access network (RAN), and the UE are the most important components. The AMF is in charge of registration, managing connections, ensuring that UEs can be reached, and managing their mobility. With 5G networks, the supported speed can reach up to 500 km/h, and with 6G networks, it will be even higher. This network function makes it possible to handle the mobility of nodes. Radio transceivers are used by the RAN to connect UEs to cellular networks. The BSs connect the UE to the New Radio (NR) user plane and control plane protocols.
3GPP defines UE as a device used by an end user to communicate with another user or service.
Most pilots employ Visual Flight Rules (VFR) when flying in low-altitude airspace, as shown in Figure 1. Under VFR, each pilot is responsible for avoiding other aircraft or obstructions by maintaining a steady view of the region and other airspace users. Significant dangers associated with UAV movements are present in unclassified airspace if airframes are not monitored and human pilots are not present. The risk of bird collisions, building collisions, or accidents with other unmanned vehicles can cause significant issues among national aviation authorities. Collision avoidance frameworks will enhance the safety of unmanned aircraft. However, they are not designed to handle complex activities or movements of other planes and objects within the area. A new perspective is required to organize and monitor activities in low-altitude and unclassified airspaces. Several researchers are currently examining various methods to tackle the UTM challenge. Figure 1 presents the problem that administrative authorities must confront as well as the tasks required for a complete UTM framework. UTM is a traffic management ecosystem for movements that are not monitored by the Federal Aviation Administration's (FAA) Air Traffic Management (ATM) system. The UTM will be improved and developed to define the services and responsibilities assigned to UAV operations when flying at low altitudes without supervision. Information exchange protocols and other technical details will also be specified in control and communication operations.
UTM is the mechanism that manages airspace to facilitate and permit UAV operations performed beyond the visual line of sight (BVLoS), where standard air services are unavailable. As a result, UAV operators and the FAA will work together to determine and report the state of the airspace in real time. The FAA now imposes several restrictions on UAV operators to ensure safe management operations. The FAA and UAV operators mostly communicate through a distributed network of highly automated systems via Application Programming Interfaces (APIs); they do not coordinate through verbal communication, as pilots and air traffic controllers do.
UAV Communication Network
The IEEE 802.11 Wireless Local Area Network (WLAN) and radio technology both conduct command and control activities for most commercial UAVs. However, due to the UAV's speed and fluctuating altitudes, IEEE 802.11 is unable to meet the required conditions. Command and control activities can be accomplished in a non-licensed range; however, numerous security and reliability issues would arise. Cellular networks are the only option. Cellular networks are stable, secure, and capable of covering wide areas with acceptable data speeds. However, they are not designed to support flying devices despite substantial standardization efforts. The most pressing issues continue to be interference and radio coverage. Certain limits must be met when a cellular network is linked with a UAV to improve coverage and capacity. UAVs are used as relays or mobile BSs to enhance coverage, connectivity, and capacity. RANs are also simple to install in regions where no established network architecture is available. This implementation is a configuration style that can be set up in the event of a disaster to avoid investing time and money in new infrastructure. It is also beneficial for increasing capacity and coverage during significantly crowded gatherings, such as concerts and sporting events [2,[36][37][38]].
Antenna Tilting and Cell Association
To provide the best service to ground users, cellular BS antennas are tilted downwards. Aerial coverage has recently received significant attention, mostly for connecting airline passengers on domestic flights. Only a small number of BSs with upgraded antennas are necessary to ensure extensive coverage and continuous connectivity during the flight. However, due to construction and regulatory constraints, these methods cannot be used for commercial UAVs, which frequently fly at lower altitudes, such as 50-300 m, as illustrated in Figure 2. UAVs are fundamentally different from terrestrial users, since the assumptions that apply to terrestrial users do not apply to aerial users. Consider the following example: two BSs (A and B) have antennas tilted downwards with the primary lobes facing down towards the earth. The ground user connects to the BS; if the signal strength from both BSs is equal, the user will stay connected to the previous one. In the case of UAVs, side-lobe antennas are useful. Figure 3 shows that despite being closer to BS A than BS B, the UAV at Y1 is served by BS B. This will cause excessive HOs and ping-pong effects. This issue also applies to horizontal locations. For locations Y2 and Y4 in Figure 3, the picture can be expanded to include a large number of BSs, signifying that the rate of HO for UAVs will be more excessive than in conventional networks. Increasing the UAV's height will decrease the competitiveness of its service via the main lobes as long as the terrestrial BS antennas are slanted downwards. As a result, the service given to UAVs at high altitudes will be via the side lobes, which is not at the same level offered by the main lobes. Due to the increased potential of line-of-sight (LoS) at such high altitudes, UAV communication will suffer from uplink (UL) and downlink interference. This will create severe interference and navigation management issues. Increasing the altitude will allow the side lobes of the BS antennas to have more than one connection possibility depending on the location of the UAV. This raises the possibility of LoS communication links, which increases interference in neighboring cells when compared to ground UEs [39][40][41][42][43].
UAV Communication Scenarios
From a wireless perspective, a UAV in a 3D environment can act both as a mobile BS and as a mobile UE. Both of these scenarios are considered in detail below.
Flying Base Stations
A UxNB is a flying BS that connects the backhaul and access networks. The so-called flying ad hoc network (FANET) is formed when more than one UAV is included in a transmitting apparatus. FANETs are airborne frameworks for wireless ad hoc networks (WANETs) or mobile ad hoc networks (MANETs). An innovative aspect of the 5G network is the "network from the sky". UAVs have the ability to provide on-demand systems to specific regions due to their built-in mobility features, flexibility in three-dimensional space, adaptive elevation, and symmetric revolution. Thanks to these unique characteristics, ground users can benefit from premium services such as high-quality wireless connections, seamless connectivity, large data capacity, and low degradation. UAV integration with distant cellular systems serving as aerial communication platforms will open up previously unconsidered foundations, new perspectives, and numerous possibilities [44].
When compared to their earthly counterparts, several differences are unquestionably present. The average height of earthbound BSs in an urban setting is about 10-20 m, whereas UAVs can hover up to 100-120 m. This allows the UAV to have a longer range than traditional terrestrial BSs, further reducing interference from nearby terminals. Ground terminals are easily visible from various altitudes and vantage points with the UAV. UAVs can track users in 3D with high mobility. Traditional ground-to-ground communications suffer from higher path loss attenuation and fading, while UAVs can provide a better LoS channel probability. In such situations, a few key areas must be considered. Millimeter waves (mm-wave), for instance, are used in 5G systems. LoS is essential for delivering high recurrent transmission capacity to the network. Since the LoS condition allows for effective beamforming in 3D space, UAVs are good candidates for 3D Multiple Input Multiple Output (MIMO). The idea of using UAVs as BSs is represented in Figure 4.
Normal User
Due to obstacles in the direct LoS path, the signal to and from the BS for a terrestrial UE is regularly deflected or diffracted. As a result, the UE's received signal quality will be significantly reduced. BSs are typically located at high elevations, such as cell towers or building tops. The likelihood of obstacles obstructing the LoS path dramatically decreases as the UE ascends to a higher altitude, as in the case of a hovering UAV. The signal quality improves as the path loss decreases, since signal propagation through the sky is close to free-space propagation. However, the UAV can also have LoS access to a number of nearby non-serving BSs. The increased likelihood of LoS paths to numerous non-serving cells will increase the interference experienced by the UAV, since the cells share the same radio assets. The signal-to-interference-plus-noise ratio (SINR) may therefore be low, making it difficult for the roaming UE to quickly receive and decode mobility-management-related signals (for instance, HO commands). Figure 5 presents the normal-user scenario of UAVs in wireless communication.
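To make the altitude effect concrete, the following sketch contrasts a log-distance path-loss model for a cluttered ground link with near-free-space propagation for an elevated UAV link. The carrier frequency and path-loss exponents are illustrative assumptions, not values taken from the cited works.

```python
import math

def path_loss_db(distance_m: float, freq_hz: float, exponent: float) -> float:
    """Log-distance path loss: free-space loss at the 1 m reference plus 10*n*log10(d)."""
    c = 3e8  # speed of light, m/s
    # Free-space path loss at the 1 m reference distance (Friis equation).
    pl_ref = 20 * math.log10(4 * math.pi * 1.0 * freq_hz / c)
    return pl_ref + 10 * exponent * math.log10(distance_m)

f = 2.0e9          # 2 GHz carrier (assumed)
d = 500.0          # 500 m link distance (assumed)
ground_nlos = path_loss_db(d, f, exponent=3.5)  # cluttered ground link (assumed exponent)
aerial_los = path_loss_db(d, f, exponent=2.0)   # near-free-space UAV link

print(f"NLoS ground user: {ground_nlos:.1f} dB")
print(f"LoS aerial user:  {aerial_los:.1f} dB")
# The ~40 dB gap illustrates why an elevated UAV "hears" many cells at once,
# improving the serving signal but also raising interference from neighbours.
```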
UAVs in 5G Networks
As commonly known, 5G will transform multiple aspects of society. UAVs will likely be a significant tool used to demonstrate the full potential of 5G technology. UAV connection may even be possible with 4th generation (4G) LTE, which would be advantageous. UAVs are currently used as flying sensors linked to 4G networks. These sensors can convey data over great distances while remaining securely outside the pilot's line of sight.
The International Telecommunication Union (ITU) provided an overview of the differences between 4G and 5G networks. This agency developed the capabilities that differentiate broadband cellular network generations. When discussing wireless networks, two generations must be highlighted: 4G and 5G. In 2009, telecommunication operators deployed 4G and continued its management until 2019. During that time, 4G was widely employed, allowing users to download movies and use GPS in cars. In 2019, Verizon pioneered 5G, launching a commercial 5G ultra-wideband mobile network in different sections of two cities. 5G enables rapid data transmission speeds despite the massive amounts of data acquired from connected devices. Overall, 5G offers high data rates, low latency, energy and cost efficiency, increased system capacity, and widespread device connectivity.
The rate at which data are successfully transmitted across a network is referred to as throughput. Peak data rates of up to 10 Gbps are achievable with 5G. At this level, driverless vehicles, fabrication, and virtual reality (VR) can rapidly advance. This further indicates that UAVs will be capable of transmitting large volumes of data. 5G technology allows devices to communicate at speeds of up to 500 km/h. Commercial UAVs will be able to inspect vast lengths of highways in minutes while maintaining a network connection such that data can be promptly transmitted. The 5G network can serve millions of devices in a single square kilometer. Numerous organizations can be completely transformed and developed, ranging from home parcel delivery to search and rescue operations. The energy efficiency of 5G ultra-wideband will also be enhanced, and delays will be further reduced, reflecting the impact of lower latency. It is not uncommon for audio and visual images to lag from time to time; 5G data transmission will be much faster than the blink of an eye, with an end-to-end reaction time of roughly 10 milliseconds. This provides UAV and sensor operators with a near-real-time experience. With low latency, autonomous UAVs can navigate with tremendous precision due to instant communication.
Mobility with UAV Technology
The term "handover" refers to the process of switching from one cell to another while maintaining connectivity. Which is a core part of mobility management, if not the most important part. Beginning with the mobility management and HO concepts, which are regarded as the most important terms for understanding wireless technology in general, this section provides the reader with a comprehensive organized overview of mobility management with UAV technology. Following this conceptual review, we introduce mobility and HO in 3D for a variety of scenarios that correspond to UAV flights through space. This will provide the reader with a solid foundation for understanding the key components of the wireless network infrastructure that supports UAVs [45][46][47].
Mobility Management Concept
In the ideal case, a mobile UE's connection to the serving wireless network should remain stable even as the UE moves between cells; this is the definition of mobility in wireless networks. When comparing wireless and wired networks, this is often cited as an advantage of the former. The UE's mobility allows it to move in a variety of ways. As long as there is coverage, the UE can switch its connection as it moves from the first cell (known as the serving BS) to a new cell (known as the target BS). The original serving BS can reroute the connection to the new target BS. All of these enhancements make wireless services more accessible to more users in more situations. The received signal strength (RSS) fluctuates continuously as the UE moves. A HO procedure is initiated when the RSS at a given location falls below a certain threshold defined by the RSS Indicator (RSSI). First, the serving BS sends a request to the target BSs, requesting that the UE's connection be rerouted to the target BS with the strongest signal. As a result, in the best-case scenario, the UE's connection to the serving networks remains stable throughout the user's journey [45].
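A minimal sketch of this RSS-triggered decision, assuming an illustrative threshold and a hysteresis margin (a common safeguard against ping-pong, not a parameter specified above):

```python
def handover_decision(serving_rss: float,
                      neighbour_rss: dict[str, float],
                      threshold: float = -100.0,   # dBm, assumed RSSI threshold
                      hysteresis: float = 3.0) -> str | None:
    """Return the target BS id if a handover should be initiated, else None."""
    if serving_rss >= threshold:
        return None  # serving link still acceptable: no HO needed
    # Pick the strongest neighbour...
    target, target_rss = max(neighbour_rss.items(), key=lambda kv: kv[1])
    # ...and only switch if it is meaningfully better (avoids ping-pong HOs).
    if target_rss > serving_rss + hysteresis:
        return target
    return None

print(handover_decision(-105.0, {"BS-A": -99.0, "BS-B": -96.0}))  # -> 'BS-B'
```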
Handover Concept
HO is the process of preserving a connection in wireless mobile networks; it includes numerous scenarios in which the user maintains a connection while moving from one location to another. The method involves changing the BS that was previously serving the moving user to one that currently offers a better connection. This change occurs through various scientific and technological procedures [45][46][47]. The HO method was developed to manage wireless mobile connections while users are traveling, so as to provide highly dependable and smooth communication. In reality, HO is intended to boost user throughput while decreasing radio link failure (RLF) and interruption time. The dependability and quality of the serving network will improve if the HO management strategy is enhanced [48].
To complete the transfer, three activities must be accomplished. In the first phase, the mobile station can easily locate the surrounding BSs, since each BS broadcasts a mobile neighbor advertisement (MOB NBR-ADV) that identifies the radio channel and media access control (MAC) address. In the second phase, the target wireless network and the HO timing are selected after scanning for surrounding BSs: the mobile station first sends a scanning request (MOB SCN-REQ), and the BS answers with a scanning response (MOB SCN-RSP) that grants a scan time to the mobile station and contains the list of target BSs. The HO decision is then initiated by a mobile station HO request (MOB MSHO-REQ). In the third phase, the transition to the new wireless network is completed. The main HO procedures are listed in Table 1, and a minimal sketch of this message exchange follows the table [35,49]. Table 1. HO procedure list.
HO Procedure: Description
Source-inter eNB HO: This occurs when the user leaves the coverage area of an evolved Node B (eNB) and enters another area covered by another eNB (within E-UTRAN).
User-inter eNB HO: This occurs when the user enters a coverage area managed by one eNB from one managed by another eNB (within E-UTRAN).
Source-inter RAT HO: This occurs when the user leaves the E-UTRAN cell.
User-inter RAT HO: This occurs when the user enters the E-UTRAN cell.
Source-intra eNB HO: This occurs from one sector to another when the user leaves the sector.
User-intra eNB HO: This occurs from one sector to another when the user enters the sector.
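The message exchange described above can be summarized as a simple sequence trace; the sketch below only names the messages from the text and is not a protocol implementation:

```python
from enum import Enum

class Msg(Enum):
    MOB_NBR_ADV = "BS broadcasts neighbour list (radio channel, MAC address)"
    MOB_SCN_REQ = "Mobile station requests a scanning interval"
    MOB_SCN_RSP = "BS grants scan time and returns the list of target BSs"
    MOB_MSHO_REQ = "Mobile station initiates the HO decision"

# Phase 1: discovery; Phase 2: scanning and target selection; Phase 3: execution.
sequence = [Msg.MOB_NBR_ADV, Msg.MOB_SCN_REQ, Msg.MOB_SCN_RSP, Msg.MOB_MSHO_REQ]
for step, msg in enumerate(sequence, start=1):
    print(f"{step}. {msg.name}: {msg.value}")
```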
Several HO methods are used to support user mobility, defining when and how the UE should perform HO. Various optimization options are available, such as selecting the routing protocols and suitable objectives for any access point/base station (AP/BS). The availability of mm-waves in modern wireless technologies (5G and 6G) further complicates the selection of an adequate HO. The latest technologies allow for faster mobility, reaching speeds of up to 350 km/h in 4G and 500 km/h in 5G.
HO is well known for assessing wireless communication performance, and various requirements and indicators have been developed to represent network performance during HO operations. The first requirement is that the relationship between the BS and the UE must be kept as stable as possible during the eNB transition. The second requirement is the HO interruption time, defined as the time during which the UE is not permitted to deliver user plane packets to the BS; to ensure a smooth UE experience, the interruption time should be extremely short, such as less than 1 ms. The third requirement is the HO cost, computed by multiplying the mobility interruption time per HO by the number of HOs along the trajectory of a specific UE. The fourth requirement is the HO failure rate, calculated as the number of HO failures divided by the number of times the UE processes the HO. The fifth requirement is the signaling overhead, defined as the data generated during HO processing to facilitate the method.
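The cost and failure-rate requirements translate directly into simple metrics. A minimal sketch, assuming per-HO interruption times are logged along the UE trajectory:

```python
def ho_cost(interruption_times_ms: list[float]) -> float:
    """HO cost: mobility interruption time per HO multiplied by the number
    of HOs along the UE trajectory (equivalently, the summed interruptions)."""
    n_handovers = len(interruption_times_ms)
    mean_interruption = sum(interruption_times_ms) / n_handovers
    return mean_interruption * n_handovers

def ho_failure_rate(failures: int, attempts: int) -> float:
    """HO failure rate: failed HOs divided by the number of HOs processed."""
    return failures / attempts

trace = [8.0, 12.5, 9.0]           # per-HO interruption times in ms (example values)
print(ho_cost(trace))              # 29.5 ms total interruption cost
print(ho_failure_rate(1, 20))      # 0.05
```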
Another issue is load balancing between BS cells: if a cell of the serving BS is congested, other UEs may have to move to a different cell. Another benefit of using HO is that it saves cost by establishing a connection to a neighboring BS with lower communication power, and it further preserves device battery life by regulating the transmitted power [48,50].
Mobility in Three Dimensions (3D)
UAVs typically fly at high speeds above the BS antenna height in 3D space. 3D mobility changes the UAV's altitude, which in turn influences the propagation channel characteristics. Thus, 3D coverage that can adapt to changing UAV elevations is required, and speed limitations must be maintained.
Communication Coverage in 3D
The data transmission coverage of a wireless network is referred to as "communication coverage". When the coverage area shrinks, so does the RSS. The RSS may be defined in 3D space using the altitude value. During the transition phase, the terminal decides whether to remain on the current network or to transfer to an adjacent one as the new base station. The conventional two-dimensional (2D) HO determination approach does not apply to UAVs due to their varying altitudes. Several strategies and algorithms are used to make HO decisions, and the RSS determines the best coverage option. To compute the coverage of a UAV BS (UBS), a log-distance RSS model of the following form can be applied [49]:

RSS(d) = RSS_min - 10 n log10(d) + ε

where n is the path loss exponent, RSS_min is the lowest RSS value necessary for a terminal at the one-meter reference distance between sender and receiver, d is the distance between the receiver and transmitter, and ε is a zero-mean Gaussian shadowing term. Figure 6 presents the radius of the BS, where A indicates the height of the UAV and d represents the extent of the BS's coverage in 3D space. HO occurs when the link between the BS and the UAV has stretched beyond this coverage.
Equations (1) and (2) describe how A and R are obtained, as shown in Figure 6.
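As a worked example under the model above: inverting the RSS equation (ignoring the shadowing term ε) gives the maximum slant range at which the RSS threshold is still met, and simple geometry, one plausible reading of Equations (1) and (2), relates that range and the UAV height A to the ground-projected radius R. All numeric values are illustrative assumptions.

```python
import math

def max_slant_range(rss_ref_dbm: float, rss_threshold_dbm: float, n: float) -> float:
    """Invert RSS(d) = RSS_ref - 10*n*log10(d) for the largest d meeting the threshold."""
    return 10 ** ((rss_ref_dbm - rss_threshold_dbm) / (10 * n))

def ground_radius(d_max: float, altitude: float) -> float:
    """Project the slant range onto the ground: R = sqrt(d_max^2 - A^2)."""
    return math.sqrt(max(d_max**2 - altitude**2, 0.0))

d_max = max_slant_range(rss_ref_dbm=-40.0, rss_threshold_dbm=-100.0, n=3.0)
print(f"max slant range: {d_max:.0f} m")                        # 100 m for these numbers
print(f"ground radius at A = 60 m: {ground_radius(d_max, 60.0):.0f} m")  # 80 m
```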
Speed Limitation in 3D
WiFi, WiMAX, and cellular technologies are all available networks today. Smart devices rely on mobility-based network services, thereby increasing demand: consumers expect an internet connection at all times and from any location. It is important to remember that when UAVs travel faster than ordinary UEs, HOs may occur frequently. HO depletes energy and causes connection delays. To address this issue, the speed of the UAV must be limited through an RSS-based constraint, in which w signifies the RSS value. As the distance between the BS and the terminal increases, the received signal intensity decreases. The scenario then moves to the execution phase when the RSS falls below the threshold [49,51,52].
Handover in UAV Networks
HO performance is a crucial indicator of UAV network performance, since it reflects the network's flexibility. UAV coverage fluctuates with transmission power and altitude. Researchers are investigating seamless HO to provide ground users with dependable HO. HO in UAV networks is not similar to HO in traditional cellular networks. To deliver ongoing services to mobile users, an intelligent HO approach has been devised. Effective solutions based on enhanced software have been provided to achieve quick HO in UAV networks. Knowing the quantitative expression of the likelihood of HO can aid in the construction of system guidelines. Small stochastic geometric models of UAV networks can also be created for evaluating mobility performance. By simulating UAV movement with a random mobility model, the statistical aspects of the channel gain can be examined [53][54][55].
UAV Handover Scenarios
In 3D space, UAVs typically fly at high speeds above the BS antenna height. Additionally, a UAV can function as either a standard mobile UE or a flying BS. Based on these two cases, different types of HO scenarios emerge.
HO Scenarios with Flying Base Stations
When a UAV is used as a flying BS, three scenarios can exist. In the first scenario, the UAV experiences HO when it changes its ground BS. In the second, a UE changes from connecting to the serving UAV BS to connecting to another serving BS. In the third, the UAV experiences HO when it changes its serving satellite node to another one. These scenarios are illustrated in Figure 7.
Figure 7. HO for UAVs as base stations in a future ultra-dense heterogeneous network.
HO Scenarios with Normal User
When the UAV acts as a mobile user above the ground, two scenarios are possible, both involving the UAV changing its connection to a different BS. This also applies to satellite communication systems, as UAVs can switch from one satellite node to another. These scenarios are illustrated in Figure 8.
UAV Handover Based on Machine/Deep Learning
In 5G networks, the use of mm-waves at higher frequencies will present new challenges for HO management that will be difficult to overcome using traditional methods. Significant attenuation is present in these frequency ranges, limiting transmission distance. As a result, more BSs are required to cover the same area as those using microwave frequencies [56]. Directional beams are used in mm-wave transmission. Obstacles in the path of the transmitted beam may prevent the user from connecting to the network or may deteriorate signal quality. As a result, users in mm-wave communication networks must determine which beam to connect to at any given time to optimize their QoS. Deciding on the best beam has become a new factor in the HO management process. The large number of beams from which the user must choose makes the HO technique significantly more difficult [57][58][59][60].
The network's self-optimization will improve with the use of machine learning techniques. ML techniques can learn diverse attributes from data provided by the network to optimize different network sections. They can detect hidden network features and patterns in network data that analytical methods are unable to detect [61]. They are self-adaptive, meaning they can respond to changes in the network environment and, in some cases, anticipate future organizational or user needs, allowing the network to prepare before they occur [62]. They can also be structured so that the preparatory stage of the calculation, which is often computationally expensive, is completed offline before the actual calculation [47]. The trained model is deployed online to allow for real-time optimization, yet it is rarely updated, since most newly encountered data adds little information [63].
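As a concrete, deliberately minimal illustration of this offline/online split, the sketch below trains a small classifier offline on hypothetical labelled HO decisions and then serves cheap real-time predictions; the features, labels, and choice of a decision tree are assumptions for demonstration only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# --- Offline phase: computationally expensive, run before deployment ---
# Hypothetical training rows: (RSS dBm, SINR dB, speed m/s, altitude m),
# labelled with the HO decision that maximized QoS in past data.
X_train = np.array([[-80, 12, 5, 50], [-100, 2, 20, 120], [-70, 18, 3, 30]])
y_train = np.array([0, 1, 0])  # 0 = stay on serving cell, 1 = hand over

model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# --- Online phase: lightweight inference, runs in real time ---
def ho_decision(rss_dbm, sinr_db, speed_mps, altitude_m):
    return int(model.predict([[rss_dbm, sinr_db, speed_mps, altitude_m]])[0])

# The model is retrained offline only occasionally, matching the
# "rarely updated" behaviour described in the text.
print(ho_decision(-95, 3, 15, 100))
```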
Research Challenges
UAV-based networks face numerous challenges. It is difficult to establish reliable, low-latency UAV control communication in a cellular network. The availability of infrastructure is a major goal for improving terrestrial communication services. Terrains with limited terrestrial BS coverage may be unable to provide connectivity services to cellular-connected UAVs, necessitating a solution for effective deployment of the technology. Numerous studies and research projects have demonstrated that low-altitude UAVs can be used in conjunction with cellular networks. With low-latency, high-throughput cellular network spectrums, UAV integration will be possible. According to 3GPP, aerial UAVs have lower SINR than terrestrial UEs. Excessive HOs and HO failures are among the issues. These concerns must be addressed because the consequences could seriously impact network stability in the future. Environmental implications, routing protocols, channel effects, antenna designs, and HO management must all be considered to maximize UAV benefits. These issues must be addressed to properly connect UAVs. A UAV-based cellular network faces challenges in terms of 3D coverage area and ground channel [36,64,65]. These challenges are discussed in this section and summarized in Table 2 so that the reader can get a complete picture of them.
UAV Operations in LTE
LTE technology is well suited to serving air vehicles, particularly at low altitudes, and this provides great potential for the rapidly growing number of UAVs in use. This growth, in turn, creates numerous commercial opportunities for modern communications and will consequently require future improvements to LTE networks to serve the anticipated rapid growth of aircraft.
Mobility in 3D
Aerial and ground UEs are based on different assumptions. UAV network services differ from traditional networks in that they use a 3D model rather than a 2D model. UAVs are incredibly mobile, making control and decision-making difficult. As a result, advanced mobility solutions will be required.
UAV-Ground Channels
One of the most complex design difficulties in producing cellular-connected UAVs is creating coexistence mechanisms between terrestrial and airborne users. UAV-ground interference management must be put in place to achieve this coexistence. The communication channel between the ground BS and UAVs has very distinct interference patterns. The elevation or angle of the UAV influences channel parameters such as shadowing and path loss exponents, and the resulting models can be used in residential or sub-residential environments, depending on deployment.
Transmission Protocols
UAVs can scan and capture data while dropping data packets, according to several patent applications. Transmission Control Protocol/Internet Protocol (TCP/IP) will be insufficient for UAVs. As a result, new methods based on UAV mobility must be devised.
General Challenges of Connected UAVs
One of the main concerns is the danger associated with monitoring airborne applications. Pilot preparation, flight length, climate conditions, and risk restrictions are all key factors when operating a tracked flying machine [66,67]. Connected UAVs and unmanned aerial system (UAS) technology can be used to place unmanned aircraft in situations where a human pilot cannot be placed due to various risks. UASs can also be used to gather extensive information regarding the progress of human activities to aid in risk mitigation and reduce the amount of time people spend in potentially dangerous situations. UAS technology is fully embraced by several development organizations to help mitigate the risks associated with various situations, such as foundation reviews. The vast majority of assessments rely on 'human eyes' on the ground to inspect the condition of basic components and determine whether maintenance is required. Support crews can evaluate the structure with low-cost UASs while staying on the ground, avoiding dangerous and time-consuming climbs. Take, for example, how energy firms assess the major arch framework [68]. On rare occasions, administrators may choose a helicopter inspection or require a support crew to attempt a climb to examine the arch foundation from the outside. Maintenance personnel can use a UAS to conduct an initial inspection from the ground, avoiding perilous climbs and reducing casualties. Because a UAS inspection takes less time and requires fewer people than an actual climb, crews can inspect arches more frequently or with a smaller team. If an irregularity is discovered, its severity and impact can be assessed on the spot. Although a human climb may be required to solve the problem, the maintenance crew can ensure that they have the replacement components, the right instruments, and the right people on hand to complete repairs. Individuals can also be rescued from dangerous situations and accidents by implementing comparable risk reduction and mitigation strategies across numerous verticals [69][70][71][72].
UAV Operations in LTE
Long Term Evolution (LTE) is well suited to serve aerial vehicles such as UAVs. Field tests in which LTE systems connect UAVs to networks are becoming more common. The number of UAVs is expected to grow rapidly, providing modern and exciting trade opportunities for LTE telecommunication companies. LTE network enhancements can be made in the near future to better prepare for the expected increase in data traffic from aerial vehicles. Radio propagation parameters encountered by an airborne UE, for instance, are likely to differ from those encountered by a ground-based UE. An aerial vehicle behaves like a normal UE as long as it flies at low altitudes, within the radio line of the BS. Once it flies well above the BS antenna height, its UL signal becomes visible to many cells due to LoS propagation conditions and can interfere with the cells around it. Increased interference is harmful to UEs on the ground, such as smartphones and IoT devices. To maintain normal throughput performance for the UE, the network may impose restrictions on the admission of aerial vehicles. UAVs also have administrative authorizations that are unique to them. There are two types of "UAV UE" in the field: the UAV with a cellular module that is approved for use in the air, and the UAV with a cellular module that is only authorized for terrestrial use. Not all districts allow their usage due to administrative concerns, since the UL signal from a UE can interfere with nearby cells. Another point to emphasize is that the processing time in LTE systems is high while the mobile station moves quickly, which raises the possibility that LTE will not be a supportive network for UAVs, particularly in high-speed scenarios. In addition, the HO delay will be extended beyond the standard 30 ms execution time. This may cause the UAV to fly outside of its coverage area without performing the necessary HO, disrupting the connection and reducing communication efficiency. Because LTE networks have limited capacity and BW compared to 5G and 6G networks, they may not be suitable for supporting UAV communication given the massive growth of mobile devices connected to the network.
Mobility in 3D
Current radio access technologies are not well suited to serving flying radio devices, since their deployments are largely geared toward terrestrial UEs. BSs are typically built and tuned to provide the best possible performance for ground users. Their downward-tilting antennas produce radiation patterns that are unsuitable for serving aerial UEs, which are expected to be positioned at various heights above ground level. Since the aerial user frequently flies above the BS antenna height, 3D coverage that can adapt to changing UAV elevations is required. The BS antennas of LTE networks may be able to achieve efficient channel gain by utilizing their side lobes. The BS antenna height, UAV height, antenna design, and association criteria all play a crucial role in determining UAV coverage patterns in 3D space. As a result, the network model requires a 3D coverage model for aerial users alongside terrestrial users. UAV network services differ from traditional networks in that mobility follows a 3D model rather than a 2D model. UAVs are extremely mobile, rendering their control and decision-making processes difficult. Advanced mobility solutions will be required as a result [49,[73][74][75][76].
UAV to Ground Channel
Creating coexistence mechanisms between terrestrial and airborne users is one of the most complex design challenges for developing cellular-connected UAVs. To achieve this coexistence, UAV-ground interference management must be put in place. Unlike the ground BS to ground UE communication link, the ground BS to UAV communication link has very different interference patterns. UAVs may establish LoS communications that are more dependable than those with terrestrial users, since they fly higher than BSs. They also make use of significant macro-diversity gains provided by many BSs. For ground users, on the other hand, the dominant LoS connections generate more UL/DL interference, making Inter-Cell Interference Coordination (ICIC) extremely difficult. Fading, shadowing, and path loss are also important considerations. Traditional ICIC solutions may be adequate for existing cellular designs; however, they fall short for UAV interference control, which involves a large number of BSs and imposes limitations due to its complexity. As a result, effective interference management strategies are required for the coexistence of ground users and UAVs. Several works on the subject of downlink and uplink interference are available [49,[77][78][79].
The most common types of links in the communication channel are Ground-to-UAV (G2U) and UAV-to-Ground (U2G). The G2U link provides downlink control and command for suitable UAV operations in cellular-connected UAVs, while the U2G link provides UL payload communication. Rayleigh fading is the most frequently used small-scale fading model for terrestrial channels; however, Nakagami-m and Rician small-scale fading are more common for U2G channels due to the presence of LoS propagation characteristics. Large-scale fading is altered by the 3D coverage area and the varying heights of UAVs. A free-space channel model, an altitude/angle-dependent channel model, or a probabilistic LoS model can be used as the large-scale fading model, as follows:
1. In the free-space channel model, fading and shadowing have little effect and interference is low. This method is most effective in areas where the LoS assumption holds true between high-altitude UAVs and ground stations. Low-altitude UAVs may encounter non-LoS connections in urban environments, necessitating additional methods to accurately assess the propagation environment.
2. In altitude/angle-dependent channel models, channel characteristics such as shadowing and path loss exponents depend on the UAV's elevation or angle. Depending on the deployment, these variants can be used in residential or sub-residential settings. Altitude-dependent models may not be appropriate if the height does not change or if UAVs fly horizontally. Models based on elevation angles are commonly applied in analytical research, but the literature on them remains sparse.
3. Approaches based on probabilistic LoS models are frequently adopted for residential scenarios where, due to buildings, obstructions, or bottlenecks, both LoS and NLoS links between UAVs and the ground occur. The LoS and NLoS components are modeled separately according to their likelihood of occurrence, and their propagation characteristics are statistically determined by the nature of the residential environment in terms of building height and density (a sketch of this probabilistic model follows the list).
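As a concrete instance of the third category, the sketch below implements the widely used sigmoid LoS-probability model (in the style of Al-Hourani et al.) and the resulting probability-weighted path loss; the environment parameters a, b and the excess losses eta_los/eta_nlos are illustrative urban values, not numbers taken from this survey.

```python
import math

def p_los(elevation_deg, a=9.61, b=0.16):
    """Sigmoid LoS probability versus elevation angle; a and b are
    environment-dependent constants (urban values shown)."""
    return 1.0 / (1.0 + a * math.exp(-b * (elevation_deg - a)))

def mean_path_loss_db(distance_m, elevation_deg, f_hz=2e9,
                      eta_los=1.0, eta_nlos=20.0):
    """Free-space path loss plus the LoS/NLoS excess losses, each
    weighted by its likelihood of occurrence."""
    c = 3e8  # speed of light, m/s
    fspl = (20 * math.log10(distance_m) + 20 * math.log10(f_hz)
            + 20 * math.log10(4 * math.pi / c))
    p = p_los(elevation_deg)
    return fspl + p * eta_los + (1.0 - p) * eta_nlos

# A UAV link of 1 km seen under a 45-degree elevation angle at 2 GHz:
print(round(mean_path_loss_db(1000.0, 45.0), 1), "dB")
```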
Transmission Protocols
Data in UAV-based connected networks must be rerouted from one serving UAV BS to another serving BS as UEs switch BSs during the HO process. This makes the transmission protocol important in UAV-based connected networks and worth highlighting. Several companies have successfully filed patent applications detailing how UAVs scan and acquire data while dropping data packets, such as a queuing delay and transmission delay (QDTD) routing protocol. This solution, however, employs a complex processing method that requires more computation time, resulting in data delays and a decrease in data throughput [80]. Another protocol is an adaptation of the distance routing effect algorithm for mobility (DREAM) protocol, which includes a location service, a local database, and a routing agent. The location service keeps track of each node's location; to compute location, the distance effect given by the difference in velocities of two nodes is used [81]. Another protocol operates at the Media Access Control (MAC) layer, with directional antennas at the top and bottom of the aircraft; the UAV senses the medium to determine whether there is any active communication [82]. TCP/IP and other traditional transmission methods will be insufficient for UAVs. As a result, new procedures based on the mobility characteristics of UAVs must be developed [83].
Dominance of LoS
The radio environment differs from the terrestrial environment; therefore, issues may arise as elevation increases. UAVs cause significant interference to BSs in cellular networks when aerial and terrestrial users work together. This is a problem for terrestrial users who utilize UL communication services. The prevalent LoS conditions determine the characteristics of UAV communication channels and have an unavoidable impact on the HO mechanism, since no barriers are present in the sky. This characteristic must be considered when constructing UAV-based networks. UAVs are subjected to frequent HOs and ping-pong effects as a result of their fast mobility, which causes rapid channel shifts. Several studies have suggested methods for reducing UAV crashes in cities while simultaneously easing traffic congestion. UAVs may encounter unexpected scenarios or tasks in smart cities which require relevant solutions. A number of drawbacks must be addressed in HO research for UAVs [84,85].
Related Works
The literature is crucial when it comes to incorporating UAVs into future networks. This section includes a review of relevant studies. Several research techniques, as well as the findings and outcomes of these efforts, are briefly described along with suggestions for future improvements. These papers mostly discuss UAV HO decision algorithms based on mathematical models or machine learning techniques. UAV architectures and new use cases are also examined. The following research papers are listed in chronological order and a summary of challenges and previous contributions has been provided in Tables 2 and 3. This overview can provide insight into the best UAV integration techniques and serve as inspiration for future efforts.
UAVs and cellular networks are becoming increasingly popular research topics. Recent proposals with unique solutions have been made to tackle scientific, technical, socioeconomic, and security issues. Several surveys, examples, and tutorials are also presented in the literature to provide clear information regarding this research topic. These works allow the research community to keep track of ongoing studies and aid practitioners and researchers in acquiring necessary information. Several surveys and tutorials have focused on (a) the possibility of integrating UAVs with 5G/B5G cellular networks from the aspect of UAV-based cellular communication, (b) current advances, future trends, and challenges for UAV-based cellular communication, and (c) extensive analysis and performance studies regarding a specific communication challenge, such as channel modeling.
The authors have presented a method for determining a UAV network's coverage. Constraints in the UAV network, battery capacity, and HO management have led to communication disruptions and other challenges, such as frequent HOs. Since UAVs are positioned at different coverage areas and heights, traditional HO algorithms do not work. To maintain coverage, the recommended solution uses RSS to change the height and separation distance of each UAV. Several simulations were performed to determine the likelihood of a smooth transition using the seamless HO success probability (Ps) and the false HO initiation probability (Pf). Since the coverage algorithm matches all UAV coverage and heights to the lowest feasible values, the spacing between the UAVs can be tuned using Pf and Ps. Pf grows as the vertical space between the overlapping sections shrinks, and these sections shrink as the distance between the UAVs increases and Ps decreases. The chances of achieving a smooth HO decrease as the average RSS measurement increases, while the chances of an inaccurate HO lessen as the average RSS measurement duration increases. According to the simulation results, the proposed technique is a strong candidate for UAV networks and performs well in simulations. However, more realistic scenarios must be considered, including the UAV's payload, the radio range of the BS, and other factors. Moreover, the coverage algorithm equalizes the RSS of each UAV, which may or may not be acceptable in practice.
In 2004, 2010, and 2012 [86][87][88], a novel HO decision technique was developed by establishing innovative HO criteria. The HO decision was made using a fuzzy inference method that considers several factors in HO decision situations. This paper examined fuzzy MADA methods and various proposed methods based on this approach, as well as their sensitivity. The HO approach based on an optimization algorithm was also suggested for cellular networks.
In 2004, 2012, and 2016 [88][89][90], the HO method was discussed for 3D aerial networks. As we know, the 3D method differs from the classic 2D approach: the height of the UAV and the distance between UAVs must be adjusted. The likelihood of seamless successful HO and false HO was also evaluated for the best coverage assessment technique. The authors devised the HO decision to select the appropriate network. The use of a fuzzy logic approach made it possible to manage inaccurate data, which is a useful enhancement that enables multi-criteria decisions. The authors then created an adaptive HO management approach based on a fuzzy logic system that works in conjunction with an existing cross-layer HO protocol. Based on a comparison between the performance of the existing and proposed approaches, the suggested technique outperforms the traditional method with noticeably better intra-system and inter-system HO.
In 2007, 2008, and 2014 [91][92][93], MATLAB was employed to develop a vertical HO scheme. Since MATLAB was the platform used, the proposed approach is suitable for wireless wide area networks (WWAN) and cellular networks. To construct the fuzzy logic quantitative decision algorithm (FQDA), eight factors were considered, with 81 rules used for the eight factors, which were then compared to 6561 rules. Algorithm models were developed and implemented according to various criteria based on vertical HO. These vertical HO methods were demonstrated in heterogeneous wireless network (HetNets) WWAN and WLAN environments. The methods suit the IP-based workforce automation sector, allowing unrestricted mobility across networks while connecting via IP mode using a single device on multiple networks. The vertical HO approach was used to integrate Wi-Fi (IEEE 802.11) and WiMAX. The signal-to-noise ratio, moving speed, and signal strength were all factors considered in this study. NS2 and NS3 were used to create the simulation.
In 2010, 2013, and 2019 [94][95][96], two approaches to vertical HO were introduced in a HetNets environment. A fuzzy inference system was used, as well as subtractive clustering techniques. According to the simulation, the approach makes the HO procedure easier and faster for users of different protocols. The authors proposed a method for conserving energy and battery life by using fuzzy logic; mobile phones with LTE and Wi-Fi capabilities can also benefit from reduced battery consumption. Researchers proposed a method for performing HO in 3D space by considering speed and coverage constraints, and a fuzzy inference system was created to make HO decisions.
In 2010 and 2014 [35,97], a speed-adaptive system with a knowledge method was created to enhance the rate of updating the network's candidate set. The decision algorithm combines vertical handoff, fuzzy logic, and pre-HO decisions in order to generate effective and efficient judgments. A performance study was conducted to compare the proposed work with the typical RSS approach. According to its findings, the suggested method improved performance by reducing unnecessary HOs and the rate of blocked or dropped calls. Many HO algorithms were measured and used to reduce HOs in the network. According to the results of this survey, popular algorithms were developed to address complex challenges, though they sometimes lack clarity or sufficient detail.
In 2011 and 2018 [56,98,99], the authors listed several problems that mobile aerial users face. In most recent static cellular deployments, the sidelobes of the antenna design serve aerial users; as a result, the connection pattern is fragmented. Due to the fragmented connection and low SINR, a higher risk of radio connection and HO failures is present. The uneven connection pattern, in which a user is returned to its original cell within a set time limit, will lead to more ping-pong HOs. While LTE is designed to allow users to travel at speeds of up to 350 km/h, it is based on large cell areas rather than the sidelobe-based cell attachment patterns seen with UAVs. The 3GPP research item has identified cell selection, HO efficiency, and robustness as critical performance criteria for aerial users in cellular networks.
In 2013, 2015, and 2016 [75,100,101], the authors implemented a machine learning technique for the UAV network as a potential solution. Machine learning is seen as a promising approach in this field since it can predict node mobility. Current prediction solutions are based on distance measurements. To move beyond a two-dimensional formulation, a categorization of movement into classes based on nodes' predicted near-future positions has been proposed. Acceleration also has a significant impact on the likelihood of 3D node movement. The motion trajectory was calculated using state transition equations to determine the object class, and the calculations were then used to clarify the mobility parameters. To complete these procedures, several steps must be accomplished. The most important step is to use an online class identification module to determine the classes and parameters that were unspecified but acquired from observed trajectories, keeping in mind that each UAV has its own tracking system, including Automatic Dependent Surveillance-Broadcast (ADS-B) technology and GPS positioning. The Kalman filter was used to achieve 91% accuracy in motion profiling, and the online module generated more classes over time. Kalman filtering with intermittent observation forms the backbone of this approach, allowing simultaneous estimation of the target vehicle's position, velocity, and acceleration using the relative position and velocity information provided by the radar system. Kalman filtering, which contains two sets of update equations, can be used to solve the state transition equations and obtain a reliable approximation of the state vector. The time update equations predict the next state vector (the position of the flying object). In the case of intermittent observation, only the time update equations are used when no measurement is available. The optimal state estimate of singular systems provides a solution for this system given an unknown input.
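A minimal numpy sketch of the scheme described here follows; it assumes a 1D constant-acceleration state [position, velocity, acceleration] and radar measurements of relative position and velocity, with all noise covariances illustrative. When a measurement is missing (intermittent observation), only the time-update equations run.

```python
import numpy as np

dt = 0.1  # tracking interval in seconds (illustrative)

# Constant-acceleration transition for x = [position, velocity, acceleration]
F = np.array([[1, dt, 0.5 * dt**2],
              [0, 1,  dt],
              [0, 0,  1.0]])
H = np.array([[1.0, 0, 0],   # radar reports relative position ...
              [0, 1.0, 0]])  # ... and relative velocity
Q = 1e-3 * np.eye(3)         # process noise (illustrative)
R = np.diag([1.0, 0.25])     # measurement noise (illustrative)

x = np.zeros(3)
P = np.eye(3)

def kalman_step(x, P, z=None):
    """One tracking cycle. z is the (position, velocity) measurement,
    or None for an intermittent observation, in which case only the
    time-update equations predict the next state of the flying object."""
    # Time update (always runs)
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:
        # Measurement update (runs only when the radar provides data)
        y = np.asarray(z) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(3) - K @ H) @ P
    return x, P

# Example: predict-only step (no radar return), then a full update.
x, P = kalman_step(x, P)                 # intermittent observation
x, P = kalman_step(x, P, z=(12.0, 3.5))  # measurement available
```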
In 2016-2021 [28,47], the authors proposed an efficient HO mechanism for UAV networks. UAV network services differ from typical networks since the HO process is carried out in 3D rather than 2D. To enhance network services, this technique adjusts the height and distance between UAVs. The ideal coverage selection technique is assessed using the seamless HO probability and Pf. To ensure that each UAV covers the same area, the height of each UAV must be adjusted to account for physical limitations. A seamless HO is possible in certain circumstances. Ps and Pf were modeled in numerous scenarios to examine how they change, and a large number of graphs were obtained for investigation and evaluation. The vertical distance between the overlapping sections becomes smaller as Pf becomes higher, and the overlapping area shrinks as Ps shrinks. Overall, this technique can help UAVs conserve energy: the battery lasts longer when frequent HOs are avoided. The proposed method can assist in optimizing a UAV network by determining the ideal overlapping region, and by assigning the same RSS to all UAVs, UAV interference can be reduced. Although the study did outline the preferred method, several factors must still be considered. The most serious issue is that adequate coverage for UAVs is difficult to achieve. If an obstacle prevents the UAV from flying at a lower altitude, for instance, the UAV's minimum height must be adjusted. The RSS level for moving UAVs must be raised to maintain smooth HO when the UAV is influenced by weather factors, such as wind. The system's dependability and throughput rate must also be considered.
In 2018-2019 [99,102], the authors recognized that providers cannot sacrifice ground-level performance for aerial users by changing the BS antenna angle. According to several studies, the BS will be able to spatially separate users in 3D space by using directional antennas and beamforming, allowing effective service of both ground and aerial users. Experts believe 5G is a good choice since it allows beamforming and high-throughput connections while remaining significantly flexible.
In 2018 and 2021 [47,103], it was found that UAVs are especially exposed to LoS propagation, which is required for mm-wave communications to work, so the use of mm-wave communications was suggested as a possible option. Larger path loss reduces inter-cell interference at mm-wave frequencies, while the small antenna aperture size allows a large number of antennas to be used in the antenna array. Arrays can be employed to provide beamforming, which compensates for the user's high path loss while simultaneously reducing interference. The application of mm-waves opens up a significant usage spectrum, and the previously mentioned high throughput can be easily achieved by using a large bandwidth.
In 2018, 2019, and 2020 [104][105][106], the authors used simulations to further investigate this problem, discovering two issues. The first is that high levels of interference make it impossible to maintain the connection and complete successful HOs, resulting in a high percentage of radio link and HO failures. The focus shifted to LTE-M, a technology that allows users to communicate at low SINR. The authors were able to reduce the number of radio connection and HO failures at the cost of an increased number of ping-pong HOs. The second issue is that the default HO strategy fails when aerial users fall within the nulls of the transmit antenna pattern. The volume must be kept low to avoid a radio connection failure. Fine-tuning parameters of the HO mechanism, such as the reaction time, was suggested to solve this problem. The introduction of 5G networks will alter people's communication habits. Several tests were conducted with a UAV connected to a 5G BS at a sub-6 GHz frequency; HO to the 4G network occurred automatically. The UAVs experienced more HOs than land users, lowering the overall throughput. Researchers believe this will be corrected with the deployment of more 5G BSs.
In 2019-2020 [107,108], the authors considered equipping UAVs with highly directional antennas. They suggested the use of 5G's massive MIMO capabilities since it allows the BS to geographically separate users while simultaneously producing nulls for other users to prevent interference.
In 2021 [109], the authors mainly focused on static users; however, new issues emerge when mobile circumstances are considered. Beam training and tracking become more difficult, resulting in significant amounts of overhead. However, this overhead is lower than expected in the simulations, allowing mobile users to be serviced at standard rates. Another issue with using mm-waves is the large Doppler frequency change, which is proportional to the center frequency. The individual contributions and their limitations can be summarized as follows:
1. To overcome the HO and Radio Resource Management (H-RRM) problem, a deep reinforcement learning approach was developed [84]. Limitation: it concentrated on the UAV as a user while ignoring UAV deployment as a flying BS.
2. Yun Chen: A unique HO framework was offered to provide competent mobility support and a reliable wireless network to UAVs supported by a terrestrial cellular network. A deep Q-learning strategy was created to optimize HO decisions, ensuring a robust network for UAV users using tools from deep reinforcement learning [110]. Limitation: it did not address the inclusion of 3D UAV mobility in the present framework.
3-4. As a continuation of their previous work, an efficient HO mechanism for UAV networks was proposed; since the HO mechanism is accomplished in 3D rather than 2D, the network services of UAVs differ from traditional networks [79]. Limitation: it used RSS as the key metric, but in practice the RSS value may vary with LoS and NLoS, so another metric, such as SINR, should be considered.
5. Mangina et al.: A system that combines an unmanned semi-autonomous quadrotor with a VR-based scheme was presented [111]. Limitation: experiments are confined to labs, so the challenge is to use UAVs as a UE to make assistive technology work better in the real world.
6. Bae: UAV telepresence is a powerful tool that many people may take advantage of, but existing robot technologies are largely for indoor use, since their mobility is sometimes difficult and problematic [112]. Limitation: additional development is needed to minimize weight and increase power consumption efficiency; furthermore, the tests must imitate real-world conditions.
7. Orsino et al.: A simulation was suggested to investigate the implications of HetNets mobility on Device-to-Device (D2D) and UAV-assisted mission-critical machine-type communications (mcMTC) in 5G [113]. Limitation: the heterogeneity of the equipment employed, such as UAVs, fiber, and masts, causes operational challenges that must be handled by the quickly expanding industrial IoT ecosystem.
8. Lee et al.: A fuzzy inference method was used to create an intelligent HO scheme for UAVs; the system makes HO decisions via a fuzzy inference process [49]. Future work: improving the HO decision for a variety of devices, covering UAV scenarios as both flying BS and UE.
9. Peng et al.: A cutting-edge machine learning method was offered to address the issues arising from UAV network requirements [75]. Limitation: unsupervised learning from raw data is a time-consuming procedure.
10. Sharma et al.: A small coverage area means two cells may overlap, causing co-channel interference, and more users near user-site APs make HO regulation difficult without excessive communication expense and latency.
11. Yoo et al.: The UAV Delivery Using Autonomous Mobility (UDAM) idea was presented for delivery services, as people now use e-commerce for nearly everything [105]. Limitation: limited evaluators from a few companies assessed the proposal, restricting the research's scope, and no existing notions were compared numerically.
12. Hu et al.: A deep learning-based system for trajectory prediction and an intelligent HO control approach were presented for UAV cellular networks [115]. Limitation: deep learning's predictive power demonstrates its future utility, but various challenges must be addressed, including spectrum, energy, and security management.
13. Nithin: A location module was built to improve Over-The-Top (OTT) application location services [116]. Future work: advanced machine learning could enable address discovery, navigation, and product delivery.
14. The use of mm-wave and Terahertz (THz) band communications in UAV networks was examined for the case where the transmitter and receiver are both mobile [117]. Future work: beam alignment frequency and directivity angle control in mm-wave/THz bands under mobility and weather conditions.
15. Euler et al.: The effects of changing radio environments and complications regarding UAV performance were analyzed [104]. Future work: avoiding low-SNR sites and using directional antennas for the UEs may improve the results.
16. A stochastic geometry-based UAV cellular network model was assessed; lately, UBSs have been receiving significant attention due to their versatility and wide-ranging applications [118]. Future work: mathematical analysis of complex mobility models like Random Waypoint (RWP) and Random Walk (RW).
17. Fakhreddine et al.: An experiment in a suburban setting was proposed to see how parameters influence cell selection and HO management when UAVs are employed as aerial UEs [85]. Limitation: a UAV is connected to a cell based solely on the RSRP value, ignoring other key values like SINR.
18. Banagar et al.: For UBS networks, a stochastic geometry-based mobility model was developed, since the mobility of wireless nodes has a significant impact on the performance of wireless networks [76]. Limitation: the flying BS serving ground UEs was restricted to a constant height; a dynamic height may be proposed in the future to reflect the real 3D movement of UAVs.
19. Iranmanesh et al.: A Delay-Tolerant Network (DTN) technique was suggested for UAV communication packet routing optimization [83]. The work discussed UAV issues and offered graphics to illustrate the conclusions while employing a unique packet-based technique; future improvements to this algorithm or others are possible.
20. Bai et al.: A new approach (dubbed the route-aware HO algorithm) was suggested to improve UAV communication system reliability [119]. Future work: improved estimation accuracy and granularity in presenting radio link quality can improve the findings further.
21. Amer et al.: The probability of coverage and the impact of various parameters on the overall performance of the proposed system were examined [120]. Limitation: although main and secondary lobes are used to evaluate antenna layouts, side lobes and nulls affect UAV-UE cell allocation and HO in practice.
22. Azari et al.: A machine learning-based technique was recommended for the HO mechanism and resource management of cellular-connected UAVs; when aerial and terrestrial users coexist in cellular networks, UAVs create significant interference to BSs, posing difficulty for terrestrial users' UL communication service [84]. Limitation: more DL work is needed to make this study's results relevant in the future.
Proposed Solutions
With the increase in connected devices and related services, concerns have emerged regarding mobility and connection. Several configurations have been suggested throughout the literature. In the following subsections, the most common configurations are discussed. The configurations are organized according to the problem that must be solved and the method that will be used to solve it. Figure 9 demonstrates the classification of the proposed solution.
RSS-Based Algorithms
RSS data are used in algorithm-based HO management systems. RSS-based computations are generally less complex, but they are also less precise. These calculations have the benefit of allowing multiple factors to be considered in the HO decision-making process, which decreases computation complexity while improving efficiency and precision. A method based on RSS was proposed to adjust the altitude of the UAVs and the distance between them, using Ps and Pf to evaluate the optimum computation range. To bring the UAVs' coverage to the same level, the height of each UAV can be adjusted while considering the physical constraints. This method manages the range of each UAV by adjusting the height and distance between them, and the Ps and Pf are calculated to evaluate the suggested configuration [78,83].
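As an illustration of this idea, the sketch below inverts a log-distance RSS model (an assumption; the survey does not give the exact link budget) to find the slant range at which the signal hits the serving threshold, then picks each UAV's altitude so that all UAVs project the same ground coverage radius.

```python
import math

def edge_distance(rss_1m, rss_threshold=-95.0, alpha=2.0):
    """Slant range at which the log-distance RSS model decays to the
    serving threshold: invert RSS(d) = rss_1m - 10*alpha*log10(d)."""
    return 10 ** ((rss_1m - rss_threshold) / (10 * alpha))

def height_for_target_radius(rss_1m, target_radius_m):
    """UAV altitude that yields the requested ground coverage radius,
    or 0 when the link budget cannot reach that radius at any height."""
    d = edge_distance(rss_1m)
    if d <= target_radius_m:
        return 0.0
    return math.sqrt(d**2 - target_radius_m**2)

# Equalize the ground footprint of three UAVs with different link
# budgets by assigning each one its own altitude (values illustrative).
for rss_1m in (-38.0, -42.0, -46.0):
    print(rss_1m, round(height_for_target_radius(rss_1m, 250.0), 1))
```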
Route-Aware HO Algorithm
The route-aware HO algorithm was proposed to make use of path data. Data from flight paths is used to optimize the network, reducing the number of unnecessary HOs and the likelihood of an incorrect HO. The consistency of the airborne channels and the pre-determined flight directions are exploited to manage mobility. In addition to the offline-based calculation, an online-based calculation was presented, in which HO is triggered as a result of an SINR computation. A final option entails regular setting updates. As a result, the algorithm can reduce computation complexity while speeding up execution in active wireless systems [63].
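A minimal sketch of the route-aware idea follows, under stated assumptions: a free-space RSS stand-in replaces the SINR computation mentioned above, the flight path and BS positions are hypothetical, and a hysteresis margin stands in for the periodic setting updates.

```python
import math

def plan_handovers(path, base_stations, hysteresis_db=3.0):
    """Offline route-aware HO planning: walk the known flight path,
    attach to the strongest BS at each waypoint, and switch only when
    a neighbour beats the serving BS by a hysteresis margin, which
    suppresses ping-pong HOs along the route."""
    def rss(p, bs):
        d = max(math.dist(p, bs), 1.0)
        return -40.0 - 20.0 * math.log10(d)  # illustrative free-space RSS

    serving = max(range(len(base_stations)), key=lambda i: rss(path[0], base_stations[i]))
    schedule = [(0, serving)]
    for t, p in enumerate(path[1:], start=1):
        best = max(range(len(base_stations)), key=lambda i: rss(p, base_stations[i]))
        if best != serving and rss(p, base_stations[best]) > rss(p, base_stations[serving]) + hysteresis_db:
            serving = best
            schedule.append((t, serving))
    return schedule

# Hypothetical 3D waypoints (x, y, altitude) and two BS positions:
path = [(0, 0, 100), (200, 0, 100), (400, 0, 100), (600, 0, 100)]
bss = [(0, 0, 30), (600, 0, 30)]
print(plan_handovers(path, bss))  # e.g. [(0, 0), (2, 1)]
```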
Delay-Tolerant Networking (DTN) Algorithm
A novel concept known as the DTN method (also known as Weighted Flight Path Planning (WFPP)) was proposed to optimize packet steering in UAV communication. The weight of a packet is determined by its requirements, the time it must survive, and the amount of energy it may consume. If a candidate path is shorter than the maximum distance the UAV can fly, the method generates it as a usable path; otherwise, the path is discarded [83].
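As an illustration of the described inputs, the hedged sketch below combines packet priority, remaining time to live, and energy consumption into a steering weight (the weighting itself is an assumption, since no formula is specified) and keeps a candidate path only if it fits within the UAV's maximum flight range.

```python
def packet_weight(priority, ttl_s, energy_cost_j,
                  w_priority=1.0, w_ttl=1.0, w_energy=1.0):
    """Illustrative WFPP-style weight: packets that are high priority,
    close to expiring, or cheap to forward are steered first."""
    return w_priority * priority + w_ttl / max(ttl_s, 1e-6) - w_energy * energy_cost_j

def path_is_usable(path_length_m, max_flight_range_m):
    """A candidate path is generated only if it fits the UAV's
    maximum flight range; otherwise it is discarded."""
    return path_length_m <= max_flight_range_m

# Example: urgent low-energy packet outranks a relaxed expensive one.
print(packet_weight(priority=2, ttl_s=5, energy_cost_j=0.1))
print(packet_weight(priority=1, ttl_s=60, energy_cost_j=0.8))
```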
Machine/Deep Learning Approaches
In recent years, machine learning and deep learning-based methods have been at the forefront of research. Thanks to advancements in the field of artificial intelligence, these ideas can ensure progress in HO decision-making while simultaneously reducing computational costs and addressing security concerns. Since the learned models do not require frequent overhauls, the precision and effectiveness of resource utilization can be improved.
In [84,112], the UE's movement properties are recorded using a hidden layer, and social pooling is used to capture the interaction between UEs. The four essential activities applied to complete a HO are measurement, reporting, decision, and execution. Unlike standard HO, machine learning is employed to predict future trends, and a confirmation method determines whether the user should be transferred to another aerial BS. The optimization problems (HO and H-RRM) are defined by machine learning strategies that aim to capture relationships at the temporal and spatial levels to create an appropriate HO choice. A buffer queue is used to characterize the data arrival rate, the allocated spectrum, and the interference from BSs. The modeled framework communicates through the air-to-ground channel, where the LoS path prevails. The optimization problem is then formulated, and the results are used to complete the decision-making process and refine the HOs.
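The cited works use deep reinforcement learning over richer state descriptions; as a self-contained stand-in, the tabular Q-learning sketch below captures the same decision structure: a state built from link observations, a stay/hand-over action, and a reward that trades link quality against a fixed HO signalling penalty. All constants are illustrative.

```python
import random

ALPHA, GAMMA, EPS, HO_PENALTY = 0.1, 0.9, 0.1, 0.5
Q = {}  # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy policy: 0 = stay on serving cell,
    1 = hand over to the strongest neighbour."""
    if random.random() < EPS:
        return random.choice((0, 1))
    return max((0, 1), key=lambda a: Q.get((state, a), 0.0))

def reward(link_quality, action):
    """Link quality minus a fixed signalling cost for executing a HO."""
    return link_quality - (HO_PENALTY if action == 1 else 0.0)

def update(state, action, r, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q.get((next_state, a), 0.0) for a in (0, 1))
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (r + GAMMA * best_next - old)

# Example state: (serving cell id, quantized RSS bucket)
s = ("cell_3", -9)
a = choose_action(s)
update(s, a, reward(link_quality=1.2, action=a), ("cell_3", -10))
```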
Future Research Directions
Despite the potential of combining UAVs with 5G methods, research into UAV-assisted wireless networks is still in its infancy, and several unanswered questions must be further investigated. This section highlights the most promising topics for future work in the field.
Mobility Management
In future HetNets, managing UAV mobility will be a critical factor that requires thorough investigation. UAV mobility poses a great challenge, since UAVs move rapidly in 3D. The use of mm-wave bands in 5G and 6G systems is also a significant issue that adds to UAV mobility challenges. The massive expansion of UAVs and mobile connections will create new problems, since coordinating them at scale will be a significant task that necessitates an efficient arrangement. The mobility management of connected UAVs must be properly addressed in future systems.
Energy Charging Efficiency
Energy constraints are a significant obstacle in any UAV communication scenario. Advancements in battery technology, such as improved lithium-ion batteries and hydrogen fuel cells, have enhanced energy capacity and extended flight durations, as has the use of renewable energy sources such as solar power. The efficacy of energy charging, however, is significantly reduced by long transfer distances and irregular energy access. To improve charging productivity, novel energy transmission enhancements (such as energy beamforming using multi-antenna techniques and distributed multi-point wireless power transfer (WPT)) are of great interest. More importantly, addressing mobility management will improve power consumption efficiency: the goal of UAV mobility management is to reduce unnecessary HO processes, which in turn reduces the HO rate and handover signaling, saving power and increasing energy efficiency.
UAV-to-UAV and Satellite-to-UAV Communication
When using a UAV as a communication terminal, the Doppler effect, pointing error effect, and atmospheric turbulence effect should all be carefully considered. To receive the frequency-shifted optical signals caused by the Doppler effect, the bandwidth of the optical filter at the receiver should be increased. When analyzing UAV-satellite channels, attention should be paid to the optimization effect in terms of cost efficiency. The receiver diameter design is related to the payload of a UAV in the DL and the restrictions on a satellite in the UL, whereas the transmission power design is related to the payload of a UAV in the UL. Practical effects such as the Doppler effect, atmospheric turbulence, and pointing error must all be taken into account. It is also important to note that a swarm of UAVs forms a multi-hop network that assists ground wireless devices in transmitting and receiving packets, each of which contains a direction, in order to provide communication services over a relatively large area. Due to high-speed mobility and the need to maintain close communication links with ground users, links with nearby UAVs may be broken frequently, and standard routing protocols will not work with FANETs in this scenario. As a result, mastering UAV flight control may be difficult, and when multiple UAVs collaborate, avoiding collisions becomes a critical issue for UAV security. Detailed propagation characterizations are therefore required in modern satellite-to-UAV channel models [121].
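To make the filter-bandwidth remark concrete, the short sketch below computes the first-order Doppler shift f_d = (v / c) * f_c; the 1550 nm optical carrier and the relative speed are illustrative assumptions, not values from the text.

```python
def doppler_shift_hz(relative_speed_mps, carrier_hz):
    """First-order Doppler shift f_d = (v / c) * f_c."""
    return relative_speed_mps / 3e8 * carrier_hz

# Illustrative UAV-satellite optical link: a 1550 nm carrier (~193 THz)
# and a 7.5 km/s relative speed give a shift of roughly 4.8 GHz, so the
# receiver's optical filter must be wide enough to pass +/- f_d.
f_c = 3e8 / 1550e-9
print(doppler_shift_hz(7500.0, f_c) / 1e9, "GHz")
```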
Interaction between Different Segments
Using new methods to provide continuous integration between space-based networks, air-based networks, and the ground cellular network is a key challenge for the integrated space-air-ground network. It is crucial to incorporate several key factors into various cases. Cross-layer protocol designs are required to ensure interface consistency. It is also essential to provide a flexible and adaptive interface that allows the various parts to interact to achieve various advantages, an example being consistent data exchange and information transfer between different systems. Because of the mobility UAVs offer as they move in 3D space, the expanding range of services may necessitate the use of UAVs as gateways to numerous systems. It is critical to prepare the interacting components in such a complex system to ensure consistent interface quality.
Massive MIMO
Massive MIMO will revolutionize the way UAVs are used in communication networks. Massive MIMO accommodates several factors, such as the UAVs' exceptional mobility. One scenario for massive MIMO under mobility is that a large number of antennas at the BS serve multiple single-antenna terminals with very high capacity at the same time. As stated in [87], UAV deployment should not face major limits, which is what pilots prefer. Deploying large antenna arrays for UAVs is a target that must be achieved to implement 5G-connected UAVs in fully loaded networks without affecting performance for existing ground users. Several studies conducted on MIMO are cited here to provide researchers with the relevant knowledge [13,14,[122][123][124][125][126][127][128].
Synergy of UAVs and IoT Systems
The Internet of UAVs (IoUAVs) is the dynamic integration of current IoT and UAVs. IoUAVs are a promising arrangement for creating the future IoT environment in which people, UAVs, and IoT gadgets are all harmoniously connected. This allows omnipresent data sharing and fine-granularity coordination among a swarm of UAVs, thanks to unique features such as quick deployment, simple programmability, controllable mobility, and flexibility. One technological contradiction is that while linking everything that can be connected brings numerous benefits to IoUAV applications, these operations necessitate significant energy capacity. UAVs are limited by their size, weight, and power (SWAP). SWAP limits have a direct impact on each UAV's maximum operating altitude, communication, coverage, computation, and endurance capabilities, and IoUAVs are no exception. As a result, there is an urgent need to develop this aspect of IoUAVs to provide seamless mobility and connectivity. In [129][130][131], the authors employed a UAV that was dispatched to collect data from IoT devices under stringent time limitations. The total number of served IoT devices was maximized by optimizing the UAV trajectory and wireless resource allocation simultaneously. The authors proposed a UAV trajectory planning algorithm that addresses the resulting mixed-integer, nonconvex, and difficult problems.
Full Duplex Communication
In [114][115][116][132], recent advancements in electronics, sensors, and communication systems have made the use of small UAVs possible for many applications. However, a single small UAV is often insufficient; multiple UAVs can create a system that goes beyond the limitations of a single small UAV. FANETs can expand connectivity and communication range in infrastructure-less areas due to their mobility, lack of central control, self-organization, and ad-hoc nature. FANETs can provide a rapidly deployable, flexible, self-configurable, and relatively low-cost network in catastrophic situations; however, connecting multiple UAVs in ad-hoc networks is difficult. If some UAVs are disconnected during the operation of a FANET due to weather conditions, they can still reach the network via other UAVs. Furthermore, ad-hoc networking among UAVs can solve complications such as short range, network failure, and limited guidance that arise in a single-UAV system. Although such distinguishing characteristics make FANETs an appropriate solution for a variety of scenarios, they also introduce several challenging issues, such as the communications and networking of multiple UAVs. This level of coordination requires a reliable communication architecture and routing protocols on highly dynamic flying nodes. Military applications, disaster response, and other uses for FANETs are just a few examples. In a FANET, UAV nodes are equipped with cameras and sensors that allow them to communicate and share data. The engineering of FANETs was further investigated to propose a new routing protocol, and a clustering algorithm was also suggested to accelerate the operation of UAV systems.
Security and Privacy
The integrated network may be vulnerable to malicious attacks due to open connections and congested topologies that span a mission-critical range via purposeful jamming or disruptions. Since UAVs are constantly unattended, they can be easily seized or assaulted. Security is a critical issue in UAV-assisted systems. A secure and lightweight component is required to prevent malicious actions such as eavesdropping, man-in-the-middle attacks, etc. To address cyber-physical security gaps in UAV communication systems, a zero-sum network interdiction game was created. The system considers the case of a vendor and an attacker trying to move UAVs from one point to another, and this game can successfully ensure the cyber security of the UAV delivery system. Fake-signal solutions were also suggested to keep UAVs safe in cellular-connected applications: a spoofer strategy can be used to create fake GPS signals that are almost indistinguishable from original GPS signals, making it more difficult for cyber attackers to hack into the system. Within the vast scope of space-air-ground coordinated systems, Software-Defined Networking (SDN) controllers are capable of overseeing assets and controlling operations. It is critical to protect SDN controllers from various cyber-attacks that allow adversaries to wiretap data and control signals transmitted through UAV framework radio connections. Cyber-attacks on UAV frameworks have been documented, and cyber-security is still a major issue in the real-world application of UAVs. Suitable tactics and counter-mechanisms must be planned ahead of time to counteract dangerous cyber-attacks. The important point to emphasize here is that the mobility of UAVs must be secured, which keeps the routing positions under network management. Furthermore, the deployment points of UAVs deployed as UEs or BSs must be secured to prevent attacks aimed at stealing users' communication data.
Conclusions
Due to rapid technological advancements, UAVs have grown increasingly popular, attracting increasing attention in the field of wireless networks. Numerous articles on UAV-based network architectures have been included in the literature review. Particular attention was given to the development of HO for UAVs as well as the expansion of networks that make use of UAV technology. Several aspects of HO were considered by examining various available studies. Several key research problems, including 3D deployment and energy efficiency, were discussed. New methods to resolve the mentioned issues have been introduced, including algorithm-based learning and experimental works. The challenges, potential solutions, and future research directions were examined. The fundamental problem of UAV-connected wireless networks is their 3D mobility. This study provides comprehensive information on the shift from standard 2D mobility to 3D mobility in 5G and 6G networks. A conceptual explanation of numerous elements was also highlighted to aid in identifying the optimum HO decision.
"Engineering",
"Computer Science",
"Environmental Science"
] |
Heel riser height and slope gradient influence the physiology of ski mountaineering—A laboratory study
In ski mountaineering, the goal is to reach the top of a mountain by muscle force alone. The specific equipment (a flexible boot, a binding fixed only at the toe, and a skin on the ski to prevent slipping backwards) enables the skier to move uphill ergonomically, and the heel part of the binding offers a special possibility for adaptation. The so-called riser height adjusts the heel standing height and can be set to individually preferred positions. General recommendations suggest using lower heel support on flat ascents and higher heel support on steep ascents to maintain an upright posture and lower the strain. Still, it remains unclear whether the riser height setting affects the physiological response during ski mountaineering. This study was designed to investigate the effects of riser height on the physiological response during indoor ski mountaineering. Nineteen participants walked on a treadmill with ski mountaineering equipment, with the three available riser heights (low, medium, and high) applied in randomized order at 8%, 16%, and 24% gradient. Results show that global physiological measurements such as heart rate (p = 0.34), oxygen uptake (p = 0.26), and blood lactate (p = 0.38) were not affected by changes in riser height, whereas local measurements of muscle oxygen saturation were. Additionally, comfort and rating of perceived exertion were sensitive to changes in riser height. These results indicate differences in local and perceived parameters, while global physiological measurements did not change. The results are in line with existing recommendations but need to be confirmed in an outdoor setting.
Introduction
Ski mountaineering (skimo) is a multi-faceted winter sport in which equipment and environment play key roles in performance (Bortolan et al., 2021). Like many other sports, skimo exists primarily in two domains, as a recreational activity and as a competitive sport. While racing strives to optimize performance with light equipment (Bortolan et al., 2021), recreational skimo combines alpine and Nordic skiing characteristics and offers skiers the chance to enjoy the outdoors in a unique and adventurous way. The physiological strain of skimo can be high, and racing has been described as one of the most strenuous endurance sports (Duc et al., 2011; Praz et al., 2014; Lasshofer et al., 2021; Kayser and Mariani, 2022), with high relevance of performance testing and analysis (Menz et al., 2021; Schöffl et al., 2022; Zimmermann et al., 2022).
Crucial to the success of a ski tour are not only the planning of the tour and the fitness level, but also the equipment used. For example, heavier equipment can increase the energy cost by 1.7% for each percent of bodyweight added at the ankles (Tosi et al., 2009). This added weight has a significant impact on skimo racing performance (Bortolan et al., 2021) but appears to be negligible on recreational skimo tours. A wide range of skis, bindings, and boots are available; however, all boot and binding systems have in common a walking mode and a skiing mode. While the skiing mode compares well with alpine skiing systems, the walking mode clearly differs: the heel is not connected to the binding and the boot pivots at the toe. Pivoting and a flexible boot cuff allow walking with an increased range of motion of the lower limb joints compared to the skiing mode. A very specific feature is the adjustable heel support offered by the rear part of the binding. This so-called riser height makes it possible to alter the height of the heel support while climbing according to individual preference. General recommendations suggest using a higher riser height at steeper slope gradients and lower riser heights at flatter slope gradients to maintain an upright posture and reduce calf muscle strain (Vives, 1999; Winter, 2001; House et al., 2019). Biomechanical analysis showed a larger range of motion in the lower limb joints with a lower riser height, accompanied by a lower step frequency but greater step length; mechanical efficiency of skimo was not influenced by the application of different riser heights (Lasshofer et al., 2022).
Nevertheless, it remains unclear whether these recommendations are tenable with respect to physiological variables and, if so, whether certain slope gradients can be linked to specific riser heights, as suggested by Sunde et al. (2021). Therefore, this study investigated the effect of riser height on physiological and subjectively rated variables during treadmill skimo. We hypothesized that applying higher riser heights at steeper slope gradients and lower riser heights at flatter slope gradients benefits global physiological variables (heart rate, blood lactate, and oxygen consumption), local physiological variables (muscle oxygen saturation and electromyography signal), perceived exertion (Borg scale), and perceived comfort.
Participants
Participants were recruited by public invitation and had to be between 18 and 50 years old and practice skimo regularly during the winter season, but not participate in skimo races. Only male participants were included owing to the availability of male-specific equipment. Nineteen individuals who matched these criteria participated in the study. Anthropometric data and habitual training load are presented in Table 1. All participants volunteered and gave written informed consent. The study was approved by the ethical committee of the University of Salzburg (EK-GZ: 36/2018).
Experimental design
The study consisted of two laboratory sessions for each participant. Both sessions were performed on a h/p/cosmos Saturn 300 cm × 125 cm treadmill (h/p/cosmos sports and medical GmbH, Germany), with participants equipped at both sessions with a pair of Atomic Backland 78 skis (169 cm), an Atomic Backland Tour binding (riser height: low, 0.0 cm; medium, 3.0 cm; high, 5.3 cm), and Atomic Backland Sport boots (Atomic Austria GmbH, Austria). Participants used standardized poles, adjustable in length and with an adjustable hand strap; individual pole length was kept consistent for all testing sessions. (Table 1 abbreviations: SD, standard deviation; BMI, body mass index; VO₂max, maximal oxygen uptake; HR, heart rate; v_peak, peak velocity at the end of the ramp protocol.)
There were at least 72 h and at most 2 weeks between the first and second sessions, and the preparations (training, food intake, and time of testing) had to match between sessions. Furthermore, participants were asked not to train within 24 h prior to each session and to abstain from caffeine and food for 5 h and 2 h, respectively, prior to their exercise session.
Protocol-Performance test
During the first laboratory session, participants performed a specific performance test using touring skis on the treadmill to estimate their physiological fitness and to become accustomed to the movement pattern on the treadmill. The test included a standardized warm-up of 5 min at 2.6 km·h⁻¹ and a gradient of 8%, with the riser height set at the medium position. After the warm-up, the measurement systems (ergospirometry system and heart rate [HR] sensor), described in detail later, were switched on, and the incremental test protocol at a constant 16% gradient was performed, starting at 2.6 km·h⁻¹. After every 4-min interval, there was a 30-s break to take a lactate (La) sample before the speed increased by 0.4 km·h⁻¹. The step test was performed until a La level of ≥4 mmol·L⁻¹ was reached. After the last interval, participants had a 3-min break, during which the gradient was changed to 24%, and the ramp test protocol started. This protocol started at 2.6 km·h⁻¹, and the speed increased every minute by 0.4 km·h⁻¹ until participants reached their peak speed. The step test served to determine the speed for the second testing session, defined as the individual speed at a La value of 1.5 mmol·L⁻¹ (v_experiment), which was 4.0 ± 0.5 km·h⁻¹ on average (minimum 3.1 km·h⁻¹, maximum 5.2 km·h⁻¹). The ramp protocol was conducted to obtain maximal oxygen uptake (VO₂max), maximal HR, and peak skimo velocity (Table 1).
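The paper does not state how v_experiment was derived from the step-test samples; a minimal sketch, assuming linear interpolation between the two lactate values bracketing the 1.5 mmol·L⁻¹ target (the data values below are invented):

```python
# Hypothetical sketch: deriving v_experiment (speed at 1.5 mmol/L blood
# lactate) from step-test data by linear interpolation. Values invented.
import numpy as np

speeds_kmh = np.array([2.6, 3.0, 3.4, 3.8, 4.2, 4.6])   # 0.4 km/h increments
lactate    = np.array([0.9, 1.0, 1.2, 1.4, 1.7, 2.3])   # mmol/L after each step

# np.interp expects monotonically increasing x values; here we interpolate
# speed as a function of lactate around the 1.5 mmol/L target.
v_experiment = np.interp(1.5, lactate, speeds_kmh)
print(f"v_experiment = {v_experiment:.1f} km/h")   # -> 3.9 km/h for these data
```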
Protocol-Experimental test
The second session was the actual experimental session. In addition to the ergospirometry system, the HR sensor, and La analysis, the second test setting included surface electromyography (EMG), near-infrared spectroscopy (NIRS), and subjective scales (comfort scale and rating of perceived exertion). After a standardized warm-up of 5 min at v_experiment with the medium riser height and a gradient of 8%, the participant sat down on a chair placed on the treadmill; the measurement systems were then switched on, and 5 min of resting measurement followed. The test consisted of three 15-min walking blocks, one per gradient (8%, 16%, and 24%). Each 15-min block was split into three 5-min intervals, in which the three riser height positions were applied in randomized order. The order of the gradients was the same for every participant, starting at 8%, followed by 16%, and ending at 24%. For data analysis, only data from the last minute of every 5-min interval were used. Between the 5-min intervals, the treadmill was stopped for 1 min to change the binding setting, take a blood sample, and ask the participant for ratings on the subjective scales. Between the 15-min blocks there was a 2-min break to additionally change the gradient.
Measurements
For the first test, only physiological data were assessed: HR was measured by a Wahoo Tickr HR belt (Wahoo Fitness, California, United States) and stored in the portable metabolic system (Cosmed K5, Cosmed, Rome, Italy), which was set to breath-by-breath mode. The mobile gas analyzer was used to maximize freedom of movement, even though the measurements took place indoors. Participants breathed through a properly sized oronasal face mask connected to a turbine flowmeter. The system was calibrated before every test in accordance with the manufacturer's instructions. Fresh air circulation was ensured by open windows and an additional fan in front of the ski mountaineer to minimize accumulation of exhaled air around the participant.
La samples were collected before the test, after every step, and 1, 3, and 5 min after volitional exhaustion. Blood samples of 20 μL were obtained from the earlobe and analyzed by an EKF-diagnostics Biosen C-line system (EKF-diagnostic GmbH, Germany).
Electromyography data were collected from the rectus femoris (RF), biceps femoris (BF), and medial gastrocnemius (GAS) of the left leg according to SENIAM recommendations (Stegeman and Hermens, 2007). Sensor sites were shaved and the skin cleaned with isopropyl alcohol wipes. Sensors were attached to the skin with double-sided tape, and surgical tape was used to secure them in place. Data were recorded on a portable data logger until processing (Trigno Personal Monitor, Delsys, Inc., Boston, MA). The EMG signal was sampled at 1926 Hz and filtered with a second-order Butterworth band-pass filter (20-500 Hz; Delsys, Inc., Boston, MA, United States), and a 125-ms interval was used for the RMS calculation. Because some of the muscles are biarticular and some act on a triaxial joint, relative data are reported as a percentage of the total voltage range within each subject for better data presentation. The average voltage from each step cycle was divided by the voltage range to give a percentage; thirty step cycles from each interval were used in the calculation, with the average used in the data analysis.
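A minimal sketch of the EMG processing chain described above (second-order 20-500 Hz Butterworth band-pass at 1926 Hz sampling, followed by a 125-ms moving RMS); the input signal here is synthetic, and the zero-phase filtering choice is an assumption:

```python
# Sketch of the described EMG processing: 2nd-order Butterworth band-pass
# (20-500 Hz) at fs = 1926 Hz, then a 125 ms moving RMS. Signal is synthetic.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1926.0                                   # sampling frequency (Hz)
t = np.arange(0, 5, 1 / fs)
emg = np.random.randn(t.size) * 0.1           # synthetic stand-in signal

b, a = butter(2, [20, 500], btype="bandpass", fs=fs)
emg_filt = filtfilt(b, a, emg)                # zero-phase band-pass filtering

win = int(round(0.125 * fs))                  # 125 ms RMS window (~241 samples)
kernel = np.ones(win) / win
rms = np.sqrt(np.convolve(emg_filt**2, kernel, mode="same"))

# Normalization as in the text: express the mean activity as a percentage
# of the subject's total voltage range.
rel = 100.0 * rms.mean() / (rms.max() - rms.min())
print(f"mean RMS as % of range: {rel:.1f}%")
```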
NIRS sensors (Idiag Moxy 5; Idiag AG, Switzerland) were placed on the right side of the body, matching the muscles used for EMG (RF, BF, and GAS). Data were stored on the internal memory as well as on the portable metabolic system, which allowed for data synchronization. The sensors were fixed with self-adhesive pads and wrapped with a bandage to prevent corrupted data due to light interference. Data were recorded at 0.5 Hz and are displayed as desaturation (DS) from baseline (BL). The measured tissue saturation index (TSI) was normalized intra-individually based on general recommendations (Perrey and Ferrari, 2018). The BL was taken from the last minute of the resting period right before the test started, and the DS calculation follows the equation:
DS = (TSI − BL) / BL
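A minimal sketch of this normalization, with invented TSI values (only a few of the 0.5 Hz samples are shown):

```python
# Sketch of the NIRS desaturation (DS) normalization: DS = (TSI - BL) / BL,
# with BL taken as the mean TSI of the last minute of rest. Values invented.
import numpy as np

rest_tsi = np.array([68.0, 67.5, 68.2, 67.9])      # resting TSI samples (%)
exercise_tsi = np.array([61.0, 58.5, 57.2, 56.8])  # last minute of an interval

bl = rest_tsi.mean()                               # baseline (BL)
ds = (exercise_tsi - bl) / bl                      # negative = desaturation
print(f"BL = {bl:.1f} %, mean DS = {ds.mean():.3f}")
```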
To assess subjective perception, two scales were used (Grant et al., 1999): the Borg 6-20 scale (Robertson et al., 1998) for the rating of perceived exertion (RPE), and a comfort scale (1-10), where 1 describes a very uncomfortable situation and 10 a very comfortable situation, which has already been proven valid and reliable in other contexts (Mündermann et al., 2002; Lee et al., 2013; Yusof et al., 2019). The comfort scale was applied separately for the comfort of the lower body and the comfort of the upper body.
Energy cost of linear and vertical displacement was calculated similarly to Praz et al. (2016b), dividing the energy expenditure (J·s⁻¹) obtained from the ergospirometry system by the system mass (body mass + 5 kg of gear and measurement systems) and the velocity (m·s⁻¹). For the energy cost of linear displacement, velocity represents the walking velocity; for the vertical energy cost, velocity represents the vertical displacement velocity.
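A minimal sketch of the two energy-cost calculations following the division described above; the numbers are illustrative, and treating belt speed as acting along the incline (so that the vertical component is v·sin(atan(gradient))) is an assumption:

```python
# Sketch of energy cost of linear and vertical displacement as described:
# EC = metabolic power / (system mass * velocity). Numbers are illustrative.
import math

power_w = 900.0                    # metabolic energy expenditure (J/s)
body_mass = 75.0                   # kg
system_mass = body_mass + 5.0      # +5 kg gear and measurement systems
v_belt = 4.0 / 3.6                 # 4.0 km/h treadmill speed in m/s
gradient = 0.16                    # 16% treadmill gradient (rise/run)

# Assumption: belt speed acts along the incline, so the vertical component
# is v_belt * sin(atan(gradient)).
v_vert = v_belt * math.sin(math.atan(gradient))

ec_linear = power_w / (system_mass * v_belt)    # J/(kg*m) of distance walked
ec_vertical = power_w / (system_mass * v_vert)  # J/(kg*m) of altitude gained
print(f"EC_linear = {ec_linear:.2f} J/(kg*m), "
      f"EC_vertical = {ec_vertical:.1f} J/(kg*m)")
```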
Statistical analysis
For statistical calculations, SPSS Version 27 (IBM Corporation, United States) was used. For comparison of the different settings, a repeated-measures ANOVA was used to calculate the main effects of gradient and riser height on each dependent variable, as well as their interaction. Whenever sphericity was not given (Mauchly test, p < 0.05), the Greenhouse-Geisser correction was applied for within-subjects effects. When a significant F value was found, Bonferroni's test was used for pairwise comparisons. The alpha level for significance was set at p < 0.05, and partial eta squared (ηp²) is reported as the effect size.
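A hedged sketch of an equivalent analysis in Python, assuming the pingouin package rather than the authors' SPSS workflow; the long-format data frame below is synthetic, purely for illustration:

```python
# Hypothetical sketch of the two-way repeated-measures ANOVA described
# above, using the pingouin package (not the authors' SPSS workflow).
# The data frame is synthetic, purely for illustration.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(42)
rows = [
    {"subject": s, "gradient": g, "riser": r,
     "hr": 110.0 + 25.0 * gi + rng.normal(0.0, 5.0)}
    for s in range(1, 20)                              # 19 participants
    for gi, g in enumerate(["8%", "16%", "24%"])       # gradient levels
    for r in ["low", "medium", "high"]                 # riser heights
]
df = pd.DataFrame(rows)

# Two within-subject factors; pingouin reports Greenhouse-Geisser
# corrected p-values when sphericity is violated.
aov = pg.rm_anova(data=df, dv="hr", within=["gradient", "riser"],
                  subject="subject")
print(aov)

# Bonferroni-adjusted pairwise comparisons (named pairwise_ttests in
# older pingouin versions).
post = pg.pairwise_tests(data=df, dv="hr", within=["gradient", "riser"],
                         subject="subject", padjust="bonf")
print(post)
```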
Results
Global physiological responses to gradient and riser height are shown in Table 2. HR, VO₂, and La measurements revealed an influence of gradient (p < 0.001), with the physiological response increasing with gradient. However, neither an effect of riser height (HR, p = 0.34; VO₂, p = 0.26; La, p = 0.38) nor an interaction effect of gradient and riser height (HR, p = 0.3; VO₂, p = 0.32; La, p = 0.73) was found.
Energy cost of linear (Figure 1) and vertical (Figure 2) displacement demonstrated a main effect of gradient (p < 0.001), but not of riser height. With steeper gradients, the energy cost of linear displacement increased, while the energy cost of vertical displacement decreased. An interaction effect of gradient and riser height was found for linear (p = 0.016) and vertical (p = 0.009) energy cost: an increase in energy cost from the low to the high riser height was found within the 8% gradient, whereas no difference was found within the 16% and 24% gradients.
Muscle oxygen saturation (Table 2) of all three muscles (GAS, RF, BF) responded to steeper gradients with an increase in desaturation (p < 0.001). GAS (p < 0.001) and BF (p = 0.003) revealed a main effect of riser height, with desaturation increasing from the high to the low riser height. Additionally, GAS demonstrated an interaction effect of gradient and riser height (p < 0.001), with a smaller difference in desaturation between the low and high riser heights at the 24% gradient than at the 8% gradient. RF (p = 0.71) and BF (p = 0.42) did not reveal an interaction effect.
The RPE scale (Table 2) showed an effect of gradient (p < 0.001), with values rising from the 8% to the 24% gradient, and an effect of riser height (p = 0.012), with values rising from the low to the high riser height. The interaction effect of gradient and riser height (p = 0.001) is a consequence of different relations within the gradients: while the increase in RPE from the low to the high riser height was linear at the 8% gradient, an asymmetric U-shape was found at the 16% and 24% gradients. The comfort scale applied to the upper body showed a main effect of gradient (p < 0.001), but no effect of riser height (p = 0.52). Comfort of the lower body revealed a main effect of gradient (p = 0.016) and of riser height (p < 0.001). Both upper and lower body comfort revealed an interaction effect of gradient and riser height (p < 0.001): while comfort decreased from the low to the high riser height at the 8% gradient, it increased from the low to the medium riser height at 16% and 24%, with no or only minor changes between the medium and high riser heights.
Discussion
The aim of this study was to investigate the influence of different riser heights and gradients in skimo on physiological and subjective variables.
Given the setup of constant walking velocity, an increase in strain with steeper gradients was to be expected; this is evident in the data and reflected in the increased energy cost. None of the three global variables detected changes related to the three available riser heights and therefore showed no difference between the low, medium, and high settings. Energy cost of linear displacement also increased with the change in gradient from 8% to 24% (Figure 1), in line with the work of Praz et al. (2016b). Although there was no main effect of riser height, a significant interaction effect of gradient and riser height with a large effect size revealed no difference with respect to riser height at 16% and 24%, but a difference at the 8% gradient. At the 8% gradient, the energy cost was greater for the high riser height than for the low riser height, i.e., an advantage of the low riser height. The greater step length and higher range of motion of the ankle and knee joints with the low riser height (Lasshofer et al., 2022) thus coincided with a more efficient linear displacement, although local measurements such as NIRS of GAS suggest increased muscle use. Since the typical goal of ski mountaineering is to reach the top of a mountain, the energy cost of vertical displacement is a decisive metric in skimo. The lowest vertical energy cost was found at 24%, followed by the 16% and 8% gradients (Figure 2), which suggests that choosing a steeper gradient, if possible, might save up to 50% of energy per vertical meter climbed when comparing the 24% with the 8% gradient. These results are supported by others (Praz et al., 2016a; Praz et al., 2016b; Lasshofer et al., 2022) who suggested steeper gradients to be advantageous compared with flatter gradients. Although evidence is lacking, there must be a functional threshold in the natural environment of skimo above which steeper is no longer better, owing to human capabilities, snow conditions, or equipment limits (e.g., skis starting to slip backward). In this study, riser height did not reveal a main effect on vertical energy cost, although a trend suggesting an advantage of the low riser height is evident (p = 0.065; ηp² = 0.14). A detailed look at vertical energy cost within the gradients, based on the interaction effect of gradient and riser height (p = 0.009; ηp² = 0.22), demonstrated the advantage of the low riser height at the 8% gradient, while no difference was found at 16% and 24%. In contrast to these variables, the subjective scales revealed not only an effect of gradient but also of riser height. RPE, on the one hand, confirms the results of the global physiological variables with a main effect of gradient (p < 0.001; ηp² = 0.9); on the other hand, it was also affected by riser height (p = 0.012; ηp² = 0.22). Consequently, the global physiological measurements are not in line with perceived exertion; whether these global measurements are not sensitive enough to detect changes, or muscular compensation mechanisms keep the overall strain constant, remains unclear. Similar to energy cost, the low riser height was rated as the least strenuous option at the 8% gradient, while the medium riser height was rated least strenuous at the 16% and 24% gradients. The comfort scale was divided into upper body and lower body with the aim of differentiating areas of influence.
While upper and lower body comfort were affected by the gradient, with a reduction in comfort from 8% to 24%, only lower body comfort showed a main effect of riser height, with a large effect size (ηp² = 0.42). Specifically, the high riser height was clearly the least comfortable at 8%, which is in line with energy cost and RPE, followed by the medium and the low riser heights. In contrast, at the 16% and 24% gradients the medium riser height was rated the most comfortable, followed by the high riser height, with the low riser height being the least comfortable at both gradients. This partly matches our hypothesis that steeper gradients require a higher riser height, although we could not demonstrate the highest riser height to be the most comfortable at the steepest gradient.
Since lower body kinematics are influenced by changes in riser height (Lasshofer et al., 2022), local muscular responses were investigated as well. Muscle oxygen desaturation represents the response of individual muscles to exercise. While, once more, all three analyzed muscles confirmed the greater strain or enhanced usage at steeper gradients (p < 0.001) through greater oxygen desaturation, GAS and BF also responded to changes in riser height. A higher riser height was shown to support the calf muscles, and muscle oxygen desaturation in GAS was smaller when a higher riser height was applied. Similar results were found for the BF, with the low riser height resulting in the greatest desaturation and strongest EMG signals, indicated by a strong trend in the effect of riser height (p = 0.069) and a large effect size (ηp² = 0.14) across all gradients. Nevertheless, probably also because of the lowest overall strain, at the 8% gradient the low riser height, which showed the greatest desaturation for GAS and BF, was rated the most comfortable and perceived as the least strenuous. EMG and NIRS signals of the RF showed neither an effect of riser height nor an interaction effect of riser height and gradient. This is consistent with hip joint kinematics not being influenced by changes in riser height (Lasshofer et al., 2022), since the RF is a biarticular muscle also responsible for hip movement.
Though the EMG signal and muscle oxygen desaturation of GAS and BF suggest preferring the high riser height at 24%, RPE and the comfort scale favor the medium riser height. In other sports, the NIRS signal was shown to be affected by cadence, with greater oxygen saturation levels generally found at higher cadence (Zorgati et al., 2013; Steimers et al., 2016). In our specific case of skimo, the trend of less desaturation with higher cadence (Lasshofer et al., 2022) is also evident. However, since our intervention did not manipulate cadence directly, but rather cadence adapted to changes in riser height and thereby altered whole-body kinematics, cadence cannot be assumed to be the only reason for changes in muscle desaturation; at the same time, we also found a reduction in ankle joint range of motion, which can likewise be associated with less desaturation. Because global physiology was not affected by riser height, we suggest applying the riser height that is most comfortable and perceived as least exhausting, since the evident differences in local muscular strain were apparently compensated elsewhere. This most comfortable choice is supported at the 8% gradient by energy cost, and at steeper gradients by the EMG and NIRS analyses.
Limitations
This study was conducted in a laboratory setting, which allowed for strict standardization of testing and provided consistency. Although regular skimo equipment was used, walking on the treadmill might differ somewhat from walking on snow, and the maximum gradient was limited to 24%. Other authors have reported similar results comparing on-snow and treadmill skimo for the available gradients (Tosi et al., 2010; Praz et al., 2016a). Unfortunately, we were not able to extrapolate the results to steeper gradients (owing to the maximal possible gradient of the treadmill), which can be found in outdoor skimo. We can only hypothesize that the high riser height gains more relevance in steeper terrain, but this must be tested in another study.
We decided to apply a constant speed over all three tested gradients, which was tested and defined during the first session. Pilot testing prior to the study showed that participants were not able to walk at a self-selected speed on the treadmill, especially when an uncomfortable riser height was applied.
Conclusion and practical application
In conclusion, even though global physiological parameters were similar between riser heights, local measurements of NIRS and EMG, perceived exertion, and comfort differed between conditions. Supported by energy cost, we demonstrated a benefit of the low riser height at the 8% gradient. In general, the medium riser height showed benefits at 16%, and at 24% the medium and high riser heights clearly outperformed the low riser height, with varying strengths. Based on the parameters and gradients analyzed in this study, it can be concluded that only the low and medium riser heights provided a benefit to the skiers, as supported by the subjective scales, local measurements, and energy expenditure.
Data availability statement
The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by the Ethics Committee of the University of Salzburg (Kapitelgasse 4-6, 5020 Salzburg, Austria). The patients/participants provided their written informed consent to participate in this study. | 6,008.8 | 2023-04-19T00:00:00.000 | [
"Biology"
] |
Whole-Exome Sequencing for Identification of Genetic Variants Involved in Vitamin D Metabolic Pathways in Families With Vitamin D Deficiency in Saudi Arabia
Background Numerous research studies have found an association between vitamin D (vitD) status and single-nucleotide polymorphisms (SNPs) in genes involved in vitD metabolism. Notably, the influence of these SNPs on 25-hydroxyvitamin D [25(OH)D] levels might vary between populations. In this study, we aimed to explore genetic variants in genes related to vitD metabolism in families with vitD deficiency in Saudi Arabia using whole-exome sequencing (WES). Methods This family-based WES study was conducted on 21 families with vitD deficiency (n = 39) in Saudi Arabia. WES was performed on DNA samples, the resulting WES data were filtered, and a number of variants were prioritized and validated by Sanger DNA sequencing. Results Several missense variants in vitD-related genes were detected in the families. We determined two variants in the low-density lipoprotein receptor-related protein 2 gene (LRP2), with one variant (rs2075252) observed in six individuals and the other (rs4667591) detected in 13 subjects. Single variants in the 7-dehydrocholesterol reductase (DHCR7) (rs143587828) and melanocortin-1 receptor (MC1R) (rs1805005) genes were observed in two subjects from two different families. Other variants in the group-specific component (GC), cubilin (CUBN), and calcium-sensing receptor (CASR) genes were found in index cases and controls. Polymorphisms in GC (rs9016) and CASR (rs1801726) were found in the majority of family cases (94% and 88%, respectively). Conclusion In vitD-deficient families in Saudi Arabia, we were able to detect a number of missense exonic variants, including variants in GC (rs9016), CUBN (rs1801222), CASR (rs1801726), and LRP2 (rs4667591). However, the presence of these variants did not differ between affected family members and non-affected controls. Additionally, we found a mutation in DHCR7 (rs143587828) and a polymorphism in LRP2 (rs2075252), which may affect vitD levels and influence vitD status. Further studies are now required to confirm the association of these variants with vitD deficiency.
INTRODUCTION
Vitamin D (vitD) plays an important role in maintaining skeletal calcium (Ca) homeostasis by stimulating intestinal absorption of Ca and phosphate (PO4), stimulating bone resorption, and inducing Ca reabsorption by the kidney, thus sustaining the levels of calcium and phosphate necessary for bone formation and supporting the appropriate functioning of parathyroid hormone (PTH) to maintain serum Ca levels (Holick, 2007; Holick et al., 2011).
Clinically, serum 25-hydroxyvitamin D [25(OH)D] has been identified as the most effective predictor of vitD status to date. Levels of 25(OH)D in serum are influenced by the vitD produced dermally and consumed orally, through diet or supplementation (Hollis, 1996; Del Valle et al., 2011). In addition, there are physiological, pathological, and lifestyle factors affecting 25(OH)D levels, such as aging, obesity, liver and kidney diseases, and inadequate exposure to sunlight (Holick, 2004, 2007; Tsiaras and Weinstock, 2011; Hyppönen and Boucher, 2018). Among other significant factors influencing 25(OH)D levels are genetic factors, with the heritability of circulating 25(OH)D levels estimated at between 23 and 80% (Bahrami et al., 2018), attributed primarily to single-nucleotide polymorphisms (SNPs) in genes involved in the vitD metabolic pathway (Ahn et al., 2010; McGrath et al., 2010; Wang et al., 2010; Jolliffe et al., 2016).
Vitamin D deficiency is highly prevalent in Saudi Arabia. Although several studies have already reported an association between vitD status and SNPs in genes involved in vitD metabolism (McGrath et al., 2010; Jolliffe et al., 2016), the influence of these SNPs on 25(OH)D levels might vary between populations. For example, an SNP in DHCR7 (rs12800438) was related to vitD deficiency in African Americans but not in European Americans (Batai et al., 2014), and another SNP in DHCR7 (rs12785878) was linked to vitD deficiency in Chinese cohorts of Kazakh ethnicity but not in Uyghurs (Xu et al., 2015).
The relationship between inherited variants in vitD-related genes and vitD deficiency has not been adequately addressed in Saudi Arabia. Whole-exome sequencing (WES) is a state-of-the-art analysis that sequences large amounts of DNA with high throughput, providing fast and broad data on known or novel mutations in candidate genes in family members with a specific disease or trait. Therefore, we aimed to investigate the presence of genetic variants in genes related to vitD metabolism among families with vitD deficiency in Saudi Arabia using WES.
Study Design and Recruitment
Members of families with a history of vitD deficiency were recruited for this study from a single tertiary center [King Abdulaziz University Hospital (KAUH), Jeddah, Saudi Arabia] and seven primary health care centers (PHCCs) distributed in Jeddah (one PHCC from each of the seven sectors of the Jeddah area). The study was undertaken at the Center of Excellence in Genomic Medicine Research (CEGMR), with parental consent and child assent obtained for participants under 16 years of age.
In total, 23 families (104 individual participants) with a history of vitD deficiency [serum 25(OH)D < 12 ng/ml] were recruited. Of these, 39 samples from 21 families were selected for WES (Figure 1). Exclusion criteria for the WES analysis included a history of chronic renal or liver disease, cancer, malabsorption syndrome, rheumatoid arthritis, intake of medications with possible effects on vitD (such as glucocorticoids and anticonvulsants), hyperthyroidism, hyperparathyroidism, diabetes, or any other endocrine disorder.
Study Procedure and Blood Analysis
All participants answered a questionnaire (completed by the researcher), which requested information including sociodemographic data, medical history, drug history, and lifestyle history. Each participant underwent basic anthropometric and blood pressure measurements. A multi-generation pedigree was carefully constructed for each family by interviewing the family and documenting the family history of vitD deficiency. Fasting blood samples from all family members and from 100 unrelated controls were collected. Total serum 25(OH)D and intact PTH were measured by chemiluminescence immunoassay (CLIA) using a LIAISON auto-analyzer (DiaSorin Inc., Stillwater, MN, United States); free 25(OH)D was directly measured by immunoassay using an ELISA kit (KAPF1991, Future Diagnostics Solutions B.V., Wijchen, Netherlands); and VDBP was measured by quantitative sandwich enzyme immunoassay using a Quantikine ELISA (DVDBP0B, R&D Systems, Minneapolis, MN, United States). Serum albumin, Ca, PO4, magnesium (Mg), lipid profile, blood glucose, and renal and liver function were all measured by the colorimetric method using a VITROS 250 Clinical Chemistry auto-analyzer (Ortho-Clinical Diagnostics Inc., Rochester, NY, United States).
Whole-Exome Sequencing
Genomic DNA was first extracted (DNA extraction kit 53104, Qiagen, Hilden, Germany), and the concentration and purity of the DNA filtrate were measured using a NanoDrop spectrophotometer (ND-1000 UV-VIS). WES with a 150-bp paired-end read length was performed for 39 DNA samples by next-generation sequencing (NGS) using the Illumina platform and the Twist Human Core Exome library kit. Genomic DNA was extracted from all included blood samples, and a library was constructed by random fragmentation of DNA followed by 5′ and 3′ adapter ligation, or by "tagmentation", which couples the fragmentation and ligation reactions in one step, increasing the efficiency of the library preparation procedure. Afterward, adapter-ligated fragments were PCR amplified and gel purified. The library was loaded onto a flow cell so that fragments were captured on a lawn of surface-bound oligos complementary to the library adapters. Next, each fragment was amplified into distinct clonal clusters by bridge amplification. Once cluster generation was complete, the templates were sequenced. Illumina SBS technology, which uses a reversible terminator-based approach, was utilized to identify single bases as they were incorporated into DNA template strands. This technology was used because of its lower raw error rates compared with other technologies, as the natural competition arising from the presence of all four reversible terminator-bound dNTPs during each sequencing cycle reduces incorporation bias. In addition, Illumina SBS produces very precise base-by-base sequencing that practically removes sequence-context-specific errors, even within repetitive sequence regions and homopolymers. Raw images were generated by the Illumina sequencer using the integrated Real Time Analysis software, which controls the system and performs base calling. The base call binaries were converted into FASTQ using the Illumina bcl2fastq package. Reads were produced without trimming away adapters.
Analysis of WES Data
Whole-exome sequencing generated the raw reads in FASTQ format. Insertions, deletions, and copy number variations were identified using SAMtools. Data were aligned using the BWA aligner after the raw FASTQ files were processed. The resulting VCF files contained over 120,000 variants per sample. The variants were filtered using different parameters, such as quality, frequency, genomic position, protein effect, and association with vitD deficiency. SNPs and short indel candidates were determined at nucleotide resolution. Identified SNPs were compared against the 1000 Genomes (International Genome Sample Resource), SnpEff, and gnomAD databases. A bioinformatics tool (Lasergene Genomic Suite v. 12, DNASTAR, Madison, WI, United States) was used to look for variants involved in vitD metabolism. Variant alleles were tagged according to dbSNP142 using ArrayStar v. 12 (Rockville, MD, United States). The obtained FASTQ sequences were aligned against the human reference genome (hg19) using the Burrows-Wheeler Aligner, and the raw FASTQ files were then transformed into BAM format and annotated using the Genome Analysis Toolkit (GATK). In this study, we targeted indels and SNPs situated in the exons and splicing junctions of genes that caused protein-level changes, excluding synonymous variants. Our selected variants were identified in around 45% of total reads.
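The exact commands are not given in the paper; as a hedged sketch, a typical FASTQ to sorted, indexed BAM step with BWA-MEM and SAMtools, wrapped in Python, might look as follows. File names and thread counts are placeholders, not the study's actual settings.

```python
# Hypothetical sketch of the FASTQ -> sorted, indexed BAM step described
# above, using standard BWA-MEM and SAMtools command lines wrapped in
# Python. File names are placeholders, not the study's actual paths.
import subprocess

ref = "hg19.fa"                    # reference genome (must be bwa-indexed)
r1, r2 = "sample_R1.fastq.gz", "sample_R2.fastq.gz"
bam = "sample.sorted.bam"

# bwa mem writes SAM to stdout; samtools sort reads it from stdin ("-").
bwa = subprocess.Popen(["bwa", "mem", "-t", "4", ref, r1, r2],
                       stdout=subprocess.PIPE)
subprocess.run(["samtools", "sort", "-o", bam, "-"],
               stdin=bwa.stdout, check=True)
bwa.stdout.close()
bwa.wait()

# Index the sorted BAM for downstream variant calling (e.g., with GATK).
subprocess.run(["samtools", "index", bam], check=True)
```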
Variant Prioritization
For variant prioritization, the coding and splicing regions of genes involved in vitD metabolic pathways were analyzed and assessed using the available online databases for these variants. Initially, variants positioned in introns, intergenic regions, and untranslated regions were excluded, as were synonymous variants. To assess the potential biological functions of the selected variants, the functional impact and pathogenicity of the selected genomic variants were evaluated using prediction algorithms (MutationTaster, PolyPhen-2, SIFT, PROVEAN, and Mutation Assessor) included in ANNOVAR. Lastly, candidate genes were reviewed in PubMed publications and the Online Mendelian Inheritance in Man (OMIM) database.
After applying various filters, the total number of variants was reduced to 20-30 variants per sample. Finally, the variants involved in vitD metabolism were selected in the following target genes: GC, CUBN, LRP2, DHCR7, and CASR.
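A minimal sketch of the kind of filtering described above (keep exonic/splicing, non-synonymous variants in the five target genes), assuming an ANNOVAR-style tab-separated annotation file; the column names ("Func", "ExonicFunc", "Gene") and file name are assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch of the variant prioritization described above:
# keep exonic/splicing, non-synonymous variants in the five target genes.
# Assumes an ANNOVAR-style tab-separated annotation file; the column
# names ("Func", "ExonicFunc", "Gene") are assumptions, not the study's.
import csv

TARGET_GENES = {"GC", "CUBN", "LRP2", "DHCR7", "CASR"}
KEEP_FUNC = {"exonic", "splicing", "exonic;splicing"}

def prioritize(path):
    kept = []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh, delimiter="\t"):
            if row["Func"] not in KEEP_FUNC:
                continue                      # drop intronic/intergenic/UTR
            if row["ExonicFunc"] == "synonymous SNV":
                continue                      # drop synonymous variants
            if row["Gene"] in TARGET_GENES:
                kept.append(row)
    return kept

variants = prioritize("sample.annovar.txt")   # placeholder file name
print(f"{len(variants)} candidate variants in vitD-pathway genes")
```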
The validated results were compared with the results of control samples (n = 100). Controls were matched with index samples for age, skin tone, sunlight exposure, oral vitD intake, and BMI but notably were vitD sufficient.
Results of WES Data
Various missense variants with moderate impact were determined in the GC, CUBN, LRP2, DHCR7, and CASR genes (Table 1). The polymorphism rs9016 in GC was detected in 13 families (n = 30), rs1801726 in CASR was detected in 12 families (n = 28), while rs4667591 and rs2075252 in the LRP2 gene were observed in six families (n = 13) and three families (n = 6), respectively. In addition, rs1801222 and rs1801224 in CUBN and
CUBN
The CUBN variant c.758T > C in family 5 (n = 2) was validated. Both family 5 samples and the controls were homozygous (CC genotype) as shown in Figures 2A,B.
LRP2
The general and biochemical characteristics of families (F1, F3, and F9) that exhibited the c.12280A > G (rs2075252) variant in LRP2 are shown in Tables 2, 3. Validation of this SNP (rs2075252) showed that F1, F3, and F9 had this variant while the control did not. In family 1 and family 9, subject II-1 (the mother) had heterozygous AG genotype while subject III-1 (the daughter) had a homozygous GG genotype and the control samples had a homozygous AA genotype (Figures 3A-D). On the other hand, both subjects II-1 and III-1 in family 3 had the heterozygous AG genotype (Figures 3E,F).
The validation of the other polymorphism (c.12628A > C) in LRP2 that was observed with WES in F2, F5, F7, F10, F12, and F13 (n = 13) showed that this SNP existed in all the mentioned families and control samples (n = 100). All samples were homozygous CC except a single sample in F5 and four of the controls that were heterozygous AC ( Figure 3G).
DHCR7
Whole-exome sequencing results showed variant c.376G > A in DHCR7 in Family 1 (F1). General and biochemical characteristics of F1 subjects were presented earlier in Tables 2 and 3, and the pedigree of this family is shown in Figure 3A. Validation of the observed variant c.376G > A in DHCR7 in F1 revealed that subject II-1 (the mother) has a GA genotype and subject III-1 (the daughter) has an AA genotype, in comparison with the controls, which had a GG genotype (Figure 3H). When this DHCR7 c.376G > A variant (rs143587828) was evaluated, it was found to be a mutation rather than a polymorphism.
GC
When the WES results were validated by Sanger DNA sequencing for the SNP c.1391A > G in GC in family samples (F1-F10 and F12-F14) (n = 30), the presence of the c.1334A > G SNP as a homozygous genotype (GG) was confirmed in these family samples as well as in the healthy control samples (Figure 4A).
CASR
Validation of the c.3061G > C variant in CASR in subjects from F1 to F6, F8 to F10, and F12 to F14 (n = 28) showed that this variant was present as the CC genotype in the controls and in these families, except in F2, where the genotype was heterozygous (GC) (Figure 4B).
Identified Polymorphisms and Mutations
In families with vitD deficiency, all observed variants were polymorphisms, with the exception of the variant in DHCR7 (rs143587828), which was a mutation. We found two single variants in LRP2, with one variant (rs2075252) observed in six individuals but not in controls, while the other LRP2 variant (rs4667591) was detected in 13 subjects as well as in controls. A single variant in DHCR7 (rs143587828) and one in MC1R (rs1805005) were observed in two subjects from two different families but not in controls. Other variants in GC, CUBN, and CASR were found in index cases and controls. Polymorphisms in GC (rs9016) and CASR (rs1801726) were found in the majority of family cases (94% and 88%, respectively).
DISCUSSION
Several studies have linked vitD deficiency with numerous variants in genes involved in vitD metabolism (McGrath et al., 2010; Jolliffe et al., 2016). Our WES study in families with vitD deficiency revealed various variants in vitD-related genes; however, the majority of these variants, including those in GC (rs9016), CUBN (rs1801222), CASR (rs1801726), and LRP2 (rs4667591), coexisted in both the vitD-deficient families and the non-affected control group (with the GC and CASR SNPs having the highest frequency), suggesting no association between these SNPs and 25(OH)D levels. In agreement with our findings, a case-control study in Egyptians (n = 328) also found that CUBN (rs1801222) was not associated with total 25(OH)D levels, and Harding et al. (2006) and Elsabbagh et al. (2020) found no association between 25(OH)D and CASR (rs1801726). With regard to GC (rs9016) and LRP2 (rs4667591), no reports exist in the literature about their relationship with vitD; however, these two SNPs were reported in a family-based WES study specifically looking at SNPs in vitD-related genes in families with familial multiple sclerosis, where no association with multiple sclerosis was found (Pytel et al., 2019). In the present study, a mutation in DHCR7 (rs143587828) was identified in two affected subjects from one family (the mother was heterozygous and the daughter homozygous for the minor allele) but not in any of the control subjects. As DHCR7 encodes the enzyme responsible for the conversion of 7-DHC (the precursor of vitD) to cholesterol (Berry and Hyppönen, 2011), this mutation in DHCR7 (rs143587828) might result in increased DHCR7 activity, diverting 7-DHC toward cholesterol synthesis and thereby reducing the substrate available for vitD production, leading to vitD deficiency (Kohlmeier, 2012). Two large genome-wide association studies in subjects of European ancestry found that the minor alleles of nine alternative SNPs in DHCR7/NADSYN1 were associated with vitD deficiency (Ahn et al., 2010; Wang et al., 2010). However, this may be the first report of an association of rs143587828 with 25(OH)D. This observed mutation in DHCR7 (rs143587828) now needs to be investigated in a large-scale population study to explore further the association between this mutation and vitD status.
Cubilin and megalin, receptor proteins present in the proximal renal tubules and encoded by the CUBN and LRP2 genes, respectively, bind the VDBP-25(OH)D complex and contribute to its endocytosis so that 25(OH)D can be hydroxylated to 1,25(OH)₂D, the active form of vitD (McGrath et al., 2010; Kaseda et al., 2011; Kohlmeier, 2015). Severe hypovitaminosis D was reported in LRP2-knockout mice, which suggests an important role for LRP2 (Nykjaer et al., 1999). In our study, we found an SNP (rs2075252) in LRP2 in six affected families (n = 13) but not in the controls. This strongly suggests that this SNP might be related to vitD deficiency and emphasizes the need for additional studies on the association between vitD status and SNPs in LRP2. To our knowledge, there is only one report in the literature, and it contrasts with our suggestion, as the polymorphism rs4667591 in LRP2 was not found to be associated with total 25(OH)D (Elsabbagh et al., 2020).
Our study has revealed relevant and novel exonic missense variants in both DHCR7 and LRP2 in vitD-deficient families (not evident in control individuals); the association between these variants and vitD deficiency now needs to be addressed. Our results provide information on the variants related to vitD metabolism in families with vitD deficiency, thus helping researchers understand genetic factors underlying vitD deficiency in the Saudi population.
DATA AVAILABILITY STATEMENT
The datasets for this manuscript are not publicly available because family consent to share data publicly was not obtained. Requests to access the datasets should be directed to the corresponding author (MN).
ETHICS STATEMENT
Ethical approval for this study was obtained from the Research Ethics Committee of the Unit of Biomedical Ethics, Center of Excellence in Genomic Medicine Research (CEGMR), King Abdulaziz University (KAU) (05-CEGMR-Bioeth-2018). Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
AUTHOR CONTRIBUTIONS
SA contributed to the study design and execution, data analysis, and manuscript drafting. MN contributed to the study design, data analysis, writing, editing, and review. EA and AC contributed to writing the review and supervision. MR contributed to the supervision and review of the manuscript. SL-N contributed to supervision. All the authors read and approved the final manuscript.
FUNDING
This work was supported by the Joint supervision program, KAU, Jeddah, Saudi Arabia. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
ACKNOWLEDGMENTS
We are thankful for all families who participated in this study. | 4,341.4 | 2021-06-08T00:00:00.000 | [
"Medicine",
"Biology"
] |
A Short Review of Architecture and Computational Analysis in the Design of Graphene-based Bioelectronic Devices
Graphene possesses a high surface-to-volume ratio, which enables biomolecules to attach to it for bioelectronic applications. In this article, the classification and applications of bioelectronic devices are first briefly reviewed. Then, recent work on fabricated graphene-based bioelectronic devices, as well as the analysis of their architecture and design using computational approaches to their charge transport properties, is presented and discussed. A comparison with non-graphitic bioelectronic devices is also given. On the macroscale level, the design of devices is elaborated on the basis of a finite element analysis (FEA) approach, and the impact of design on device performance is discussed. On the nanoscale level, transport phenomena and their mechanisms for different design categories are elaborated on the basis of density functional theory (DFT) and other quantum chemistry calculations. The calculated and measured charge transport properties of graphene-based bioelectronic devices are also compared with those of other available bioelectronic devices.
Introduction
Bioelectronic devices are defined as electronic devices that function by interacting with biomolecules such as human or animal tissues for the purpose of detection, actuation, or even power generation. Generally, bioelectronic devices may be classified into several categories on the basis of their applications as summarized in Fig. 1. As illustrated in Fig. 1, bioelectronic devices can be used for power generation in the form of enzymatic and microbial biofuel cells, (1)(2)(3) and for testing and diagnostic purposes in the form of e-skin devices, nucleic acid amplification chips, DNA extraction chips, and glucose monitoring devices. (4)(5)(6) For applications in artificial perception, several types of bioelectronic devices such as the bioelectronic nose, brain, and tongue have been demonstrated. (7,8) In the past decade, on-chip bioelectronic devices with the unique function of mimicking human organs such as the kidney, lung, heart, and liver have been developed for use in medical treatment. (9) Bioelectronic devices have also found their way into the development of exoskeletons for which a passive exoskeleton and active limbs have been designed and fabricated. (10) Furthermore, bioelectronic devices are also reported for use in drug delivery. (2,11,12) Electrochemical sensors or biosensors, which are also one type of bioelectronic device, have been the subject of both basic and applied research for nearly fifty years. (13)(14)(15) Clark et al. first introduced the principle of the enzyme electrode with an immobilized glucose oxidase at the New York Academy of Sciences Symposium in 1962. (16) Currently, there are many examples of device commercialization based on such biosensing principles including those for pathogens and toxins. (17)(18)(19) For example, an enzyme in the biorecognition layer acting as an electroactive substance was utilized in a multichannel electrochemical biosensor. (6,15,20) Here, detection occurs owing to the physicochemical transduction that provides a measurable signal. Generally, a native enzyme is used as a biorecognition component, i.e., as an analyte (2,12,22) or an inhibitor. (12,22) In addition, enzymes can also be used as labels bonded to antibodies, antigens, or oligonucleotides with a specific sequence, thus providing affinity-based sensors. (15,21,23,24) A similar principle has been extended to power generation, where an enzymatic biofuel cell has been developed in response to industrial demand. Such a bioelectronic device can generate energy in a sustainable manner by utilizing biocatalysts, either in the form of isolated enzymes (1,2) or in the presence of enzymes within microbial cells (3) to convert chemical energy into electricity. (1,3,25) After a breakthrough study on the preparation of graphene, which is a single-atom-thick sheet of carbon, by Novoselov and co-workers, (26) there has been tremendous effort to utilize this nanomaterial for the electronic coupling of redox enzymes. (27)(28)(29)(30) Its ultrahigh in-plane electron conductivity, (31,32) high thermal conductivity, (33) and mechanical properties (31)(32)(33) make it a promising material not only for the channel of electronic devices (34)(35)(36) but also for the construction of bioelectrodes and biomembranes. 
(37)(38)(39) [Fig. 1 legend: classification of bioelectronic devices by application. Biofuel cells: enzymatic fuel cell, (3) microbial fuel cell. (1) Artificial perception: bioelectronic nose, tongue, and eyes. (7,8) Drug delivery: fluidtrobe, U-tube concept. (80) Testing and diagnostics: skin patches, (4,5) DNA extraction chip, (4) glucose monitoring, (5,6) electrocardiography (ECG). (6) Exoskeletons: active exoskeleton, passive upper and lower limbs. (10) Organs-on-chip: kidney, liver, heart, and lung mimics. (9)] Figure 2 shows a graphene-based field-effect transistor (FET) biosensor in which anti-immunoglobulin G (anti-IgG) is anchored on gold nanoparticles (AuNPs). (40)(41)(42) The AuNPs are spread over a thermally reduced graphene oxide (TrGO) membrane serving as the channel material of the FET; these nanoparticles have shown specific recognition ability for IgG. (41)(42)(43) Such graphene-FET biosensors have been shown to exhibit exceptional sensitivity, with detection limits down to 0.01 nM. Aptamer-modified graphene-FET biosensors also show label-free, real-time responses to immunoglobulin E (IgE) protein at very low concentrations. (42,44) This sensor enables the electrical detection of aptamer-protein binding at low concentrations by optimizing the Debye length (DL), i.e., the distance required for screening the surplus charge. A bioelectronic nose based on a multiplexed graphene-FET system, prepared by the micropatterning of graphene using photolithography, was also developed as a human sensory mimicking system that can discern a specific odorant in a mixture. (7,23,45) Figure 3 shows the design and architecture of a multiplexed bionose fabricated using microstructured graphene-based FETs and the corresponding human olfactory system. The bionose was designed to detect odorants such as amyl butyrate and helional. In this structure, the gate of the FET is controlled by an aqueous ionic solution. Large-area-grown graphene with high conductance and large surface area, combined with human olfactory receptors, gives high selectivity towards odorant mixtures. (45) The olfactory sensory neurons are located at the entrance of the nasal cavity, where each of these neurons is separately coupled to a cluster of nerve endings at the olfactory bulb. As shown in Fig. 3, the olfactory neurons that are linked to a given olfactory receptor are attached to the same nerve ending. The interaction between odorants and their specific human olfactory receptors generates olfactory signals that are transferred through a cluster of neurons to the olfactory bulbs. These signals are then processed by the human brain to recognize odorants by their smell. Correspondingly, the multiplexed bionose is designed to mimic each step of the human olfactory system, with the FET used to generate olfactory signals as well as an imitated olfactory code that combines numerous signals from a cluster of receptors. Using a template that resembles the transmembrane structure of the G-protein-coupled receptor and a structural model of the human olfactory receptor leads to an excellent structure for an artificial bionose that can mimic the human nose. (7,23,45) The use of graphene in these FET biosensors as an electrochemical transducer yields an exceptionally sensitive response owing to the extreme sensitivity of graphene (40) to electronic perturbations and its surrounding environment. (6) Graphene, with its one-atom thickness and high surface area, allows selective detection at single-carbon resolution, which leads to the development of highly sensitive FET biosensors. (7)
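Since sensing depends on keeping the target's charge within the Debye length, a minimal sketch of how the screening length scales with ionic strength in a symmetric 1:1 electrolyte at room temperature follows (the standard Debye-Hückel expression; the buffer concentrations are illustrative, not values from the cited sensor work):

```python
# Sketch: Debye screening length of a symmetric 1:1 electrolyte,
# lambda_D = sqrt(eps_r * eps_0 * k_B * T / (2 * N_A * e^2 * I)),
# with ionic strength I in mol/m^3. Buffer concentrations are illustrative.
import math

EPS0 = 8.854e-12      # vacuum permittivity (F/m)
KB = 1.381e-23        # Boltzmann constant (J/K)
E = 1.602e-19         # elementary charge (C)
NA = 6.022e23         # Avogadro constant (1/mol)

def debye_length_nm(ionic_strength_mM, eps_r=78.5, T=298.0):
    I = ionic_strength_mM * 1.0   # 1 mmol/L equals 1 mol/m^3
    lam = math.sqrt(eps_r * EPS0 * KB * T / (2.0 * NA * E**2 * I))
    return lam * 1e9

for c in (150.0, 15.0, 1.5):      # ~PBS, 0.1x PBS, 0.01x PBS (illustrative)
    print(f"I = {c:6.1f} mM -> lambda_D = {debye_length_nm(c):.2f} nm")
```

For physiological ionic strength (~150 mM) this gives a screening length below 1 nm, which is why diluted buffers or short receptors such as aptamers are attractive for FET sensing.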
Another growing area in biodevices is microfluidic-based organs-on-chip (OOC) systems, in which advanced 3D tissue-engineered scaffolds are combined with cultured human cells to replicate a human organ of interest. (9,45,46) Microfluidic channel networks are designed and fabricated to mimic the organ structure, e.g., the liver sinusoid or the nephron in a kidney. The channel surfaces are usually modified with layers mimicking the extracellular matrix, allowing human cells to adhere, spread, and proliferate within the channels, which requires tissue engineering technologies. Once OOCs are constructed, a fluid flow is applied to generate mechanical forces that recapitulate the in vivo microenvironment experienced by the cells. (46)(47)(48) Specifically, organ-specific fluid flow enables the formation of gradients of molecular components and the maintenance of cell-cell interactions, (49,50) which are vital to emulating human physiological responses. A pressure sensor is an important component of a microfluidic system for controlling and monitoring fluid flow precisely, as precise fluid pressure is needed to stimulate the cultured cells. The use of graphene in a pressure sensor connected to a microfluidic system allows sensitive and accurate measurement of the fluid pressure owing to graphene's exceptional thermal and mechanical properties. A graphene pressure sensor monitors changes in electrical resistance, which can then indicate cell morphologies, contractile functions, and gene expression. (51) Figure 4(a) shows a schematic representation of the structure of a kidney-on-chip device. (46,47) A porous polymeric biocompatible membrane, such as polydimethylsiloxane (PDMS), is sandwiched between epithelium and endothelium tissues. The former tissue lines the cavities, the surfaces of blood vessels, and the organs throughout the body. The structure of this tissue, as shown in Fig. 4(a), is a continuous film with almost no intercellular space, where the unit cell is in most cases cuboidal with the nucleus at the center. (46,47) The structure of endothelium tissue is similar to that of epithelium tissue, but the endothelial cells are elongated and aligned in the direction of fluid flow. Such unique structures impose a restriction on the transport of any migrating species through or across the hybrid structure. (46,47) A free-body diagram equivalent to the OOC for computational purposes is shown in Fig. 4(b). The equivalent model of the microfluidic kidney epithelium is based on a multilayered microstructure that includes arranged layers of PDMS microchannels and a PDMS chamber. The chamber is separated from the microchannels by a porous membrane made of polyester. This architecture offers a transport medium that is physically equivalent to polarized kidney epithelial cells. Hence, it accurately mimics the mechanism of fluidic flows and maintains the selective contact of the apical and basal sides of the cells with fluid shear, hormones, and chemical gradients. In addition, this architecture enables the collection of samples from both sides of the polarized tissue. (46,47) Similar to a regular fuel cell, a biofuel cell includes both an anode and a cathode, where at least one of them must be a bioelectrode (an electrode that involves biocatalysts).
For instance, in the case of the anode as the bioelectrode, the supplied fuel is oxidized at the bioelectrode by the biocatalyst, and a group of electrons is transferred (donated) from the biocatalyst to the bioanode. The biocatalyst also enables the reduction of groups in contact with oxygen at the surface of the biocathode, as depicted in Fig. 5(a). (52) Thus, to maintain high power output from the biofuel cell, it is important to preserve high catalytic activity at the bioelectrode surface. This may be achieved by selecting a proper biocatalyst and by embedding it in the bioelectrode, taking into consideration the optimum orientation once immobilized. The optimum orientation is achieved by aligning the catalyst along the conductive interface of an electrode in a way that enables direct electron transfer (DET). To achieve DET, nanomaterials such as graphene are needed to enhance the electrocatalytic reaction. Graphene as a conductive agent allows a further increase in the surface area of the electrode without changing its geometric dimensions, and with its unique properties, graphene can be used in a biofuel cell to provide good anchoring sites for catalyst deposition and thus improve the performance of the biofuel cell. (52) An effective graphene-biocatalyst combination provides networked electrode interfaces and thus increases the biocatalyst loading. In this manner, the biocatalyst effectively exchanges electrons with the electrode and thus increases the biofuel cell output. Recently, a promising novel concept has been developed to maintain high catalytic bioactivity, and a versatile family of new conductive nanomaterials was synthesized. In this concept, 3D graphitic structures were decorated with catalytic metals such as Pt, Ni, or Pd. The composite nature of the electrodes results in different transport modes for charged particles, similar to the transport in the porous electrodes shown in Fig. 5(b). (53) Upon the introduction of a potential difference between the electrodes, an electric field (E) is generated that forces the ions into the pores of the electrodes. Ions of opposite charge are attracted to the electrode, while ions of the same charge are forced away from the surface. This rearranges the charges at the electrodes, resulting in the formation of a shallow interface layer known as the electric double layer (EDL), with length L and radius λ_d. A remarkable property of the EDL is that it behaves as a capacitor under steady-state conditions. Thus, the flow of charges across the pores of the electrodes proceeds until the diffusion and electromigration of the ions reach equilibrium. (53) These modes of transport must be carefully addressed when modelling the transport dynamics through such electrodes.
Macroscale Analysis of Bioelectronic Devices
Most bioelectronic devices are electrochemical; however, optical, calorimetric, piezoelectric, and surface plasmon resonance-based bioelectronic devices are also common. The schematic diagram presented in Fig. 6 shows the basic architecture of most bioelectronic devices. (54) A bioelectronic device has a substrate that acts as a receptor for the specimens containing the species of interest, such as DNA, dopamine, or glucose. The main role of this substrate is to attract, absorb, and transport the species of interest to the next stage of the device (the membrane), in addition to supporting the mechanical structure of the device in most scenarios. The next stage of the bio-electrochemical process is the filtration and separation of the species of interest from their carrier and their transport to the immobilized detector. Such a step is usually carried out by functionalizing the membrane. The mechanical structure and the chemical composition of the membrane are critical design parameters for any bioelectronic device; in fact, these two parameters determine the device type. For instance, a nanoporous bare graphene layer as a membrane can enable the sensing of a single molecule of DNA in an ionic solution, whereas a poly(vinyl alcohol) hydrogel membrane might result in an enzymatic biofuel cell. One of the most important modules that must be taken into consideration during the design of a bioelectronic device is the immobilized detector. The detector is the module responsible for detecting the separated species of interest (after it has been transported through the membrane) and enhancing its flow towards the transducer. A well-tailored immobilized detector results in a low-noise signal at the transducer, which is a figure of merit for any bioelectronic device. The electronic signal is collected from the transducer, amplified, and possibly further processed using microelectronics to capture the response.
The rate-determining step in several biological routines is electrodiffusion. For instance, the transmission of electrochemical signals within neurons through synapses is an electrodiffusion-dependent process. Another example is the electrodiffusion-controlled reaction that enables ligand-enzyme complexation. (55) In these biological processes, the rates and orders of the reactions depend mainly on the progressive electrodiffusion of charged particles or complexes through the transport medium. Particle-based computational models, such as Langevin and Brownian dynamics (LD and BD) and the Monte Carlo (MC) method, are widely used to predict reaction rates and constants for biological processes. (56,57) Particle-based computational methods depend mainly on modelling the asymptotic trajectory of a single particle in a specific energy frame in a discrete stochastic manner. Accordingly, simulations based on such models are difficult to converge for multiparticle systems when the number of particles is large. To overcome this drawback of particle-based models, another class of computational model, namely, the continuum model, is used to compute reaction rates and constants. This model takes into consideration the distribution of charged particles through a system of differential equations that define the average density distribution. In this way, continuum models are more capable than particle-based models of dealing with large multiparticle systems. (53,54,58,59) In addition, such models may be coupled with established dynamic models such as the Nernst-Planck, Nernst-Einstein, and Navier-Stokes models to integrate various modes of physical-chemical phenomena. These capabilities have made continuum electrodiffusion models the favoured choice for modelling the interaction of ion channels in biological processes, (53,58,59) for describing ionic transport in ceramic and polymeric membranes, and for modelling the transport of charged particles in semiconductors. (53,58,59)

Figure 7 depicts a schematic of an ion channel bioprotonic device (60) that can be considered the general case of bioelectronic device architecture. A supported lipid bilayer (SLB) (orange spheres with tails) is sandwiched in an electrolytic layer (a polymeric membrane, a graphene membrane, or a liquid electrolyte). The continuum of the cell includes different types of channels and pores. Various ionic species travel across the cell and the membrane from both sides. Two electrodes provide ohmic contacts through which the current density flows to the external circuitry. This representation could serve as a schematic for an OOC device or a bioprotonic device, and, if the SLB is replaced with a bulk material, for a biosensor or even a biofuel cell. In the next section, we discuss the transport of charged species across each part of this architecture.
Mass transfer through electrochemical bioelectronic devices can be considered to be governed by the Nernst-Planck equation, which characterizes the fluxes of mobile solutes. The molar flux of the ith species, $\phi_i$, is described as (55)

$$\phi_i = -D_i \nabla C_i - \frac{z_i F}{RT} D_i C_i \nabla \varphi, \qquad \mathbf{n} \cdot \phi_i = \nu_i r_k \ \text{at reactive boundaries}, \tag{1}$$

where $D_i$, $a_i$, $C_i$, $z_i$, $\nu_i$, and $r_k$ indicate the diffusivity (m²/s), chemical activity, concentration (mM), valence number, stoichiometric constants of the surface reaction, and reaction rate for the ith mobile species with respect to the boundaries, respectively. The term R is the universal gas constant (8.314 J/mol K), T is the absolute temperature (K), F is the Faraday constant (96487 C/mol), and φ is the electrochemical potential of the transport medium.

Fig. 7. Schematic representation of common bioelectronic device architecture. (60)

The Nernst-Planck equation [Eq. (1)] is formulated for a highly reactive ionic species, where the chemical potential gradient, together with the electrostatic potential ψ, is the driving force for transport. In this case, Eq. (1) can be written as

$$\phi_i = -D_i \nabla C_i - \frac{z_i F}{RT} D_i C_i \nabla \psi + C_i \mu_i, \tag{2}$$

where $\mu_i$ is the velocity field vector, which is itself driven by the electrostatic potential. However, for bioelectronic applications, the activity and electrochemical potential terms may change. For a nonionic solute, where the charge $z_i = 0$, the Nernst-Planck equation (1) reduces to Fick's first law of diffusion, $\phi_i = -D_i \nabla C_i$. The motion of the ionic species ensures the current transport through the electrolyte(s), in which each ionic species carries a current density, $J_i$, proportional to its charge, as given by

$$J_i = z_i F \phi_i. \tag{4}$$
Equation (1) can thus be decomposed as

$$\phi_i = \phi_{diff,i} + \phi_{mig,i} + \phi_{conv,i},$$

where $\phi_{diff,i} = -D_i \nabla C_i$ is the diffusive transport, $\phi_{mig,i}$ is the migration driven by the electrostatic potential, and $\phi_{conv,i}$ is the convective transport. The diffusive transport may be neglected in the case of outflow. In such situations, either convective or migration transport modes, or both, control the transport process; thus, the Nernst-Planck flux condition may be written as

$$\mathbf{n} \cdot (-D_i \nabla C_i) = 0,$$

where n is the unit vector perpendicular to the outflow flux boundaries. In this sense, the transport of species through the membrane maintains a planar flux even though the concentration changes in three-dimensional space. This practice saves computation time while maintaining accuracy.
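As a concrete illustration of Eq. (1) and the decomposition above, the following minimal Python sketch evaluates the diffusive, migration, and convective contributions to the flux on a 1D grid; the function name and grid handling are our own illustrative choices, not code from the cited references:

```python
import numpy as np

F = 96487.0   # Faraday constant, C/mol (value used in the text)
R = 8.314     # universal gas constant, J/(mol K)

def nernst_planck_flux(C, phi, D, z, dx, u=0.0, T=298.15):
    """1D Nernst-Planck flux: diffusion + migration + convection.

    C   : concentration profile, mol/m^3 (1D array)
    phi : electric potential profile, V (1D array)
    D   : diffusivity, m^2/s; z: valence; dx: grid spacing, m
    u   : fluid velocity, m/s (convective contribution)
    """
    dCdx = np.gradient(C, dx)
    dphidx = np.gradient(phi, dx)
    flux_diff = -D * dCdx                                # phi_diff
    flux_mig = -(z * F / (R * T)) * D * C * dphidx       # phi_mig
    flux_conv = u * C                                    # phi_conv
    return flux_diff + flux_mig + flux_conv
```

Setting `u=0` recovers the diffusion-migration case of Eq. (2), and `z=0` reduces the result to Fick's first law.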
Bulk Membrane Transport
Bulk membrane transport is a concept that describes the movement of a group of charges across a membrane (perpendicular to the membrane's axis of symmetry) in bulk. In such transport, charged species do not cross the membrane in a single-ion queue; instead, they are either clustered or aggregated in a cloud. Another reason for this nomenclature is that the moving species need to cross a bulk film such as a composite graphene membrane, a poreless membrane, or a plasma membrane. Figure 8 shows a schematic representation of the transport of different ionic species through a graphene oxide (GO) bulk membrane. In this type of transport medium, ions are allowed to permeate through the GO membrane. The permeability of ions is size dependent; for instance, Mg²⁺ cations (radius 0.72 Å) have higher permeability than Fe³⁺ (radius 0.6 Å). Furthermore, it was reported that, in addition to the size effect, the interactions between the transporting ions and the GO layers mainly control the selectivity of the GO membranes. In fact, the π-bonding-based interactions between the GO atoms and the transported alkaline cations are weaker than the coordinative interactions between transition metal ions and the sp³ matrix of the GO layers. However, the coordination of the soft metal cation Cd²⁺ seems to be weaker than those of Cu²⁺ and Fe³⁺, possibly owing to the larger distance between Cd²⁺ and the oxygen-containing functional groups in the matrix. In addition, it appears that the concentration of a cation can also be affected by its source. For instance, the concentrations of Cu²⁺ and Cd²⁺ originating from CuCl₂ and CdCl₂ were both higher than those from CuSO₄ and CdSO₄. This finding indicates that the through-membrane transport of a cation can be controlled by the electrostatic attraction of its counteranion. (61,62)

In a bioelectronic device, abrupt differential changes in the activity of species may take place at the electrode and separator surfaces, as shown in Fig. 8. In the bulk medium, the activity gradients are not remarkable, and the current is driven mostly by migration rather than convection. Thus, using the migration term from Eq. (4), the current density of the transported ionic species through the bulk medium can be formulated as

$$J_i = -\lambda_i C_i \nabla \varphi.$$

Here, $\lambda_i$ denotes the molar ionic conductivity (m² S/mol) and is related to the transport parameters by

$$\lambda_i = \frac{z_i^2 F^2 D_i}{RT}.$$

The value of λ for a specific species depends mainly on the chemical composition of the transport medium. This is because the interactions between functional groups on the transport medium can alter the diffusivity, and hence the mobility, of the transported species. Accordingly, the values of $\lambda_i$ are often reported for pure electrolytes and extrapolated to infinite dilution.
Moreover, some references (53,63) use the ionic mobility of the species, $\mu_i$ (cm²/V s), instead of $\lambda_i$; the two are related by

$$\lambda_i = |z_i| F \mu_i.$$

In addition, the following equation is sometimes used to replace the conventional Nernst-Planck equation for the transport of species through saturated porous structures, in which the pores are mainly occupied by fluids, fluid cavities, and bubbles:

$$\frac{\partial (\theta C_i)}{\partial t} + \rho_b \frac{\partial C_p}{\partial t} + \frac{\partial (a_v C_G)}{\partial t} = \nabla \cdot (\theta D_i \nabla C_i).$$

In this equation, θ stands for the liquid volume fraction, $C_i$ is the concentration of the ith species per unit of fluid volume, $\rho_b$ is the bulk density calculated as $(1 - \varepsilon)\rho_p$, $C_p$ is the concentration of the ith species per unit mass of the solid, $a_v$ is the resulting gas volume fraction, calculated as $\varepsilon - \theta$, and $C_G$ is the concentration of the ith species per unit of gas volume.
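As a quick check of these relations, the following helpers (our own naming, assuming aqueous conditions at 25 °C) convert a diffusivity into a molar conductivity and an ionic mobility:

```python
F = 96487.0   # Faraday constant, C/mol (value used in the text)
R = 8.314     # universal gas constant, J/(mol K)

def molar_conductivity(D, z, T=298.15):
    """lambda_i = z_i^2 F^2 D_i / (R T), in S m^2/mol."""
    return z ** 2 * F ** 2 * D / (R * T)

def ionic_mobility(lam, z):
    """mu_i = lambda_i / (|z_i| F), in m^2/(V s)."""
    return lam / (abs(z) * F)

# Example: K+ with D ~ 1.96e-9 m^2/s gives lambda ~ 7.4e-3 S m^2/mol,
# close to the tabulated infinite-dilution value.
print(molar_conductivity(1.96e-9, 1))
```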
Multilayer and Structured Pore Clusters as Transport Medium
In many cases, the architecture of bioelectronic devices involves a multilayer structure and/or structured pore clusters, such as multilayer graphene cross-linked with graphene oxide used for water desalination [Fig. 9(a)]. In this illustration, the membrane is based on a graphene oxide framework (GOF) set perpendicular to the graphene layers.

Fig. 9. (Color online) (a) Multilayer graphene cross-linked with graphene oxide used for water desalination, (64) (b) structured pore clusters of a graphene-like organic metal membrane for drug delivery, (11) (c) binary silica layer for water transport, (66) and (d) lipid bilayer. (62)

In the case of the bulk, GOF membranes have a finite length along the graphene layers. The entire structure appears to be periodic along the x- and y-axes. The transport of particles through this type of GOF membrane depends on the concentration of cross-linkers, where the number of cross-linker strands is in the range of 16 to 64. This dependence was attributed to the fact that the distances d_x and d_y between successive cross-linkers in the xy plane (considered here as the pore size) and the distance d_z between successive cross-linkers in the xz plane (pore spacing) determine the free volume for transport. The higher the number of cross-linkers, the smaller the pore size. Taking into consideration that d_x is fixed (based on the type of cross-linker), the free volume for transport is reduced in the case of a high concentration of cross-linker. (64) An example of a structured pore transport medium is the structured pore clusters of graphene-like organic metal membranes for drug delivery [Fig. 9(b)]. As shown in Fig. 9(b), a metal-organic framework (MOF-74) composed of 2,5-dihydroxyterephthalate and Mg(II) ions is used to deliver two anticancer drug molecules, methotrexate (MTX) and 5-fluorouracil (5-FU). (65) Such a MOF structure can have its transport properties controlled by adapting the organic linker or metal ion while maintaining its main architecture (similar to the GOF-based membranes). In fact, in most MOF architectures, the length of the organic linker is considered a design parameter; however, this parameter does not allow much control because the length of the linker depends on the chain type. The main ability to modify is attributed to changing the metal ion, as changing the metal strongly affects the coordination geometry and consequently the transport properties. (11) Other examples of multilayered and pore-structured transport media are binary silica layers for water transport [Fig. 9(c)] (66) and transport through binary biolayers of functional groups such as lipid bilayers (62) [Fig. 9(d)], which may be used in the design of bioelectronic devices.
In this case, the Nernst-Planck equation cannot solely describe the transport of species through these structures. It only enables the implementation of a single scalar electric field computation, as it interrelates only the charge, ionic mobility, and concentration of the species along the boundaries of the transport medium. The Poisson-Nernst-Planck (PNP) theory, as a continuum-based coupled model that describes the dynamics of ions and the evolution of the electric field inside an electrolyte, may be used as an alternative. The PNP equations are derived from the conventional Nernst-Planck transport equations coupled with Poisson's field equation. By this method, it is possible to describe an electrolyte solution via two scalar fields by considering a binary transport medium that can dissociate water into cations and anions that fill the entire transport medium. In this scenario, the PNP theory reduces all electrostatic interactions between ions to Coulombic interactions between ions and a mean-field electric potential, ψ. This yields a very descriptive assumption that mimics the transport dynamics; hence, the conservation of species concentration may be expressed as (53,55)

$$\frac{\partial C_i}{\partial t} = -\nabla \cdot J_i. \tag{11}$$

Here, $J_i$ is the total flux of species. In such a case, i may be either negative or positive, and in the absence of fluid motion, $J_i$ is given by

$$J_i = -D_i \nabla C_i + b_i z_i e C_i \mathbf{E}. \tag{12}$$

The right-hand side of Eq. (12) represents the previously mentioned diffusive flux coupled with the electromigrative flux, which takes into account the motion of ions as follows:

$$J_{mig,i} = b_i z_i e C_i \mathbf{E} = -b_i z_i e C_i \nabla \psi. \tag{13}$$

It can be seen that the electromigrative flux term considers the transport of ions to be due to the electrostatic interactions between the electric field, E = −∇ψ, and the electric charge on the ion, $z_i e$, where $z_i$ is the ion's valence and e is its fundamental charge. In addition, the mobility coefficient $b_i$ correlates the drift velocity of ions to the electric force exerted on them, which is attributed to the nature of the grad operator. This computational capability is missing in the fundamental Nernst-Planck model. Finally, the mean-field potential ψ is related to the ion concentration and the permittivity coefficient of the transport medium, ε, via Poisson's equation, as follows:

$$\nabla \cdot (\varepsilon \nabla \psi) = -\sum_i z_i e C_i. \tag{15}$$

As can be seen in Eq. (15), applying the divergence operator to the product of the medium permittivity and the gradient of the mean-field potential is practically a descriptive technique. It physically results in assigning normal and tangential field vectors to the cavity's boundaries as well as distributing the charges of the functional groups on the medium all over the cavities' walls. When this expression is combined with the right-hand side of Eq. (15), the dynamics of transport of ionic species based on their interactions with the transport medium can be accurately computed and described.
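To make the coupling of Eqs. (11)-(15) concrete, the sketch below alternates a finite-difference Poisson solve for ψ with an explicit Euler update of each concentration field on a 1D grid. It is an illustrative toy under our own assumptions (Jacobi iteration, ψ = 0 at both ends, explicit time stepping), not the solver used in the cited studies:

```python
import numpy as np

e = 1.602176634e-19      # elementary charge, C
kB = 1.380649e-23        # Boltzmann constant, J/K
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

def solve_poisson(C, z, dx, eps_r=78.5, n_iter=2000):
    """Jacobi iteration for eps * psi'' = -rho with psi = 0 at both ends.
    C: list of number-density arrays (1/m^3); z: matching valences."""
    rho = sum(zi * e * Ci for zi, Ci in zip(z, C))
    psi = np.zeros_like(rho)
    for _ in range(n_iter):
        psi[1:-1] = 0.5 * (psi[2:] + psi[:-2]
                           + dx ** 2 * rho[1:-1] / (eps_r * eps0))
    return psi

def pnp_step(C, z, D, psi, dx, dt, T=298.15):
    """One explicit update of Eq. (11) with the flux of Eq. (12),
    using the Einstein mobility b_i = D_i / (kB T).
    Stability requires roughly dt < dx**2 / (2 * max(D))."""
    beta = e / (kB * T)
    for k in range(len(C)):
        J = -D[k] * (np.gradient(C[k], dx)
                     + beta * z[k] * C[k] * np.gradient(psi, dx))
        C[k] = C[k] - dt * np.gradient(J, dx)   # dC/dt = -dJ/dx
    return C
```

Alternating `solve_poisson` and `pnp_step` until the fields stop changing mimics, in the crudest possible way, the self-consistent PNP coupling that commercial multiphysics solvers perform.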
Transport in Nanoporous Membranes, Composite Electrodes, and Nanochannels
Transport in nanopores and channels takes place in many bioelectronic devices. Figure 10(a) shows an example of a nanopore introduced into a single layer of graphene for DNA sequencing: a single-stranded DNA molecule is transported through a nanopore in a graphene monolayer, where the diameter of the nanopore is around 1.5 nm, corresponding to about 35 carbon rings. The strand is transported vertically under the effect of an applied electric potential. The DNA is driven through the pore by the ionic flow (vertical yellow shading), and hence the characteristic changes in the ionic current caused by each type of DNA base may be measured through the graphene nanopore. (67) Figure 10(b) shows catalyst-decorated 3D graphene composite electrodes for a biofuel cell in the form of a nano-honeycomb-like, strongly coupled CoMoO₄-3D graphene hybrid structure, where graphene replicates the perfect 3D porous network of a compressed foam. The width of the graphene network is in the range of 100 to 120 µm, and the network is uniformly covered with CoMoO₄. (68,69)

The transport modes, and hence the dynamics, are quite different from those of bulk transport. In this section, the principles of the transport of species in nanopores and nanochanneled media are introduced. A good way to discuss these principles is to consider a nanochannel with positively charged inner walls, which bridges two domains of a given species with identical concentrations, as shown in Fig. 11(a). When a charged surface is immersed in an electrolytic medium, anions are attracted to its walls and accumulate at the inner walls, forming an electrical bilayer. (70,71) Nanopored graphene located between the charged inner walls can act as the selective membrane for the translocating ions. To mimic the chemical selectivity of biological channels, the size of the synthetic pores must be comparable to the diameter of the diffusing ions; hence, nanopored graphene is the best candidate owing to the minimal thickness of graphene. (70) A critical parameter for transport through nanochannels is the channel width relative to the double layer: it is the DL of the solution, not the diameter of the transport channel, that dictates the thickness of the electrical bilayer. The DL is the distance over which the electrostatic coupling between the charged species in the solution takes place. Consequently, the transport channel in Fig. 11(a) has a radius that is much larger than the DL, and the center of the channel is occupied by both types of charges at the same concentration, as shown in Fig. 11(b), while the peripheral negative charges accumulate at the channel walls. If the radius of the channel is small compared with the DL, then the concentration of negative ions dominates the channel's cross section, as depicted in Fig. 11(c). (70) In such an environment, where changes in species velocity are very sensitive to any change exerted by the forces of the medium, the transfer of momentum between ions and solvent is better explained by the Navier-Stokes equation:

$$\rho \left( \frac{\partial \mathbf{v}}{\partial t} + \mathbf{v} \cdot \nabla \mathbf{v} \right) = -\nabla p + \eta \nabla^2 \mathbf{v} + \mathbf{f}. \tag{16}$$

Fig. 10. (a) Nanopore in a graphene monolayer for DNA sequencing (64) and (b) catalyst-decorated 3D graphene composite electrodes for biofuel cells. (67,68)

Here, the velocity of the diluted species, v, and the species density, ρ, are related to the pressure inside the channel, p, the viscosity of the transport medium, η, and the volume force acting on the fluid, f, in one governing equation.
As can be observed from Eq. (16), three force components appear: the pressure gradient represents the normal forces, the viscosity term embeds the tangential shear forces in the model, and the volume force may contain gravitational, electrostatic, and magnetic contributions. However, in the current scenario, f is the force that the electric field exerts on the ions in the fluid, namely, f = −ρ_Q ∇ψ, where ρ_Q is the total charge density.
In the Navier-Stokes equation given by Eq. (16), because the pressure appears only as a gradient, the absolute magnitude of the pressure has no effect on the transport; only the pressure differences along the DL of the pore affect the transport. Moreover, because the fluids in bioelectronic applications are mostly incompressible, the force acting on the fluid in the absence of an electric field can be represented as

$$\mathbf{f} = \rho \mathbf{g},$$

where g is the acceleration due to gravity. The set of PNP equations, coupled with the Navier-Stokes equation under the appropriate boundary conditions, has been widely used to model ion transport for many examples of solid-state nanochannels and nanopores. (63,72) Recent computational studies coupling both models have enabled the description of complicated systems with very low free-volume structures, such as grafted polyelectrolytes. (63,72) This set of equations, owing to its description of the ion fluxes as a continuum, is appropriate when the radius of the channel is larger than double the DL. (73) However, modelling pores and channels with radii less than twice the DL (which is the case in biological ion channels) requires particle-based methods, such as molecular or Langevin dynamics or quantum chemistry calculations. In fact, coupling the Navier-Stokes equation with the PNP equations has led to accurate predictions of the transport properties of different species in porous structures.
For instance, Fig. 12 shows the calculated and measured I-V curves for graphene membranes in which lithium chloride, caesium chloride, and potassium chloride were transported through nanopores of the membrane. The calculated I-V curves presented in Fig. 12 resulted from a published simulation carried out using Matlab, where the domain boundary was assumed to consist of an inlet, an outlet, and an impermeable lateral boundary. At the inlet boundary, the velocity and concentrations were specified. The outlet boundary was subjected to constant pressure and a free exit condition for the component concentrations. The lateral boundary was impermeable to flow and assumed to be nonreactive. The domain consists of solid grains and pore space occupied entirely by a single liquid phase. The system of equations was discretized in the pore space of the domain using a conservative finite volume method on a Cartesian grid.
As can be seen in Fig. 12, the results calculated using the Navier-Stokes equation with the PNP equations align well with the measured values in the linear and quasi-linear transport regimes. However, as the actual transport performance tends to be nonlinear, the calculations diverge for negative transmembrane potentials. This divergence is attributed to the approximations used to solve the continuum set of equations numerically. Furthermore, the only constant in the computation algorithm that represents the membrane is the diffusivity. The diffusion coefficient is usually calculated on the basis of the Einstein relation, and directional dissimilarity is neglected during the calculations; in other words, the Einstein diffusivity calculation assumes that diffusion is the same in all directions in one plane. Figure 13(a) shows an interface layer of zinc oxide (ZnO) nanorods on graphene; such a combination was reported as a glucose biosensor. (74,75) The ZnO branch is not symmetric, which results in an asymmetric electrostatic field around the atoms. Consequently, the diffusion of glucose into the interface is direction dependent.
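The Einstein relation invoked here is a one-line conversion; the helper below (a hedged sketch with our own naming) makes the isotropy assumption explicit, since it returns a single scalar D regardless of direction:

```python
def einstein_diffusivity(mobility_m2_per_Vs, T=298.15, z=1):
    """Isotropic Einstein relation D = mu * kB * T / (|z| e).

    Returns one scalar diffusivity; any directional (anisotropic)
    dependence, as at the ZnO-nanorod interface discussed above,
    is neglected by construction.
    """
    e = 1.602176634e-19   # elementary charge, C
    kB = 1.380649e-23     # Boltzmann constant, J/K
    return mobility_m2_per_Vs * kB * T / (abs(z) * e)
```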
Furthermore, the electrostatic potentials and mean fields shown in Fig. 13(a), calculated using DFT, show that the electrostatic properties of such a sensor are anisotropic, which is not captured by the Poisson-calculated field. Various attempts have therefore been made recently to couple the Nernst-Planck equations with DFT calculations rather than with the Poisson model. Moreover, the continuum PNP equations, which predict the macroscale transport properties from the microscopic properties of devices, neglect volumetric changes in the structure. Figure 13(b) shows the structures and electrostatic potential maps for a ZnO-decorated graphene biosensor based on ZnO nanowires, nanorods, and nanosphere clusters. As can be seen in the figure, the electrostatic potential depends strongly on the structure, as has been mentioned in our work. (75)(76)(77) We found that the transport time for electrons differs by about 10 orders of magnitude between nanorods and nanosphere clusters. (75) The use of graphene as an interface layer in the ZnO-decorated sensor is the main reason for the sensitivity of the transport properties to volumetric changes in the structures. Detailed information from X-ray absorption spectroscopy (XAS) measurements of the bulk and the surface, on the degree of carbon sp³/sp² hybridization and oxygen functional groups in two different C thin films, is presented in the following. Although the surfaces of the two films are identical with respect to the sp³ to sp² ratio, the differences in the sp² content in the bulk make the C thin films electrically dissimilar. Such phenomena make graphene a promising material for biosensing. (78) Other DFT studies have described the detection of dopamine (DA), (74) uric acid (UA), (74) and ascorbic acid (AA) (74) levels in blood using a AgO-G biosensor. In those studies, the interactive forces between Ag nanoparticles and the analytes (DA, UA, and AA) were investigated through a molecular orbital study in which the positions of the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) in the system were determined. Consequently, the band gap (the difference between the HOMO and the LUMO) and the affinity patterns could be identified. In those studies, DFT calculations were performed at the B3LYP/LANL2DZ level of theory using the Gaussian 09 package to investigate the optical changes caused by the interaction of the Ag nanoparticles with UA, AA, and DA. (74) To perform these calculations, the optimized geometries of the DA, UA, and AA molecules were attached to a nanoarray of Ag atoms representing the substrate. Accordingly, the B3LYP functional and LANL2DZ basis set were employed on the molecular model, and the absorption energies were determined by time-dependent DFT (TD-DFT) calculations. The results are illustrated in Fig. 14, where the lowest-energy optimized geometries of the Ag nanoparticles with DA, UA, and AA are presented. It can be seen that the LUMO is positioned near the dopamine side in the Ag-DA case, while in the case of the Ag-UA adduct, the LUMO is positioned at both sides of the Ag and UA branches. In contrast, the LUMO is located at the Ag side with a slight inclination towards the AA group in the case of the Ag-AA adduct. Finally, the HOMO is mainly located on the Ag side. (74) The HOMO-LUMO energy separation has been used as a simple indicator of kinetic stability and may indicate the affinity pattern of the molecule.
The HOMO-LUMO energy gaps (3.8214 eV for Ag-DA, 3.9220 eV for Ag-UA, and 3.9293 eV for Ag-AA) show a comparatively high affinity of Ag for DA and lower affinities for UA and AA, in agreement with the fact that it is energetically unfavourable to add electrons to a high-lying LUMO or to extract electrons from a low-lying HOMO. The simulated spectra of the Ag nanoparticles with DA, UA, and AA obtained from the TD-DFT calculations concur with the experimental absorption spectra. Hence, it is clear that the changes in the surface plasmon resonance (SPR) responses of Ag nanoparticle clusters with DA, UA, and AA upon binding of these species lead to the formation of an internal charge transfer (ICT) complex. The DFT studies gave further evidence for the better detection of DA, owing to its stronger binding with the Ag nanoparticles compared with the other analytes, as confirmed by the energy gap. A similar study was conducted for the detection of DA using a magnetite-graphene biosensor. (79) It was found that a DA molecule can be strongly physisorbed on the G surface in various orientations. The orientation of the DA molecule on the G surface may affect the active sites for interactions. A Stone-Wales defect on graphene can affect the interactions between the DA molecule in specific orientations and the graphene surface; in general, however, the effect of defects on the interactions between DA and defected graphene (DG) was not obvious. (79) For the DA-GO systems, the DA molecule may be chemisorbed on the GO surface in specific orientations. For the DA-Fe-G systems, the interactions between the DA molecule and the Fe-G surface can be improved by doping the surface with an Fe atom bound to DA in their configuration. This indicates that graphene-based materials with specific structures or chemical groups may be designed to satisfy the demands of different applications. (75) The interactions between DA and graphene can be adjusted by chemical or physical methods. This research is very meaningful for designing specific graphene-based materials for DA sensing and DA/graphene composites.
A similar study found that oxygen-functionalized structures in graphene oxide lead to a small band gap that makes the graphene more sensitive to materials in its environment, so that it can be placed on top of the Au layer in a typical surface plasmon resonance sensor to improve that sensor's sensitivity. In this case, variations in carrier density affect the graphene-based SPR sensor response. In addition, in the presence of different organic molecules, the refractive index shift was determined, and the molecular properties of each sensing material, such as electronegativity, molecular mass, and effective group number, were considered. On the basis of these parameter sets, the analysis was performed simultaneously and the related coefficients were reported. A semiempirical model for the interpretation of changes in the SPR curve has also been suggested and tested for some organic molecules. (80) According to the computed adsorption energies, E_ads, the interactions between a DA molecule and an Fe-G surface are stronger than those of the other three graphene systems, i.e., pristine, defected, and GO, as shown in Fig. 15. The data indicate that Fe can help strengthen the interactions between DA and graphene sheets. To further study the interactions between a DA molecule and the Fe-G surface, the electron density difference was explored. An obvious electron transfer can be seen from Fe in the Fe-G sheet to the DA molecule. The number of electrons transferred is closely related to the interaction between the sheet and the molecule. When a DA molecule lies on the Fe-G surface, it has the largest interactions with the Fe-G surface and also exhibits the most apparent electron transfer (Fig. 15). Thus, the results of the E_ads determination correspond to those of the electron density difference. (79)
Conclusions
In this short review, the main types of bioelectronic devices found in the literature were classified and investigated from the viewpoint of their design. The analysis of the architecture and design of graphene-based bioelectronic devices was considered and discussed using a computational analysis of the charge transport properties. The analysis was carried out on both the macroscale and nanoscale. The design of devices was investigated on the basis of approaches using FEA, DFT, and coupled multiphysics models. The impact of design on the performance of the bioelectronic device was also discussed. Moreover, a study of transport phenomena with respect to various structures of bioelectronic devices was conducted to clarify the charge transport mechanisms for several design categories. Finally, the published results of computed and measured charge transport properties of graphene-based bioelectronic devices were compared.
"Engineering",
"Materials Science",
"Physics"
] |
Recognition Method of Corn and Rice Crop Growth State Based on Computer Image Processing Technology
Introduction
Automatic identification of the crop growth period is one of the core parts of precision agriculture support technology [1]. Traditionally, the crop growth period is identified and recorded through manual observation, which is time-consuming and laborious, has low efficiency and strong human subjectivity, suffers from inconsistent observation standards, and makes measurement accuracy difficult to ensure [2]. With the mature application of sensor detection technology and remote network transmission technology, crop observation is gradually transitioning from manual observation to automatic observation. However, there are still problems with efficiency, observation accuracy, and observation frequency [3]. At present, computer vision technology is mainly used for the classification and recognition of the crop growth period [4]. Because taking crop images in the field requires fixed shooting equipment and shooting at the same distance, it places high requirements on lighting and shooting angle, and the recognition effect is poor [5]. These problems have been observed by many researchers, and relevant scholars have conducted a great deal of research.
The authors of [6] propose a crop hyperspectral remote sensing recognition method based on the random forest method.
The random forest method is used to analyze the reflection spectra of 8 typical crops, extract and classify the characteristic bands, and compare the recognition effects of different methods. The results show that the random forest method does not need to preprocess the reflection spectrum but directly processes the full-band reflection spectrum data. It not only selects the characteristic bands that distinguish different crops but also uses the selected bands to classify crops. While demonstrating the advantages of hyperspectral remote sensing in identifying crops, it also provides a reference for the remote sensing fine classification of large-area crops. However, owing to the influence of illumination, this method suffers from poor recognition performance, and some details of the image are blurred. Researchers have also proposed a fast wheat identification method based on GF-2 data, using GF-2 4-m multispectral remote sensing images as the data source and supervised classification methods [7] (including support vector machines, artificial neural networks, and maximum likelihood) for the rapid extraction and precision analysis of the spatial distribution information of wheat planting.
The results show that the recognition results of this method have high accuracy and can provide relevant data for the study of crop growth characteristics, but the method suffers from high segmentation errors in crop growth state images.
In [8], the authors proposed a crop canopy recognition model based on thermal infrared image processing technology. Firstly, using the adaptive characteristics of a five-layer linear normalized fuzzy neural network, a Gaussian membership function is selected to automatically calculate the reasoning rules for canopy visible-light image recognition and to effectively segment the canopy region in visible-light images. Three segmentation indexes and entropy were analyzed to quantitatively evaluate the canopy segmentation quality of visible images. Taking the canopy effective area of the obtained visible images as the reference image, an affine transformation algorithm is used to adjust the optimal image transformation factors, such as translation, rotation, and scaling, and to register the original thermal infrared image. A canopy thermal infrared image recognition method based on affine transformation is thus proposed. Finally, the mutual information of entropy is used as the supervision index to evaluate the recognition method for crop canopy thermal infrared images. The results show that this method can reflect the physiological and ecological information characteristics of crops through thermal infrared images and has practical utility. However, the long time needed for crop feature extraction affects the real-time acquisition of recognition results.
Against the background of automatic crop observation requirements, this paper takes corn and rice as research subjects and uses image processing technology to effectively identify the growth state of corn and rice. In the rapidly developing field of computer vision, image processing technology is widely used in various target recognition settings. Morphological features of the target are extracted by means of image binarization and segmentation, and then feature representation is carried out; finally, target recognition is completed, so the approach is well suited to this area of research. The results obtained with the proposed technology are promising.
The next section describes the proposed work, followed by the results obtained from it. Finally, the paper summarizes our research work.
Image Preprocessing of the Growing States of Corn and Rice Crops
Image processing technology is very helpful for extracting important or meaningful features of an image. This technology takes any image as input and gives output in terms of a number of features or specifications according to the user's requirements.
Crop Image Collection.
Before identifying the growth state of corn and rice crops, the target image needs to be collected. This paper mainly uses an image collection platform built on a CMOS image sensor [9] to carry out this operation. The structure of the CMOS image sensor is shown in Figure 1.
The CMOS image sensor is a typical solid-state imaging sensor and an important component for realizing image acquisition. It is connected with the embedded platform through a CMOS interface and controlled by the embedded platform to obtain crop images [10]. The function of the embedded platform is to collect crop images regularly; to fuse, quality-judge, and compress the images; and then to transmit the processed image data to the data center through the 4G network after receiving the image data acquisition instruction from the crop image information acquisition management system. The crop image acquisition platform based on the CMOS image sensor is responsible for image acquisition node management, the issuing of image data acquisition instructions and the receiving of data, and image processing. It also provides users with services such as image retrieval, crop growth analysis, and pest and disease analysis [11]. For crop image processing, there must be identifiable differences among the images; for that purpose, the growth state identification technique is used. Users can access the crop image acquisition platform through a client (desktop computer, mobile phone, PDA, etc.) to complete the management of image acquisition nodes, image retrieval, crop growth analysis, disease and insect pest analysis, weed analysis, and other applications.
Crop Image Preprocessing.
Image preprocessing is preprocessing relative to image recognition: it uses a series of specific operations to "transform" images according to specific goals. No matter what kind of device is used, the collected images are often unsatisfactory.
The captured images may be too blurry, the outline of the object may be too sharp, and the image may be distorted or deformed. Processing toward a specific target can make the image clearer and can also extract specific information from the image. The image preprocessing process used in this study is shown in Figure 2.
According to Figure 2, image binarization is the key link of image processing. In order to achieve accurate image segmentation, image binarization processing is required [12]. After grayscale conversion, the image input to the computer is a grayscale image. To extract the shape characteristics of crops in images, the grayscale image is often converted into a binary image, and the target information is obtained from the binary image. Compared with grayscale images, binary images carry greatly reduced information, allow faster image preprocessing, have lower cost, and have higher practical value [13].
The key to image binarization is to select the threshold correctly. The processing takes all pixels whose gray value falls within the threshold range as the object and takes the remaining pixels as the background. Through the binarization process, crops are extracted from a complex image background.
The transformation function expression for image binarization is shown in the following equation:

$$g(x, y) = \begin{cases} 1, & f(x, y) \geq K \\ 0, & f(x, y) < K \end{cases} \tag{1}$$

In the above formula, K represents the threshold value during binarization processing, f(x, y) is the gray value of the pixel at (x, y), and g(x, y) is the binarized output.
The key to image binarization is the selection of the threshold. By selecting an appropriate threshold for image binarization, the features of objects in the image can be highlighted, which facilitates the extraction of feature parameters while retaining as much useful information as possible.
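A minimal Python sketch of equation (1) follows; the Otsu routine shown with it is one common automatic way to choose K, included as an illustration rather than as this paper's threshold-selection method:

```python
import numpy as np

def binarize(gray, K):
    """Equation (1): pixels with gray value >= K become object (1), rest 0."""
    return (gray >= K).astype(np.uint8)

def otsu_threshold(gray):
    """Illustrative automatic choice of K: maximize between-class variance
    over a 256-bin histogram (Otsu's method)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_k, best_var = 0, 0.0
    for k in range(1, 256):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(k) * p[:k]).sum() / w0
        mu1 = (np.arange(k, 256) * p[k:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_k, best_var = k, var
    return best_k
```

In practice `binarize(gray, otsu_threshold(gray))` gives a reasonable starting segmentation of the crop against the field background, which the threshold can then be tuned around.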
Crop Image Segmentation.
Image segmentation is the key to image processing and also the bottleneck restricting the development of computer image processing technology.
Therefore, based on the results of image binarization, the crop images are segmented [14]. The original crop image is represented by a two-dimensional grid with f-dimensional vectors, where each grid point represents a pixel. When f = 1, it is a grayscale image; when f = 3, it is a color image; when f > 3, it is a multispectral image. The space where the grid is located is called the spatial domain, and the space where the grayscale or spectral information is located is called the chromaticity domain. Considering the spatial information and color information of the image jointly, an (f + 2)-dimensional vector U is formed, such that each image pixel can be represented by a vector U = (u_i, u_j), where u_i represents the position coordinates of the pixel and u_j represents the color feature of the pixel. Let F_k(u) denote the mean-shift iterative formula on the (f + 2)-dimensional space; its expression is given by the following equation:

$$F_k(u) = \frac{\sum_{i=1}^{N} a_{ij} P(u_i - d)\, u_i}{\sum_{i=1}^{N} a_{ij} P(u_i - d)} - d, \tag{2}$$

where d represents the pixel value of the smoothed pixel point; u_i (i = 1, 2, …, N) represents the value of a pixel point in the square area centered on the smoothed point with side length l, and all the pixels in this square area are called sampling points; P(x) represents the kernel, the two commonly used kernel functions being the unit kernel function and the Gaussian kernel function; and a_ij represents the weight value of each sampling point. The main steps of crop image segmentation are as follows:

Step 1. Give the initial conditions, including an initial pixel point d (generally set as the first pixel at the upper left of the image), the kernel function P(x), the weight a_ij of each sampling point, and the allowable error e_r;

Step 2. Calculate the F_k(u) value of pixel d according to equation (2);

Step 3. If |F_k(u)| > e_r, update d by adding F_k(u) and return to Step 2; if |F_k(u)| < e_r, end the iteration for this pixel and select the next pixel in sequence;

Step 4. Repeat Steps 2 and 3 until the entire image has been traversed, and obtain the crop image segmentation results.
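A minimal sketch of Steps 1-4 follows. For brevity it runs the mean-shift iteration on gray values only (the chromaticity domain) with a Gaussian kernel P and uniform weights a_ij, rather than on the full (f + 2)-dimensional joint space; all names are illustrative:

```python
import numpy as np

def mean_shift_filter(image, l=7, bandwidth=10.0, e_r=1e-3, max_iter=50):
    """Mean-shift smoothing of a grayscale image (range-only simplification).

    For each pixel, iterate the shift F_k(u) of equation (2) over the
    square window of side l until |F_k(u)| < e_r (Steps 1-4)."""
    h, w = image.shape
    out = image.astype(float).copy()
    r = l // 2
    for y in range(h):
        for x in range(w):
            d = out[y, x]                      # Step 1: initial value
            for _ in range(max_iter):
                ys = slice(max(0, y - r), min(h, y + r + 1))
                xs = slice(max(0, x - r), min(w, x + r + 1))
                win = image[ys, xs].astype(float)     # sampling points
                wgt = np.exp(-((win - d) ** 2) / (2 * bandwidth ** 2))  # P
                shift = (wgt * win).sum() / wgt.sum() - d  # Step 2: F_k(u)
                d += shift                                # Step 3: update
                if abs(shift) < e_r:
                    break
            out[y, x] = d                      # Step 4: next pixel
    return out
```

Pixels that converge to the same mode can then be grouped to give the segmented regions; production implementations vectorize this double loop, which is written out here only for clarity.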
Identification of the Growth State of Corn and Rice Crops
Based on the crop image preprocessing results in Section 2, a CNN deep network is used to extract the image features of corn and rice crops and to classify the images according to the feature extraction results. This method recognizes the canopy height and other information of the crops by combining the feature extraction results and classification results. The CNN is an important class of intelligent methods for deep image processing.
Feature Extraction of Corn and Rice Crops' Growth Period.
To use a CNN to extract features from corn and rice crop images, the dataset must have sufficient records. When the dataset is too small, the network cannot be fully trained and the advantages of the CNN cannot be fully exploited [15]. This paper first uses data enhancement technology to expand the corn and rice crop images. Owing to the local connection and weight sharing characteristics of the CNN, it has good distortion tolerance; therefore, data enhancement technology is used to expand the dataset. When the sample size is large enough, the loss of feature information in the sample data can be avoided and the effective features of the corn and rice crop images can be extracted as fully as possible [16].
In traditional CNNs, saturated nonlinear functions, such as the sigmoid function and the tanh function, are usually used as excitation functions. Such nonlinear activation functions suffer from the vanishing gradient problem, which imposes certain limitations. Unsaturated nonlinear functions, such as the ReLU function f(x) = max(0, x), are often used as excitation functions in current CNN structures. Compared with saturated nonlinear excitation functions, unsaturated nonlinear functions do not suffer from gradient disappearance and can reduce the overfitting phenomenon. In this paper, the ReLU function is used as the activation function of the convolutional network. The sigmoid function expression is shown in the following equation:

$$\lambda = T(x) = \frac{1}{1 + e^{-x}}, \tag{3}$$

where T(x) represents the saturated activation function and λ represents the output of the sigmoid function. The unique double feature extraction structure in the CNN makes the network more robust, to a certain extent, to the translation, scaling, and rotation of the corn and rice crop image sample data [17]. The convolution kernel size of the fully connected layer is the same as the output of the last pooling layer, which guarantees the output of a one-dimensional vector.
The network structure parameters used are shown in Table 1.
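Because Table 1 is not reproduced here, the following PyTorch sketch only illustrates the kind of small convolutional network with ReLU activations that the text describes; the layer sizes, input resolution, and class count are our assumptions, not the parameters of Table 1:

```python
import torch
import torch.nn as nn

class CropGrowthCNN(nn.Module):
    """Illustrative CNN for 4-class growth-period classification
    (seedling, jointing, tasseling, mature) of 64x64 RGB crop images."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 64 -> conv5 -> 60 -> pool -> 30 -> conv5 -> 26 -> pool -> 13
        self.classifier = nn.Linear(32 * 13 * 13, n_classes)

    def forward(self, x):
        x = self.features(x)          # ReLU feature extraction
        return self.classifier(x.flatten(1))  # one-dimensional vector out

model = CropGrowthCNN()
logits = model(torch.randn(1, 3, 64, 64))  # shape: (1, 4)
```

The ReLU layers here play exactly the role discussed above: they avoid the gradient saturation of sigmoid/tanh while keeping the network cheap to train.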
On the basis of the above CNN operation, the feature extraction algorithm for corn and rice crop images is improved: with the aid of the SURF algorithm [18], the geometric structure features of corn and rice crop images are enhanced along the gradient direction by a factor of C. The geometric feature vectors lie in the overlapping area, and the scale fractal equation of the local geometric structure of the corn and rice crop images is constructed using the following equation: where I(t) represents the image detail information function; q_i and q_j both represent image structure pixels; and C′ represents the image expansion coefficient.
Calculate the Harris corners of the corn and rice crop images, reconstruct the local area of the images according to the characteristics of the corn and rice crop images in the gray pixel area using the statistical analysis method [19], and express the image variable-scale intuitionistic fuzzy set according to the contour information of the images as given by the following equation: where a and b, respectively, represent the invariant moment feature quantities of points in different image regions and η_i represents the ridge contrast of image feature regions. Using the Harris corner detection method [20], the Harris corner information distribution of the corn and rice crop images is obtained as given by the following equation: where α_i represents the high-frequency part of the image, α_j represents the scale factor of the Harris corner detection, and ϖ represents the normalization factor. The manual labeling method is used to match all the sample images in blocks, and the continuous wavelet transform method [21] is used to perform the time-frequency transformation of the feature points. The wavelet transform decomposes a continuous-time signal into features at different scales. The continuous wavelet transform is calculated by the following equation: where χ^{ab}_{i,j} represents the wavelet transform coefficient. The image is rotated and scaled in three-dimensional space, and the geometric dispersion of the image is obtained as shown in the following equation: where u_j(x) represents the transformation function obtained by the projection of the image onto the space. The block wavelet transform method is used for feature-adaptive matching, and the statistical features are implied in the structure and parameters of the convolutional neural network [22]. Let a = ν₁ and b = ν₂, and rewrite formula (7) as given in the following equation: Here, u_p(ν₁) and u_p(ν₂) both represent the image statistical feature constructor.
Combined with the template matching method, the elastic template for image feature extraction is obtained as shown in equation (11), in which F₁ and F₂ both represent image gradient information.
Feature analysis and contour region feature extraction are carried out in the block regions of the corn and rice crop images, and the multilayer wavelet decomposition results of the image features are obtained as in the following equation: Here, K_n represents the edge information of the image; K_a represents the image intensity information; and ω_a and ω_b represent the valley value and the peak value of the image, respectively.
The image is reorganized according to the edge, peak, valley, and intensity information of the image, and the output result of the image features is obtained as shown in the following equation: where x_k, y_k, and z_k represent image local features and A_i and B_j both represent image feature vectors. The above feature extraction results are used as identification parameters to classify and identify the growth period of corn and rice crops.
Classification of Corn and Rice Crops in the Growth Period.
The growth period of corn and rice crops is divided into four categories: the seedling stage, jointing stage, tasseling stage, and mature stage. Based on the binary classification characteristics of the SVM, it is combined with the idea of a decision tree to construct a binary tree structure. A binary tree organizes the image set according to features with a specific rule. First, all samples are divided into two categories, with several easily confused categories grouped into one; then each of the two subcategories is further divided into two lower-level subcategories. Iterating this process yields a binary classification tree.
There are two main structures for such a binary tree: in the first, at each inner node, one class is separated from the remaining classes by a partition surface; in the second, an inner node separates several categories from several other categories. The binary tree structure generation algorithm used in this paper performs clustering before classification.
This method is based on the class distance method in clustering. An advantage of binary tree classification is that there is no inseparable part, and it is not necessary to traverse all the classifiers during classification.
Given the sample set Q = {q₁, q₂, …, q_n}, the minimum square error of dividing the sample set Q into two subsets Q₁ and Q₂ according to the feature extraction result using the K-means algorithm is given by the following equation:

$$E = \sum_{k=1}^{2} \sum_{q \in Q_k} \| q - E_k \|^2, \tag{14}$$

where E₁ and E₂ represent the mean vectors of Q₁ and Q₂. Formula (14) reflects, to a certain extent, the degree of closeness between different samples in the sample set: the smaller the value of E, the higher the similarity of the samples within each subset. According to this principle, the growth period classification of corn and rice crops is realized.
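The cluster-then-classify construction described above can be sketched as follows: class means are split into two groups with K-means (the criterion of formula (14)), an SVM separates the two groups at each inner node, and the recursion bottoms out at single classes. The helper names, the RBF kernel, and the scikit-learn dependency are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_binary_tree(X, y):
    """Recursively build the binary classification tree of SVM nodes."""
    classes = np.unique(y)
    if len(classes) == 1:
        return {"label": classes[0]}            # leaf: a single growth stage
    # Split the class means into two groups (formula (14) criterion).
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    groups = KMeans(n_clusters=2, n_init=10).fit_predict(means)
    left_classes = classes[groups == 0]
    mask = np.isin(y, left_classes)             # binary partition of samples
    svm = SVC(kernel="rbf").fit(X, mask.astype(int))
    return {"svm": svm,
            "left": build_binary_tree(X[mask], y[mask]),
            "right": build_binary_tree(X[~mask], y[~mask])}

def predict(node, x):
    """Descend the tree; only one SVM per level is evaluated, never all."""
    if "label" in node:
        return node["label"]
    go_left = node["svm"].predict(x.reshape(1, -1))[0] == 1
    return predict(node["left"] if go_left else node["right"], x)
```

Because each sample follows a single root-to-leaf path, classification touches at most log₂ of the classifiers, which is the traversal advantage noted above.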
Recognition Algorithm of Corn and Rice Crops' Growth State
Based on the feature extraction of corn and rice crops and the classification of the growth period, the identification of the growth state of corn and rice crops is carried out. Canopy height measurement of corn and rice crops is a difficult point in crop growth identification. In the traditional identification method, technicians go into the field, measure the height values in several areas, and then take the average value.
The workload is large, and the identification results are strongly affected by subjective factors.
Therefore, this paper studies the use of a fuzzy mathematics [23] model to replace the traditional method in order to improve the recognition effect.
Firstly, the growth images of corn and rice crops are collected based on laser technology, and the laser rangefinder is installed on the three-axis PTZ to dynamically scan the observation area and obtain the canopy position point set.
The canopy height is obtained through internal angle error correction, triangular geometric conversion, and data fitting.
Set the horizontal scanning range as r (∠SOS′) and the vertical scanning range as s (∠SOS″). Figure 3 is a schematic diagram of the rotation path of the laser rangefinder. When the altimetry device is powered on, the three-axis gimbal automatically completes initialization, the laser emission point rotates to point O, the theoretical value of the vertical angle r at this position is 0°, the three-axis gimbal is at point O in the horizontal position, and the theoretical value of the horizontal angle s is 0°.
Let the length matrix of the scanning point matrix be L. The height measuring device starts measuring from the set initial value and scans the stationary points in the vertical direction according to the set step size. After reaching the set vertical end point, it records the length matrix L_i = (l_{i1}, l_{i2}, …, l_{im}); at the same time, it translates by one horizontal step in the horizontal direction and then scans back to the initial vertical angle in the vertical direction, recording the length scan matrix L_j = (l_{j1}, l_{j2}, …, l_{jm}), until the altimeter scans to the set end position. The measured height is linearly related to the scan point length, and its correlation coefficient matrix is ϑ. The value of ϑ is related to the rotation radius R of the laser rangefinder and the pitch angle ρ. In the device, r is a fixed value, and ϑ = (ϑ₁, ϑ₂, …, ϑ_m). The height matrix of the scanning points is h_ij, where h_ij is the height from the laser emission port to the crop plane after the calibration of the three-axis gimbal.

Owing to differences in weather, sunlight intensity, etc., the collected images of corn and rice crops differ significantly. Therefore, it is particularly important to reduce the impact of environmental interference on the recognition results. Using a fuzzy algorithm to process corn and rice crop images can not only save computing time but also make the recognition results more accurate [24,25]. The specific steps are as follows, with an illustrative sketch given after this list:

(1) Select feature factors to construct the feature set. The feature set is constructed from the feature vectors extracted in Section 3.1 and includes complexity, aspect ratio, mean contrast, compactness, the ratio of the number of brightest pixels to the total number of target pixels, and so on.
Due to differences in weather, sunlight intensity, etc., the collected images of corn and rice crops differ significantly, so it is particularly important to reduce the impact of environmental interference on the recognition results. Using a fuzzy algorithm to process corn and rice crop images not only saves computing time but also makes the recognition results more accurate [24,25]. The specific steps are as follows (a short sketch follows the list): (1) Select feature factors to construct the feature set. The feature set is constructed from the feature vector extracted in Section 3.1 and includes complexity, aspect ratio, mean contrast, compactness, the ratio of the number of brightest pixels to the total number of target pixels, and so on. (2) Establish membership functions and construct the fuzzy sets to be identified. The key to constructing a fuzzy set lies in determining the membership function, a function of the characteristic quantity that can be expressed as ℓ(x); the fuzzy set to be identified collects the membership degrees of the feature factors. (3) Use the principle of closeness to make an attribution judgment on the identified objects and complete the target identification.
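The sketch below illustrates steps (1)-(3) with a triangular membership function and a simple closeness score; the feature values, membership parameters, and class prototypes are all hypothetical:

```python
# Illustrative fuzzy attribution: feature set -> membership degrees ->
# closeness-based class assignment. All parameters are made up.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function ell(x) with support [a, c], peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

feature = np.array([0.42, 0.61, 0.30])  # e.g. complexity, aspect ratio, contrast
classes = {"type I": (0.2, 0.4, 0.6), "type II": (0.4, 0.6, 0.8)}

# Closeness: 1 minus the mean absolute difference of membership vectors
target = tri(feature, 0.2, 0.5, 0.8)
best = max(classes,
           key=lambda k: 1 - np.mean(np.abs(target - tri(feature, *classes[k]))))
print("attributed to:", best)
```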
The proposed work is explained in Sections 2 and 3, and the next section provides the experimental analysis.
Experimental Analysis
In order to verify the validity of the proposed method for identifying the growth state of corn and rice crops based on computer image processing technology, simulation experiments were carried out. In the experiments, the crop hyperspectral remote sensing identification method based on the random forest method and the wheat fast identification method based on Gaofen-2 (GF-2) data were compared with the proposed method. The error of the segmentation results, the feature extraction time, and the recognition effect on corn and rice crop images are used as experimental indicators. Among them, the error of the image segmentation result affects the subsequent recognition effect; therefore, the lower its value, the better the recognition results.
Experimental Samples.
The experimental samples are representative real images selected from the Heilongjiang Bayi Agricultural University, Daqing. The light intensity in the region is high, and the variation of illumination across photos is obvious. The images are selected as follows: the first two types are real images of corn in the early growth stage, between the three-leaf stage and the seven-leaf stage, called type I and type II images, respectively; the third type, called type III, covers the late growth stage after the tasseling stage and before the mature stage. According to lighting conditions, three groups of images are selected: insufficient lighting, normal lighting, and bright lighting (see Figure 4).
It can be seen from the data in Table 2 that the proposed method, the recognition method based on the random forest method, and the recognition method based on Gaofen-2 data produce different segmentation errors on the sample images. The segmentation error of the proposed method is always lower than that of the other two and shows a stable trend, while the segmentation errors of the random forest-based and Gaofen-2-based methods are consistently higher. This verifies that the proposed method can accurately segment corn and rice crop images and improve the image processing effect; it is also helpful for improving the recognition of crop growth state.
Feature Extraction Time Analysis.
Comparing the extraction time of corn and rice crop growth state features of different methods, the results are shown in Figure 5.
According to Figure 5, in multiple iterative tests, the feature extraction time of the proposed method is lower than that of the comparison methods. Its maximum feature extraction time is only 1.4 s, while the maximum feature extraction times of the random forest-based method and the Gaofen-2-based method are 4.9 s and 3.8 s, respectively. The comparison shows that the proposed method has obvious advantages. This is because the method classifies the growth period of corn and rice crops before recognizing the growth state, so its feature extraction efficiency is also high.
Recognition Effect Analysis.
In order to more intuitively show the recognition effect of the proposed method, the recognition effect of the three methods on the sample image is further tested and the results are shown in Figure 6.
By analyzing Figure 6, it can be seen that when the three methods are used to recognize the characteristics of the corn images, the proposed method eliminates redundant information and produces no blurred boundaries. However, many parts of the recognition result of the random forest-based method are inconsistent with the actual image, which increases the complexity of follow-up research, and the recognition result of the Gaofen-2-based method suffers from local ambiguity, which also affects the follow-up processing. Therefore, the proposed method has a better recognition effect than methods such as CNN, decision tree, and fuzzy-based approaches, and can obtain clear and complete feature information. The proposed approach could be very beneficial for farmers in the field of agriculture for analyzing crop growth with automated methods.
Limitations.
The proposed method tries to overcome many problems of the traditional methods. However, its space complexity is high, as it integrates many techniques to help analyze plant growth from the corresponding images.
Conclusion
Aiming at the problems of long feature extraction time, large crop image segmentation error, and poor recognition effect in traditional methods, a corn and rice crop growth state recognition method based on computer image processing technology is proposed. The proposed method focuses on obtaining good-quality crops by observing their images at each growth step, which will help in curing diseases at the corresponding state. The experimental results show that the proposed method effectively solves the problems of traditional methods and has the advantages of good recognition effect, high feature extraction efficiency, and high image segmentation accuracy.
The application of this method to agricultural research has practical significance. Different computational techniques, such as CNN, binary tree, image processing, K-means clustering, and fuzzy logic, help achieve efficient results for good-quality crop production, especially for corn and rice plants.
4.3. Discussion. The proposed system focuses on segmentation error analysis, feature extraction time analysis, and recognition effect analysis. Traditional methods lack deep observation of crop development and disease recognition. Image samples are taken at different phases, under different light, and in different environments, and the proposed algorithm works on all of them, providing an efficient route to good-quality crop production. Random forest methods analyze the different reflections of different spectra. Rapid analysis of crop images can be done with different artificial intelligence techniques, which are used in the proposed method and compared against the identification method based on the random forest method and the recognition method based on Gaofen-2 data.

Figure 3: Schematic diagram of the rotation path of the laser rangefinder.
Figure 4: Images of experimental samples under different lighting conditions.
Figure 5: Feature extraction time of different methods.
Figure 6: Comparison of feature recognition effects.
Table 1: Convolutional neural network structure parameters.
Table 2: Segmentation result errors of different methods. | 6,645.8 | 2022-06-08T00:00:00.000 | [
"Computer Science",
"Agricultural and Food Sciences"
] |
Modeling of anisotropic and asymmetric behaviour of magnesium alloys at elevated temperature coupled with ductile damage
Poor formability of magnesium alloys at room temperature is due to their Hexagonal Closed Packed (HCP) crystal structure. These materials also have a pronounced Strength Differential (SD) effect. In the present work, an improved constitutive model of thermo-elasto-viscoplasticity with mixed nonlinear isotropic and kinematic hardening strongly coupled with isotropic ductile damage is developed. The induced anisotropy as well as the tension-compression asymmetry are carefully considered, including their interaction with thermal effects. The numerical implementation of the developed model into ABAQUS/Explicit FE is made through the user subroutine VUMAT. The proposed model is used to simulate material responses of AZ31 magnesium alloy during sheet metal forming processes at elevated temperature.
Introduction
Due to the high strength-to-weight ratio of Hexagonal Closed Packed (HCP) structured metals, which is desired in light-weight design, these materials have become an attractive research focus; meanwhile, modeling their plastic behavior remains a highly challenging task. Plastic deformation of HCP materials can be divided mainly into slip and twinning modes, whose activation depends strongly on the critical stress and loading directions. Twinning is a directional deformation mechanism, which implies a pronounced strength differential (SD) effect: the compressive strengths are much lower than the tensile strengths [1][2][3][4][5]. Experimental results have shown that the mechanical response of magnesium alloys exhibits strong anisotropy and tension-compression asymmetry at room temperature [1]. Modeling of the plastic deformation with various phenomenological yield functions has been widely applied to sheet metal forming processes, and various isotropic and anisotropic [2] yield functions have been developed by introducing more coefficients to describe the plastic behavior more accurately. To capture the SD effect in an anisotropic model, Cazacu [3][4][5] developed two yield functions: (1) by introducing the third stress invariant based on Drucker's criterion; (2) by introducing a new parameter to control the asymmetry in tension and compression and extending to anisotropy using a linear transformation of the stress, based on Balart's criterion. These two yield functions have been extended by others to describe anisotropic hardening [6] and pressure-sensitive metals [7]. Meanwhile, HCP metals (e.g., magnesium alloys) often have poor formability at room temperature due to the limited number of active slip systems in their HCP crystal structure. Accordingly, hot sheet metal forming technology is used to increase the formability of magnesium alloys; as a result, the complexity of the thermomechanical behavior increases significantly at high temperature. Experiments show that temperature affects the anisotropic response and tension-compression asymmetry of HCP materials [8,9]. Besides the thermal and SD effects, the induced anisotropy due to the evolution of the texture significantly affects the hardening evolution and the yield surface; few macroscopic approaches have been published to capture this phenomenon, see [10][11][12][13].
In order to describe more accurately the hot sheet metal forming processes of the magnesium alloys, we propose a thermo-elasto-viscoplasticity model with mixed nonlinear isotropic and kinematic hardening coupled with isotropic ductile damage. Distortion of the subsequent yield surface was included for modeling the distortion-induced anisotropy. The formulation of the model is performed in the framework of thermodynamics of irreversible processes with state variables [14] using generalized non-associative theory under finite strains [15]. The coupling with isotropic ductile damage is made in the framework of continuum damage mechanics with the effective variables based on the total energy equivalence assumption.
State potential and associated state relations
In order to ensure the thermodynamic admissibility of the formulation, the framework of thermodynamics of irreversible processes is adopted. The Helmholtz free energy, a convex and closed function of the strain-like state variables in the effective strain space, is taken as the state potential. Assuming that the plastic strain and hardening do not affect the elastic properties of the material, the state potential can be decomposed into a thermoelastic part and an inelastic part. Following the thermodynamics of irreversible processes, the stress-like state variables are obtained by differentiation of this potential. Here $\rho$ is the material density, $E(T)$ is the temperature-dependent Young's modulus, $\nu$ is the Poisson ratio, $C(T)$ and $Q(T)$ are the temperature-dependent kinematic and isotropic hardening moduli, respectively, and $\gamma$ is a material parameter governing the damage effect on the isotropic hardening (see [14] for further details).
Dissipation analysis and evolution equations
In this paper, the dissipative phenomena are described by (i) a yield criterion (or yield surface) and (ii) a dissipation potential, both written in the effective stress space. In the yield criterion, the usual stress deviator S is replaced by a 'distorted stress' that carries the induced anisotropy; the thermal softening is written in terms of the melting temperature of the material and a temperature-independent material parameter.
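The extracted text loses the explicit form of the criterion. For orientation only, a Cazacu-Plunkett-Barlat-type (CPB06) asymmetric yield function, written here with the plain principal deviatoric stresses $S_i$ (the model above would use the distorted stress instead), is an assumption of this sketch:

```latex
f = \left(|S_1| - w\,S_1\right)^{a} + \left(|S_2| - w\,S_2\right)^{a}
  + \left(|S_3| - w\,S_3\right)^{a} - \sigma_Y^{a} = 0,
\qquad -1 < w < 1,
```

where w sets the tension-compression asymmetry (the SD effect discussed in the parametric study below) and a is the homogeneity exponent.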
Thermal dissipation analysis
The combination of the first and second principles of thermodynamics supplies the Clausius-Duhem inequality, which leads to the definition of the state relations (equations (3)-(7)) and the residual (or dissipation) inequality defined by equations (32)-(34). It is assumed that the mechanical and thermal dissipations are uncoupled, so the dissipation analysis can be split into a mechanical part and a thermal part:

$$\Phi = \Phi_{mech} + \Phi_{ther} \geq 0.$$

The heat flux vector $\vec{q}$ can be obtained from the Fourier potential using classical linear heat theory, with k the heat conduction coefficient:

$$\vec{q} = -k\,\vec{g}, \qquad \vec{g} = \vec{\nabla}T. \tag{35}$$

The generalized heat equation can be obtained by using this equation in conjunction with the first law of thermodynamics [14]; it is used for solving the thermal problem.
Applications
The developed model has been implemented in the ABAQUS/Explicit finite element code using the VUMAT user material subroutine. The numerical integration algorithm developed in this routine is based on elastic prediction and viscoplastic correction with a radial-return mapping algorithm, under an adiabatic assumption for the thermal coupling. In the following section this routine is applied within a parametric study of the local response using only one element.
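As an illustration of the elastic-prediction/viscoplastic-correction idea, here is a minimal 1D rate-independent von Mises sketch with linear isotropic hardening; the paper's anisotropic, damage-coupled VUMAT is far richer and is not reproduced, and all material constants below are illustrative:

```python
# Minimal 1D elastic-predictor / plastic-corrector (return mapping).
import numpy as np

E, H, sigma_y = 200e3, 1e3, 250.0    # MPa: modulus, hardening, initial yield
sigma, eps_p = 0.0, 0.0

for deps in np.full(20, 1e-4):       # imposed strain increments
    sigma_trial = sigma + E * deps   # elastic prediction
    f = abs(sigma_trial) - (sigma_y + H * eps_p)   # trial yield function
    if f > 0:                        # plastic correction (radial return)
        dgamma = f / (E + H)         # consistency condition
        sigma = sigma_trial - np.sign(sigma_trial) * E * dgamma
        eps_p += dgamma
    else:
        sigma = sigma_trial
print(f"stress = {sigma:.1f} MPa, plastic strain = {eps_p:.4f}")
```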
Parametric study: effect of the asymmetry and distortion
To examine the effect of the asymmetry parameter w of the proposed model without full damage coupling, the material parameters were assumed isotropic, leading to F = G = H = 0.5 and L = M = N = 1.5. The assumed model parameters can be found in Table 1. Figure 1 shows the effect of the asymmetry parameter w. When w is greater than zero, the yield stress in tension is greater than that in compression, and as w increases, the difference between the tensile and compressive yield stresses increases; the same tendency is found when w is negative. The SD effect is therefore correctly captured by the proposed model. In order to examine the distortional parameters, the same conditions as above are used, and the initial yield surface is compared with the subsequent surface after 10% tension pre-straining in the principal stress plane. Figure 2a shows that the first distortional parameter controls the distortional ratio of the yield surface without changing its size, while Figure 2b shows that the second distortional parameter controls the cross size of the yield surface orthogonal to the loading direction. The details of the parameter study can be found in the work of H. Badreddine [12]. The damage effect is studied by comparing the yield surface evolution for the uncoupled and fully coupled cases in uniaxial tension, as shown in Figure 3. The evolution of the Cauchy stress, the kinematic hardening back stress, and the isotropic hardening stress with plastic strain under coupled and uncoupled conditions is shown in Figure 3a: as damage increases toward fracture, all three stresses decrease. For the fully coupled case shown in Figure 3c, the yield surface reaches a maximum at 15% strain, after which its size decreases and its center moves toward the reference origin. When the damage value d = 1, the yield surface reduces to a single point located at the origin.
Application to AZ31 magnesium alloy
In order to validate the proposed model, experimental results were taken from the work developed by Khan et al. [8]. These results concern tests performed on the magnesium alloy AZ31 at two temperatures (25°C and 150°C) and a strain rate of 0.01 s-1 along the rolling direction. Note that the numerically predicted results of Figure 4 are obtained with the model using the material parameters given in Table 2. Initial isotropy was assumed, with F = G = H = 0.5 and L = M = N = 1.5. For the temperature-sensitivity coefficient, the value is 0.9 for Young's modulus, 1.08 for the damage factor S, and 1.03 for the other moduli.
Conclusions
An advanced constitutive model of thermo-elasto-viscoplasticity fully coupled with nonlinear isotropic and kinematic hardening and isotropic damage is developed. The induced anisotropy by distortion of yield surface is considered and the strength differential effect is carefully taken into account to extend the proposed model for the HCP materials in sheet metal forming processes. The capabilities of the proposed model have been investigated through parametric study and validation with some experimental results. Future applications will be made to hot sheet metal forming with higher temperatures and more complex loading paths. | 2,079 | 2017-09-01T00:00:00.000 | [
"Materials Science"
] |
Comparison of IPSA and HIPO inverse planning optimization algorithms for prostate HDR brachytherapy
Publications have reported the benefits of using high‐dose‐rate brachytherapy (HDRB) for the treatment of prostate cancer, since it provides similar biochemical control as other treatments while showing lowest long‐term complications to the organs at risk (OAR). With the inclusion of anatomy‐based inverse planning optimizers, HDRB has the advantage of potentially allowing dose escalation. Among the algorithms used, the Inverse Planning Simulated Annealing (IPSA) optimizer is widely employed since it provides adequate dose coverage, minimizing dose to the OAR, but it is known to generate large dwell times in particular positions of the catheter. As an alternative, the Hybrid Inverse treatment Planning Optimization (HIPO) algorithm was recently implemented in Oncentra Brachytherapy V. 4.3. The aim of this work was to compare, with the aid of radiobiological models, plans obtained with IPSA and HIPO to assess their use in our clinical practice. Thirty patients were calculated with IPSA and HIPO to achieve our department's clinical constraints. To evaluate their performance, dosimetric data were collected: Prostate PTV D90(%),V100(%),V150(%), and V200(%), Urethra D10(%), Rectum D2cc(%), and conformity indices. Additionally tumor control probability (TCP) and normal tissue complication probability (NTCP) were calculated with the BioSuite software. The HIPO optimization was performed firstly with Prostate PTV (HIPOPTV) and then with Urethra as priority 1 (HIPOurethra). Initial optimization constraints were then modified to see the effects on dosimetric parameters, TCPs, and NTCPs. HIPO optimizations could reduce TCPs up to 10%–20% for all PTVs lower than 74 cm3. For the urethra, IPSA and HIPOurethra provided similar NTCPs for the majority of volume sizes, whereas HIPOPTV resulted in large NTCP values. These findings were in agreement with dosimetric values. By increasing the PTV maximum dose constraints for HIPOurethra plans, TCPs were found to be in agreement with IPSA without affecting the urethral NTCPs. PACS numbers: 87.55.‐x, 87.55.de, 87.55.dh, 87.53.Jw
I. INTRODUCTION
Several authors have reported the benefits of using interstitial brachytherapy as an alternative to radical prostatectomy and external beam radiotherapy for the treatment of low and intermediate stage prostate cancer. (1)(2)(3)(4) Results of multicenter studies (5)(6)(7)(8) have shown that brachytherapy delivered as monotherapy or concurrently with external beam radiotherapy yields biochemical control rates similar to other techniques, while showing the lowest rates of long-term complications to the organs at risk (OAR). (9) High-dose-rate (HDR) brachytherapy, performed with remote afterloaders, has also the additional advantage of potentially allowing dose escalation (10) without increasing considerably OAR toxicities or treatment times.
HDR is widely used, due to the recent ability to integrate 3D images into the treatment planning process. These images, which can be obtained either by computed tomography (CT) scans or ultrasound, provide the possibility to perform an accurate treatment plan based on the anatomy of the patient and the position of the implant at the time of treatment.
Additionally, the quality of HDR brachytherapy planning has advanced with the introduction of inverse planning optimizers similar to those used in external beam planning. (11)(12)(13) These algorithms, which are now implemented in commercial treatment planning systems (TPS), generate reproducible treatment plans in a faster way by using clinical constraints set by the users.
Among the optimizers currently available, there has been great interest in the development and use of the Inverse Planning Simulated Annealing optimization algorithm (IPSA), in particular for the treatment of prostate cancer. IPSA is an anatomy-based algorithm which optimizes the source dwell times using a simulated annealing algorithm, based on the work by Kirkpatrick et al. (14) and developed for brachytherapy applications by Lessard and Pouliot. (11) The model is governed entirely by the anatomy of the patient contoured from a CT scan and by a series of surface or volumetric prescribed dose constraints set by the user at the time of planning. IPSA gives an acceptable conformal plan in a matter of seconds by providing the distribution of the dwell times within the catheters. However, it was not initially designed to include a smoothness function to take into account the distribution of a single dwell time with respect to the adjacent ones.
The result of a standard unrestricted IPSA plan is that, in the majority of cases, the dwell times have an inhomogeneous distribution similar to the one shown in Fig. 1 (left) in which there are a number of dominating dwell times in particular positions within the catheter, usually at both ends, leaving the others with very small times or empty. This behavior could potentially lead to localized hot spots and, more importantly, to underdosage of the planning target volume (PTV) and overdosage of the OAR in cases in which there is a displacement of the catheters. (15) Recently the Dwell Time Deviation Constraint (DTDC) parameter has been added to the IPSA optimizer implemented in the Oncentra Brachytherapy (OCB) treatment planning system (TPS) V. 4.3 (Nucletron B.V., Veenendaal, The Netherlands). This option can restrict the dwell time deviation in each catheter so as to control potential hot spots around individual dwell positions; however, its use is new and its effect is still under investigation.
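The DTDC definition is vendor-specific; the sketch below shows one plausible neighbour-deviation metric (an illustration only, not Oncentra's implementation) for flagging dominant dwell times like those in Fig. 1 (left):

```python
# One possible smoothness metric for a catheter's dwell-time vector:
# deviation of each dwell time from the mean of its two neighbours,
# normalized by the total dwell time. Values here are made up.
import numpy as np

dwell = np.array([12.0, 0.3, 0.2, 0.4, 9.5])   # seconds, one catheter

neighbour_mean = np.convolve(dwell, [0.5, 0.0, 0.5], mode="same")
neighbour_mean[0], neighbour_mean[-1] = dwell[1], dwell[-2]  # edge handling
deviation = np.abs(dwell - neighbour_mean) / dwell.sum()
print("per-position deviation:", np.round(deviation, 3))
```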
As an alternative to IPSA brachytherapy, TPS users have started looking into different optimization approaches, among them the Hybrid Inverse treatment Planning Optimization algorithm (HIPO), (13) which was also recently implemented in OCB V. 4.3, to be used for a variety of treatment sites including the prostate. In the present work, a series of patients treated for low-and intermediate-risk prostate cancer was retrospectively replanned with both the IPSA and HIPO algorithms implemented in OCB V. 4.3 for the same initial constraints. The resulting plans were then analyzed in order to evaluate the differences between them and the benefit of their use in clinical routine. Previously, HIPO was evaluated for the particular case of gynecological cancer (16) and in comparison to geometrical and graphical optimization for HDR prostate brachytherapy. (17,18)
A. Clinical plans
Thirty patients treated between 2007 and 2008 were chosen from our institution's clinical database. These patients were all treated consecutively with CT-based plans originally performed with the Plato V. 14.3.2 TPS (Nucletron) using geometrical optimization. The prostate planning target volume (PTV), rectum, and urethra were all contoured at the time of treatment by the same oncologist. The prostate PTVs covered a wide range, between 26 and 121 cm 3 .
According to the protocol followed at the time of treatment all patients were planned to receive 19 Gy in 2 fractions. All plans were exported from Plato and imported into OCB V. 4.3 TPS. This version of Oncentra allows the user to perform both manual and optimized planning on the reconstructed clinically placed catheters.
B. IPSA optimization
Following our current clinical practice, the plans were first optimized with the IPSA algorithm using the initial parameters shown in Table 1.
As mentioned, the initial implementation of IPSA does not include a function which aims at adjusting the smoothness of the dwell time distributions within the catheters. The result is that in most cases after performing an IPSA optimization it is still necessary to adjust the dwell time manually to avoid high-dose gradients ( Fig. 1 (left)).
Since the DTDC parameter is currently under investigation and not clinically used in our institution, in order to perform a clinically relevant comparison, in this analysis the dwell times after an IPSA optimization were not manually modified and the DTDC parameter was disabled in order to have unrestricted optimization.
C. HIPO optimization
Using the clinically placed needles, all patient plans were then calculated using the HIPO algorithm implemented in OCB V. 4.3. HIPO is a CT-based 3D anatomy-based algorithm (13) which uses a combination of deterministic and stochastic models in order to potentially perform -the inverse optimization of needle placement (by a heuristic algorithm) and the inverse optimization of dwell time for a given needle or applicator configuration (quasi-Newton algorithm). In this work, only the second option was used and HIPO plans were obtained by assigning dosimetric constraints similar to those used for the IPSA plans, as shown in Table 2. HIPO requires only the use of volumetric constraints, but allows setting optimization priorities to the target and OAR. In this study, for each patient two plans optimized with different HIPO settings were carried out: the first was done by assigning priority 1 to the Prostate PTV (defined as HIPO PTV in the text) and the second by assigning priority 1 to the urethra (HIPO urethra ) in order to observe the effect of this parameter on the overall plan.
HIPO also allows the users to lock a number of catheters in order to keep their dwell times fixed and perform the optimization of the remaining catheters. This option, which has been widely used in gynecological applications, aims at restricting modulation and eliminating hot spots. In addition, it also offers the option of a modulation restriction (MR) parameter, which allows the user to obtain control of the free modulation of the dwell times in order to have smoother source movement and dwell time distribution within the catheters. However, as shown in previous works, (17) it does not seem to introduce major improvements for prostate HDR cases. In this work, both options were disabled to perform a direct comparison with the IPSA optimizer.
To assess the effect of changing the initial HIPO optimization constraints, ten patients were then recalculated by changing the prostate PTV maximum initial constraint (Max Value(Gy)) from 14.25 Gy to 18 Gy.
D. Analysis
All patient plans performed with IPSA, HIPO PTV , and HIPO urethra were evaluated by comparing dosimetric parameters, radiobiological parameters, and global conformity indexes.
The dosimetric parameters analyzed were the dose-volume histograms (DVH)-based values proposed by GEC/ESTRO-EAU (19) for the Prostate PTV: the dose that covered 90% of the volume D 90 (%), the percentage of the prostate PTV that received at least 100% of the prescribed dose V 100 (%), the volume that received 50% and 100% more than the prescribed dose V 150 (%), V 200 (%), and for the OARs the dose that covered 10% of the urethra D 10 (%) and the dose that covered 2 cm 3 of the rectum D 2cc (%). According to clinical practice, acceptability of the plan was evaluated according to the values provided in Table 3. Statistical significance between different algorithms was proven with a two-sided t-test (α = 0.05).
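A sketch of how such DVH metrics can be read off a cumulative DVH; the dose grid and volume curve below are synthetic placeholders, not patient data:

```python
# Extracting the GEC/ESTRO metrics named above from a cumulative DVH.
import numpy as np

dose = np.linspace(0, 40, 401)             # Gy axis of the cumulative DVH
vol = 100 * np.exp(-(dose / 22) ** 4)      # % volume receiving >= dose (toy)
Rx = 19.0                                  # prescribed dose (Gy)

V100 = np.interp(Rx, dose, vol)            # % volume at 100% of Rx
V150 = np.interp(1.5 * Rx, dose, vol)      # % volume at 150% of Rx
D90 = np.interp(90, vol[::-1], dose[::-1]) # dose covering 90% of the volume
print(f"V100 = {V100:.1f}%  V150 = {V150:.1f}%  D90 = {D90:.1f} Gy")
```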
Dosimetric parameters are obtained by using DVH calculated by the TPS for each structure. The DVH is extremely dependent on the size of the histogram bin and its relative height, and this variability can directly influence the dosimetric parameter calculated. For this reason, comparisons were also made by considering radiobiological indexes for both PTV and OAR, namely tumor control probability (TCP) and normal tissue complication probability (NTCP). These parameters were calculated by employing BioSuite, (20) a software tool specifically designed for radiobiological analysis. TCP values were obtained by using a Poisson model, (21) while NTCP parameters were obtained by using a Lyman-Kutcher-Burman (LKB) model. (22,23) Since there is much discussion on the appropriate parameters to be used in order to model tumor control for prostate cases, (24)(25)(26)(27)(28) different combinations of modeling values were used in this analysis. As previously performed by Uzan and Nahum, (20) the α/β ratio was varied between 5 and 1.5 Gy.
Despite the general belief that the α/β ratio should be low for these types of tumors, the value of 5 Gy was also considered, since several authors have highlighted the possible effect of hypoxia or dose heterogeneity in the assessment of α/β for prostate cancer. (25,28) The other parameters, such as α and its spread, were assigned accordingly. (20) Additionally, the clonogen density (25) was varied between 10 5 and 10 7 . Tumor repopulation was not considered, as these tumors repopulate very slowly. To determine the best set of parameters, an average TCP was considered according to the clinical data collected at our institution. This value was considered to be between 70%-80%, assuming an average of five years of biochemical tumor control for each patient.
In order to model NTCP for the OAR, rectal bleeding was considered the endpoint for the rectum. According to the QUANTEC publication, (29) the parameters were set to be α/β = 3 Gy, n = 0.09 for volume effect, m = 0.13, and TD 50 = 76.9 Gy. These values were confirmed by Liu et al. (29) and were considered suitable for this cohort of patients. For the urethra, NTCPs were estimated by looking at shrinkage, ulceration, and stricture. In contrast to the rectum, parameters to model urethral complications are not readily available and, again, there is not a general consensus on the most appropriate values to be used for prostate HDR brachytherapy. In this work they were set to α/β = 5 Gy, n = 0.085 for volume effect, m = 0.27, and TD 50 = 60 Gy, according to the recent publication by Gloi and Buchanan. (27) These parameters provided an average urethral NTCP of 25% in accordance to our institution's collected clinical data.
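A hedged sketch of the two models named above: a Poisson TCP with linear-quadratic cell kill applied to single-fraction doses, and an LKB NTCP evaluated through the generalized EUD. The DVH bins, α, and clonogen settings are illustrative; only n, m, and TD50 for the urethra follow the text:

```python
# Poisson TCP and LKB NTCP from a differential DVH (toy values).
import numpy as np
from math import erf

def tcp_poisson(d_bins, v_bins, alpha, ab, clonogen_density, volume_cc):
    sf = np.exp(-alpha * d_bins * (1 + d_bins / ab))   # LQ survival per bin
    n0 = clonogen_density * volume_cc * v_bins         # clonogens per bin
    return float(np.exp(-np.sum(n0 * sf)))

def ntcp_lkb(d_bins, v_bins, n, m, td50):
    geud = np.sum(v_bins * d_bins ** (1 / n)) ** n     # generalized EUD
    t = (geud - td50) / (m * td50)
    return 0.5 * (1 + erf(t / np.sqrt(2)))             # normal CDF

d = np.array([18.0, 19.0, 20.0])          # bin doses (Gy)
v = np.array([0.2, 0.6, 0.2])             # fractional volumes (sum to 1)
print(tcp_poisson(d, v, alpha=0.15, ab=1.5, clonogen_density=1e5, volume_cc=40))
print(ntcp_lkb(d, v, n=0.085, m=0.27, td50=60.0))      # urethra parameters
```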
Finally, in order to look at the quality and homogeneity of the plans, the conformation number (CN) proposed by van't Riet et al. (30) and the conformal index defined by Baltas et al. (31) (COIN) were also compared.
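Assuming the standard definitions from the two cited papers, with TV the target volume, V_ref the volume enclosed by the reference isodose, and TV_ref their intersection, these indices read:

```latex
CN = \frac{TV_{ref}}{TV}\cdot\frac{TV_{ref}}{V_{ref}},
\qquad
COIN = \frac{TV_{ref}}{TV}\cdot\frac{TV_{ref}}{V_{ref}}
       \cdot\prod_{i=1}^{N_{OAR}}\left(1-\frac{V_{OAR_i,ref}}{V_{OAR_i}}\right),
```

so both approach 1 for a perfectly conformal plan, and COIN additionally penalizes reference-dose spillage into each OAR.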
A. Dosimetric parameters
Mean and standard deviation values of the dosimetric parameters obtained for Prostate PTV and OARs are presented in Table 4. The last two columns represent the statistical significance of the differences between doses calculated with IPSA and, respectively, HIPO PTV and HIPO urethra . According to the t-test and taking IPSA as the reference algorithm, differences between IPSA and HIPO PTV and HIPO urethra were all statistically significant, except V 150 (%) for HIPO PTV .
Generally both HIPO optimizations yielded lower values of V 100 (%) than IPSA independently of the size of the volume treated ( Fig. 2(a)).
Considering each patient independently, in six patients, HIPO plans produced PTV V 100 (%) below the clinical tolerances, summarized in Table 3 (as shown in Fig. 2). For these cases, it was not possible to find a correlation with the size of the PTV. Parameters related to inhomogeneity V 150 (%) and V 200 (%) were, instead, generally within acceptable limits (Figs. 2(b) and (c)), similar to D 90 (%) which was within tolerance levels in all but two cases (Fig. 2(d)). Looking at the OAR for all patients analyzed, the urethra D 10 (%) exceeded the acceptable tolerances for plans calculated with HIPO PTV in the majority of the cases. Considering the rectum, both HIPO calculations provided lower doses (D 2cc (%)) than IPSA (Figs. 3(a) and (b)). Figure 4 illustrates the TCP and NTCP values obtained with the three types of optimization studied. For the OAR, only the urethra NTCP is shown since, as expected, the rectum NTCP was found to be negligible for all algorithms. For the TCP, the results shown are those obtained with α/β of 1.5 Gy and a clonogen density of 10 5 . For these parameters, the IPSA TCPs were between 70%-80%, which was the value expected from the clinical biochemical data.
B. Radiobiological analysis
The results show that the use of HIPO optimized with the same initial dosimetric constraints used in IPSA could potentially reduce the tumor control probability up to an average of 10%-20% for HIPO PTV and for HIPO urethra for all volumes lower than 74 cm 3 . Interestingly, this behavior changes for PTVs larger than 74 cm 3 , in both cases analyzed, as both HIPO algorithms provided TCPs 10% larger than IPSA.
For the urethra, the results show that IPSA and HIPO urethra provided similar NTCPs for the majority of cases and volume sizes, with HIPO urethra generally being lower than IPSA. Instead HIPO PTV resulted in large NTCP values, as expected from the dosimetric data (Fig. 4). Only in one case were IPSA and HIPO urethra larger than HIPO PTV .
Looking at a subset of patients with various size PTVs, if the initial prostate PTV maximum constraint was increased to 18 Gy, HIPO urethra provided TCP similar to IPSA without increasing urethral NTCP (Fig. 5).
C. Conformity indices
The CN and COIN values for each plan calculated using each of the three optimizations are illustrated in Fig. 6. The CN values show that the HIPO plans provided better conformation to the target volume than the IPSA plans, regardless of the target size. This behavior was generally confirmed by the COIN parameter, which also proved that HIPO plans typically tended to provide a larger degree of protection of the critical organs as well as target coverage.
A. Planning target volume
The IPSA optimizer is widely used in HDR brachytherapy planning. However, its standard unrestricted implementation is known to provide plans usually characterized by large dwell times at the ends of each catheter ( Fig. 1 (left)). (16,32) This behavior could lead to large delivery errors in the case of catheter movement, by significantly underdosing the target or potentially overdosing the OARs. For plans obtained with IPSA, in order to control such hot spots, it is common for the user to manually limit the large dwell times and then proceed to a final dose distribution using graphical optimization. All these steps increase the overall planning time and make treatment planning process less reproducible and robust.
For HDR prostate patients this analysis showed that the HIPO optimizer implemented in OCB V.4.3, used with 3D CT images and clinically placed needles, could provide a valid alternative to IPSA as it allowed production of an acceptable plan directly with inverse optimization, as previously seen for gynecological cases. (16) Moreover it generally tended to provide more homogeneous dwell time distributions (Fig. 1 (right)).
The analysis of the dosimetric parameters recommended by GEC-ESTRO (19) showed that plans obtained with HIPO using the same initial parameters employed in IPSA provided lower V 100 (%) and D 90 (%) to the PTV, with an average difference within 7%-10%. Similarly V 150 (%) and V 200 (%) were lower, but the differences were of the order of 1%-4% (Table 4). Besides the dosimetric parameters being lower than IPSA, in six cases these values were below the clinical tolerances used in our department (Table 3 and Fig. 2).
These dosimetric results are directly reflected in the TCP parameters, but differences are within larger ranges (Fig. 4), since the TCP parameter is also very strongly correlated to the volumetric dose distribution in the target, represented by the differential DVH used for its calculation. In the majority of instances, plans calculated with HIPO showed lower minimum doses than those obtained with IPSA. This behavior could be due to the general tendency of HIPO to be more conformal to the target and more protective to the OARs, as shown by the CN and COIN values (Fig. 6). This trend could also be as a result of the different implementations of the two optimization algorithms and the use of the weights assigned to the various objectives in the final total objective function.
Due to the variety of radiobiological parameters associated with prostate TCP modeling in the literature, (24)(25)(26)(27)(28) in this study various combinations were tested in order to match the average biochemical control recorded in ten years of HDR data collection at our institution. A lower α/β value of 1.5 Gy and clonogen density of 10 5 appeared to reproduce, on average, the observed control of 70%-80%, confirming the hypothesis that a lower α/β ratio could be more appropriate to model its TCP. (26) For prostate HDR brachytherapy, this result seems to be in accordance with the fact that large α/β ratios produce steeper dose response curves that are more sensitive to the large dose gradients characterizing these types of treatments.
Dosimetric values might suggest that simple rescaling of the initial parameters could provide HIPO plans dosimetrically equivalent to those obtained with IPSA. In order to confirm these findings, ten patients with different PTV sizes were recalculated with HIPO urethra by assigning as initial parameter a PTV maximum dose (Max Value (Gy)) of 18 Gy. All plans provided dosimetric parameters within the tolerances accepted (Table 3), and TCP values within 70%-80% expected by the clinical outcomes, while keeping NTCP values as low as the original IPSA plan (Fig. 5).
Interestingly, a detailed evaluation of individual patients also showed differences in dose distributions according to the size of the PTV volume (Figs. 2 and 4). For PTVs larger than 74 cm 3 , both HIPO algorithms provided better coverage and TCP than IPSA without any adjustment of the initial parameters. This result shows the potential benefit of using HIPO plans for treating patients with larger prostates, but in our cohort of patients only four cases had such large volumes, so more research is warranted to confirm this finding.
B. Organs at risk
The HIPO optimizer available in OCB V. 4.3 allows assigning priorities to PTV and OARs, additionally to setting dosimetric constraints. If there is an intersection of volumes, the volume with the lower priority value is taken into account for generating dose points in the intersection. For example, if the PTV is set as priority 1 and the urethra is set as priority 2 and fully contained in the PTV, the class solution will not take into account the constraints set on the urethra, as this OAR will be considered part of the PTV. If instead the priorities are reversed, the urethra will be considered the organ with the highest priority to optimize. In our analysis, both options were considered in order to see the differences in the final dose distribution. As expected, for all patients, HIPO PTV plans provided lower dose coverage to the target than IPSA, but higher than those obtained with HIPO urethra . However, the calculated urethral doses almost all exceeded the clinical tolerances, and were considered unacceptable for treatment. HIPO urethra instead in all cases was able to keep urethral doses equal to or lower than IPSA. From the NTCP analysis, the results again were confirmed; however, in two patient plans, the HIPO generated urethral NTCPs were significantly higher than with IPSA (Fig. 4). For these two cases, the differences could be attributed to the dose distributions represented in the DVHs, which showed large V 100 (%) despite being in tolerance according to the D 10 (%) value. For the subset of ten patients recalculated with a larger initial PTV maximum dose constraint, the urethral dose was still within tolerances (Table 3 and Fig. 5) and the NTCP was not significantly affected, proving that the HIPO urethra optimizer could be used with larger initial constraints to improve PTV coverage without affecting OAR sparing.
Rectal doses calculated with HIPO were in all cases lower than with IPSA, as shown in Table 4 and Fig. 3(b), showing that changing the algorithm would not increase the risk of toxicity for this organ.
V. CONCLUSIONS
Prostate HDR brachytherapy benefits from the use of inverse planning performed by dedicated optimization algorithms. In this work, the widely used IPSA algorithm was compared with the HIPO algorithm, recently implemented in the OCB (V. 4.3). This analysis showed that HIPO used with priority 1 set to the urethra, could provide an alternative to IPSA and equally acceptable clinical plans if the initial maximum dose constraints are increased with respect to those used in IPSA, while providing a more conformal plan and, potentially, a more homogeneous distribution of the dwell times, possibly limiting the amount of hot spots in the dose distribution. | 5,919.2 | 2014-11-01T00:00:00.000 | [
"Medicine",
"Physics"
] |
MiR-125a promotes paclitaxel sensitivity in cervical cancer through altering STAT3 expression
Cervical cancer (CC) is one of the most common malignancies in women. Paclitaxel is the front-line chemotherapeutic agent for treating CC. However, its therapeutic efficacy is limited because of chemoresistance, the mechanism of which remains poorly understood. Here, we used microRNA (miRNA) arrays to compare miRNA expression levels in the CC cell lines, HeLa and CaSki, with their paclitaxel resistance counterparts, HeLa/PR and CaSki/PR. We demonstrate that miR-125a was one of most significantly downregulated miRNAs in paclitaxel-resistant cells, which also acquired cisplatin resistance. And that the upregulation of miR-125a sensitized HeLa/PR and CaSki/PR cells to paclitaxel both in vitro and in vivo and to cisplatin in vitro. Moreover, we determined that miR-125a increased paclitaxel and cisplatin sensitivity by downregulating STAT3. MiR-125a enhanced paclitaxel and cisplatin sensitivity by promoting chemotherapy-induced apoptosis. Clinically, miR-125a expression was associated with an increased responsiveness to paclitaxel combined with cisplatin and a more favorable outcome. These data indicate that miR-125a may be a useful method to enable treatment of chemoresistant CC and may also provide a biomarker for predicting paclitaxel and cisplatin responsiveness in CC.
INTRODUCTION
Cervical cancer (CC) is a common gynecological malignancy that is a leading cause of cancer-related mortality among women worldwide. [1][2][3] Paclitaxel is a front-line chemotherapeutic agent for treating CC, usually in combination with other chemotherapeutic agents. 4,5 However, the therapeutic efficacy of paclitaxel is limited, with response rates between 29-63% because of chemoresistance. [4][5][6] Paclitaxel resistance is caused by several mechanisms, including overexpression of P-glycoprotein or other drug efflux pumps, 7,8 alterations to microtubules involved in drug-binding or altered expression of tubulin isotypes and microtubule-associated proteins, [9][10][11] alterations to cell cycle and cell survival pathways [12][13][14] and the induction of treatment-related autophagy. [15][16][17] However, the molecular mechanisms by which resistance to paclitaxel occurs are not fully understood and further investigation is required.
MicroRNAs (miRNAs) are a class of endogenous short noncoding RNAs that inhibit post-transcriptional gene expression by binding to target mRNA at their 3′-untranslated region (UTR). 18 Aberrant expression of miRNA has been associated with cancer chemoresistance, including resistance to paclitaxel. 16,17,19 MiR-125a is an anti-oncogene that has a key role in tumorigenesis in multiple cancers 20 and is crucial for paclitaxel sensitivity in colon cancer 21 and cisplatin sensitivity in nasopharyngeal carcinoma. 22 However, its importance in enabling sensitivity of CC to paclitaxel has not been explored.
In this study, we found miR-125a was significantly downregulated in paclitaxel-resistant CC cells. Overexpressing miR-125a in paclitaxel-resistant cells increases the cell sensitivity not only to paclitaxel both in vitro and in vivo but also to cisplatin in vitro by enabling apoptosis via suppressing STAT3 expression. High expression of miR-125a in CC patients was associated with a favorable response to paclitaxel combined with cisplatin treatment and prognosis. Therefore, upregulating miR-125a may be a novel way to treat chemoresistant CC and miR-125a may be a useful biomarker for predicting the response of CC to paclitaxel and cisplatin.
RESULTS
miRNA profiles in paclitaxel-sensitive and -resistant CC cells
To screen critical miRNAs associated with paclitaxel resistance in CC, we simultaneously analyzed miRNA expression in two CC cell lines (HeLa and CaSki) and their paclitaxel-resistant counterparts (HeLa/PR and CaSki/PR cells; Supplementary Figures 1A and 1B) using miRNA array chips that covered a total of 2549 miRNAs. A total of 18 differentially expressed miRNAs were detected in paclitaxel-resistant cells compared with paclitaxel-sensitive cells, including six upregulated miRNAs and 12 downregulated miRNAs (Figure 1a). To further validate the miRNA array chip results, we randomly selected six miRNAs (two upregulated, miR-424 and miR-229-5p, and four downregulated, miR-27a, miR-125a, miR-19a and miR-130b) and determined their expression by reverse transcription-polymerase chain reaction (RT-PCR). The results were consistent with the miRNA array experiments (Figure 1b). MiR-125a was the most differentially expressed miRNA detected, with more than sixfold lower expression in HeLa/PR and CaSki/PR cells than in HeLa and CaSki cells (Figure 1b). As miR-125a expression was associated with paclitaxel-sensitive cells, we hypothesized that miR-125a has a prominent role in the paclitaxel resistance of CC.
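A minimal sketch of the 2^(-ΔΔCt) relative-quantification step commonly behind such RT-PCR validation; the Ct values and the U6 reference gene are assumptions, not the study's measurements:

```python
# Relative miR-125a expression in resistant vs parental cells by ddCt.
ct = {"miR-125a": {"HeLa": 24.1, "HeLa/PR": 26.9},
      "U6":       {"HeLa": 18.0, "HeLa/PR": 18.1}}  # U6 as assumed reference

d_ct = {line: ct["miR-125a"][line] - ct["U6"][line]
        for line in ("HeLa", "HeLa/PR")}            # normalize to reference
dd_ct = d_ct["HeLa/PR"] - d_ct["HeLa"]
fold_change = 2 ** (-dd_ct)
print(f"HeLa/PR vs HeLa fold change: {fold_change:.2f}")  # ~0.15, i.e. ~7x lower
```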
Paclitaxel and cisplatin sensitivities are modulated by changes in miR-125a expression in vitro
To investigate the biological functions of miR-125a in the paclitaxel sensitivity of CC cells, we analyzed the relationship between miR-125a expression and the paclitaxel IC 50 values of the four cell lines (Supplementary Figure 2A). miR-125a expression was negatively correlated with the paclitaxel IC 50 values in all four cell lines (P = 0.0484, r = -0.9372; Supplementary Figure 2B). To confirm the association between paclitaxel resistance and miR-125a expression, miR-125a was transfected into HeLa/PR and CaSki/PR cells, which were then treated with increasing concentrations of paclitaxel. In this viability assay, miR-125a overexpression increased the sensitivity of HeLa/PR and CaSki/PR cells to paclitaxel (Figure 2a). In addition, suppression of miR-125a with a specific miR-125a inhibitor in HeLa and CaSki cells increased paclitaxel resistance (Figure 2b). Recently, Chen et al. 22 reported that miR-125a correlates with cisplatin sensitivity, and cisplatin is another important agent for treating CC. Consistent with their results, we repeated the experiments using increasing concentrations of cisplatin. As expected, the paclitaxel-resistant cells had also acquired cisplatin resistance (Supplementary Figures 1A and 1B). Moreover, miR-125a increased cisplatin sensitivity just as it did paclitaxel sensitivity (Figures 2a and b). These results suggest that miR-125a expression levels are positively correlated with the sensitivity of CC cells to paclitaxel and cisplatin.

miR-125a inhibits STAT3 expression by binding to its 3′-UTR
Previous studies have indicated that miR-125a directly targets STAT3. 23,24 To confirm this, we analyzed the effect of anti-miR-125a on wild-type or mutant STAT3 3′-UTR reporters using luciferase assays. Our results indicated that miR-125a suppression increased wild-type STAT3 3′-UTR reporter activity in HeLa and CaSki cells but did not alter luciferase activity in cells with mutated miR-125a binding sites (Figures 3a and b), further confirming the previous results in CC cells. These results indicate that miR-125a inhibits STAT3 expression by directly binding to its 3′-UTR in CC cells.

miR-125a increased sensitivity of CC to paclitaxel by altering apoptosis through the downregulation of STAT3
STAT3 has been shown to inhibit apoptosis, a mechanism that contributes to chemoresistance. 25 Thus, we analyzed the expression of STAT3, STAT3 phosphorylated at tyrosine 705 (p-STAT3 (Tyr705)), and two key apoptosis inhibitors, Bcl-2 and Bcl-xL, by western blot in paclitaxel-sensitive and -resistant CC cell lines. The expression of STAT3 and p-STAT3 (Tyr705) was increased in the paclitaxel-resistant cells (HeLa/PR and CaSki/PR) compared with the paclitaxel-sensitive cells (HeLa and CaSki). Similar trends were observed for Bcl-2 and Bcl-xL, which are downstream effectors of STAT3 (Supplementary Figure 3).
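A sketch of the IC 50 correlation test reported earlier in this section (r = -0.9372, P = 0.0484 across the four cell lines), assuming Pearson's correlation; the expression and IC 50 values below are placeholders:

```python
# Correlation between miR-125a expression and paclitaxel IC50 (toy data).
import numpy as np
from scipy.stats import pearsonr

mir125a = np.array([1.00, 0.95, 0.15, 0.12])  # relative expression (toy)
ic50 = np.array([2.0, 2.4, 14.0, 16.5])       # paclitaxel IC50, nM (toy)

r, p = pearsonr(mir125a, ic50)
print(f"r = {r:.4f}, P = {p:.4f}")            # expect a strong negative r
```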
As miR-125a directly targets STAT3 and increased paclitaxel and cisplatin sensitivity, we hypothesized that miR-125a increases paclitaxel and cisplatin sensitivity in CC by enabling apoptosis through downregulation of STAT3. To confirm this hypothesis, we examined the effects of miR-125a and the chemotherapeutic agents on the apoptosis of HeLa/PR cells by flow cytometry. The overexpression of miR-125a, paclitaxel treatment, and cisplatin treatment individually increased the proportion of apoptotic HeLa/PR cells to 8.63%, 5.63%, and 6.52%, respectively, compared with the control cells (1.25% apoptotic; Figure 4a), and combining miR-125a overexpression with either agent increased apoptosis further (Figure 4b). Inducing the re-expression of STAT3 in the CC cells reversed the apoptosis induced by miR-125a overexpression and chemotherapeutic treatment (Figures 4a and b). These data indicate that miR-125a enables CC cell paclitaxel and cisplatin sensitivity by inducing the apoptosis pathway via downregulation of STAT3.
Role of miR-125a in modulating paclitaxel resistance in vivo
After determining that miR-125a mediates paclitaxel resistance in different CC cell lines in vitro, we investigated the phenotype of cells overexpressing miR-125a in vivo. MiR-125a-overexpressing HeLa/PR cells or control HeLa/PR cells were subcutaneously injected into the backs of BALB/c nu/nu mice. Once the mice developed palpable tumors (4-5 mm in diameter within 3 weeks), they were randomly assigned to paclitaxel or saline treatment groups; there was no significant difference in initial tumor volume (within 3 weeks) between the four groups. Paclitaxel (15 mg/kg) or the same volume of saline was injected intraperitoneally once a week for 5 weeks. Paclitaxel alone induced a slight inhibition of tumor growth, miR-125a overexpression in the paclitaxel-resistant HeLa cells inhibited tumor growth, and the combination of miR-125a overexpression and paclitaxel treatment induced a significant reduction in tumor growth (Figures 5a and b). RT-PCR and immunoblot analysis confirmed the expression levels of miR-125a, STAT3, p-STAT3 (Tyr705), Bcl-2, and Bcl-xL in HeLa/PR cells from representative tumor masses (Figures 5c and d). These findings indicate that miR-125a upregulation increases paclitaxel sensitivity in paclitaxel-resistant CC cells.
miR-125a expression correlates with PFS and OS in CC patients who received paclitaxel-based chemotherapy
To study the relationship between miR-125a and paclitaxel resistance in CC, we selected 43 patients who had received paclitaxel combined with cisplatin chemotherapy as first-line treatment. We stratified patients into high and low miR-125a expression groups according to the miR-125a expression levels in their CC cells. The effect of chemotherapy on progression-free survival (PFS) and overall survival (OS) was evaluated according to the Response Evaluation Criteria in Solid Tumors. Patients in the low miR-125a expression group had poorer PFS (P = 0.0049; Figure 6a), OS (P = 0.0229; Figure 6b), and response rate (P = 0.015; Table 1) than those in the high miR-125a expression group. Furthermore, compared with patients who responded to chemotherapy, miR-125a expression was significantly downregulated in non-responding patients (P = 0.0136; Figure 6c). These data indicate important roles for miR-125a in CC paclitaxel resistance.
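A sketch of this survival comparison using the lifelines package; the durations, event flags, and group sizes are synthetic placeholders, not the 43-patient data:

```python
# Kaplan-Meier curves and a log-rank test for PFS by miR-125a group.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

t_hi = np.array([30, 34, 36, 40, 44]); e_hi = np.array([0, 1, 0, 0, 1])
t_lo = np.array([8, 12, 15, 20, 24]);  e_lo = np.array([1, 1, 1, 1, 0])

kmf = KaplanMeierFitter()
kmf.fit(t_hi, event_observed=e_hi, label="high miR-125a")  # fit per group
kmf.fit(t_lo, event_observed=e_lo, label="low miR-125a")

res = logrank_test(t_hi, t_lo, event_observed_A=e_hi, event_observed_B=e_lo)
print(f"log-rank P = {res.p_value:.4f}")
```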
DISCUSSION
Chemoresistance in cancers is the main cause of treatment failure. 26,27 Although recent data indicate that aberrant miRNA expression is closely linked to chemoresistance through targeting of genes related to chemosensitivity 28,29 or chemoresistance, 16,17,21 the specific chemoresistance-related miRNAs are largely unknown, and more research is needed to identify them and the mechanisms by which they induce chemoresistance. In this study, we profiled miRNA expression in CC cells by miRNA microarray and compared miRNA expression between paclitaxel-resistant and -sensitive cells. In total, 18 miRNAs had altered expression between the cell lines, several of which have been reported to associate with paclitaxel resistance in CC 28 and other cancers; 17,21 however, this paper is the first to report an association of miR-125a with paclitaxel resistance in CC. miR-125a was the most significantly downregulated miRNA in the resistant cells, and therefore may have an essential role in modulating paclitaxel resistance in CC. Based on this, the function of miR-125a in paclitaxel resistance was further analyzed. In solid tumors, studies have identified miR-125a as an anti-oncogene that inhibits tumorigenesis and cancer progression. 20,23 In breast cancer, miR-125a has been demonstrated to inhibit cancer growth and migration by suppressing ERBB2 and ERBB3. 30 These antitumor functions were also confirmed in gastric cancer through an ERBB2/miR-125a loop 31 and through the regulation of angiogenesis via VEGF-A. 32 Moreover, Ninio-Many et al. 33 indicated that miR-125a modulates the molecular pathways of motility and migration via Fyn expression in prostate cancer cells. In addition, miR-125a has been associated with many diseases, including autoimmune diseases, 20 cardiovascular diseases, 34,35 microbial infection 36,37 and hematological malignancies. 20,30 Recent data indicate that miR-125a upregulation sensitizes paclitaxel-resistant colon cancer cells to paclitaxel. 21 However, the association between miR-125a and paclitaxel resistance in CC was unknown. Consistent with the function of miR-125a in colon cancer, we demonstrate that miR-125a expression is suppressed in paclitaxel-resistant CC cells. As expected, overexpressing miR-125a increased the sensitivity of resistant CC cells to paclitaxel, and miR-125a knockdown in paclitaxel-sensitive CC cells resulted in resistance to paclitaxel. Meanwhile, we found that miR-125a sensitizes the acquired cisplatin resistance of paclitaxel-resistant CC cells. Therefore, a negative correlation between miR-125a expression and chemoresistance was identified in CC.
STAT3 is a well-characterized transcription factor that has been demonstrated to contribute to tumorigenesis and chemoresistance by regulating apoptosis through the promotion of Bcl-2 and Bcl-xL expression. 25 In this study, we determined that miR-125a can directly bind to the STAT3 3′-UTR and suppress its expression. Moreover, the expression of STAT3, Bcl-2 and Bcl-xL was significantly increased in paclitaxel-resistant CC cells. Therefore, it is possible that miR-125a regulates chemoresistance by inhibiting STAT3 expression. We determined that enforced overexpression of miR-125a enhanced cell apoptosis and suppressed the expression of apoptosis-related proteins. Furthermore, re-expression of STAT3 reversed the function of miR-125a. These data suggest that loss of miR-125a-mediated inhibition of STAT3 expression may represent a novel molecular mechanism contributing to chemoresistance in CC. Therefore, modulating miR-125a and STAT3 may be beneficial for the treatment of paclitaxel- and cisplatin-resistant CC.
Treatment of CC cells with paclitaxel or cisplatin for 24 h did not change the expression of miR-125a or STAT3, but it did decrease p-STAT3 (Tyr705) protein levels. This suggests that paclitaxel and cisplatin inhibit STAT3 not through the miR-125a/STAT3 pathway but through other pathways, such as the IL-6/Jak2 pathway. 38 There are several possible explanations for why miR-125a is downregulated in CC. In this study, we found that combining miR-125a overexpression with paclitaxel treatment increased the apoptosis rate significantly more than either treatment individually. Thus, sub-populations of cancer cells with low miR-125a expression may be a potential explanation for resistance against paclitaxel and acquired cisplatin resistance, and combining miR-125a reactivation with paclitaxel and cisplatin treatment may be a useful therapeutic intervention against chemoresistant CC.
In our study, we show for the first time that miR-125a upregulation can sensitize HeLa/PR cells to paclitaxel treatment in mice. Tumor volume was reduced significantly more after miR-125a overexpression combined with paclitaxel treatment than in the other three groups (control, miR-125a overexpression alone, or paclitaxel treatment alone). We also demonstrated that CC patients with low miR-125a expression had shorter PFS and OS after chemotherapy. Moreover, miR-125a expression was significantly downregulated in patients who did not respond to chemotherapy. These data confirm the function of miR-125a in maintaining the sensitivity of CC cells to paclitaxel in vivo and to paclitaxel combined with cisplatin in the clinic.
In conclusion, paclitaxel-resistant CCs have reduced miR-125a expression. miR-125a negatively regulates paclitaxel and cisplatin resistance in CC by reducing STAT3 expression, which promotes apoptosis and microtubule stabilization. Upregulating miR-125a or inhibiting STAT3 may be useful in combination with paclitaxel and cisplatin for treating CC that is resistant to these two agents.
Patients and tumor tissues
A total of 43 human CC samples were obtained from the Chinese PLA 309th Hospital and PLA General Hospital with the informed consent of patients and approval for experimentation from the Chinese PLA 309th Hospital and PLA General Hospital. Diagnoses were based on pathological evidence. Patients had not undergone immunotherapy, chemotherapy, hormone therapy or radiotherapy before specimen collection. The clinical stages and histological grades were based on the International Federation of Gynecology and Obstetrics staging. Tissue samples were snap frozen in liquid nitrogen and stored at − 80°C until RNA extraction. All patients underwent intravenous neo-adjuvant chemotherapy.
Cell culture and transfection
CC cell lines HeLa and CaSki were obtained from the American Type Culture Collection (Manassas, VA, USA) and tested for mycoplasma contamination. Paclitaxel-resistant HeLa/PR and CaSki/PR cells were developed from the HeLa and CaSki cell lines by treatment with gradually increasing concentrations of paclitaxel in the cell culture medium. Briefly, cells were seeded in six-well plates and reached about 80% confluency in fresh medium before being treated with paclitaxel. The dose of paclitaxel ranged from 0.1 to 20 nM and was increased in increments of 25-50% of the previous dose; each next dose was given only once the cells proliferated stably without significant death. Stable cell lines overexpressing miR-125a were established by lentiviral transduction using a pCDH plasmid (System Biosciences, Mountain View, CA, USA) carrying miR-125a. All cells were cultured at 37°C in a humidified atmosphere with 5% CO2 in Dulbecco's modified Eagle's medium or RPMI-1640 medium (Life Technologies, Carlsbad, CA, USA) supplemented with 10% fetal bovine serum (Atlanta Biologicals, Lawrenceville, GA, USA) and 1% penicillin/streptomycin (Life Technologies). For transfection, cells were seeded in 24-well or 6-well plates and then transfected with the indicated plasmids using Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's protocol. 39
Luciferase reporter assay
Cells were seeded in 24-well plates at a density of 1 × 10^5 cells per well. The cells were co-transfected with luciferase reporters carrying either the wild-type or mutant STAT3 3′-UTR, in combination with anti-miR-125a or a scramble control, using Lipofectamine 2000. Forty-eight hours later, cells were harvested and analyzed for luciferase activity using a luciferase assay kit (Promega) according to the manufacturer's protocol.
Figure 6. Expression of miR-125a correlates with PFS and OS in paclitaxel-treated CC patients. (a, b) CC patients who received paclitaxel-based chemotherapy were separated into groups based on low or high miR-125a expression levels. Kaplan-Meier survival curves and log-rank tests were used to compare the (a) PFS and (b) OS between the two groups. (c) Expression of miR-125a in patients who responded to paclitaxel (n = 18) and those who did not respond (n = 20) was compared using the two-tailed Student's t-test. U6 small nuclear RNA was used as an internal control.
miRNA microarray analysis
Total RNA was extracted from HeLa and CaSki cells and their paclitaxel-resistant counterparts using a Qiagen miRNeasy Mini kit following the manufacturer's protocol. Total RNA was sent to CapitalBio Corporation (Beijing, China) for miRNA labeling, quality control, chip hybridization and microarray analysis. Briefly, total RNA was labeled with Hy3 and Hy5 fluorescent dyes. Pairs of labeled samples were hybridized to miRCURY LNA miRNA array slides with 2549 human miRNAs. Normalization was performed using a LOWESS (locally weighted regression) filter to remove system-related variations. An analysis of variance was first applied to produce an overview of miRNA expression profiles across all samples, and then t-tests were performed to identify significantly differentially expressed miRNAs among all combinations of paired groups of interest.
miRNA extraction and quantitative RT-PCR
Total RNA, including miRNA, was extracted from cultured cells or tissue samples with a miRNeasy Mini kit (Qiagen). Target miRNA was reverse transcribed to complementary DNA using a specific miRNA primer and the miScript Reverse Transcription Kit (Qiagen). miRNA expression was measured with a miScript SYBR Green PCR Kit (Qiagen) using the ABI 7500 Real-Time PCR System (Applied Biosystems, Foster City, CA, USA). Primers for the miRNAs and the endogenous control U6 gene are shown in Table 2. The relative fold expression of the target was calculated by the comparative Ct method and normalized to the control.
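The comparative Ct method mentioned above is conventionally computed as 2^(−ΔΔCt). A minimal sketch of that calculation, assuming U6 as the endogenous control; the Ct values below are illustrative, not data from this study:

```python
def relative_expression(ct_target, ct_control, ct_target_ref, ct_control_ref):
    """Comparative Ct (2^-ddCt): fold change of a target miRNA vs. a reference
    sample, each normalized to an endogenous control (here, U6)."""
    d_ct_sample = ct_target - ct_control             # normalize sample to U6
    d_ct_reference = ct_target_ref - ct_control_ref  # normalize reference to U6
    dd_ct = d_ct_sample - d_ct_reference
    return 2.0 ** (-dd_ct)

# Illustrative values only: miR-125a in a resistant line vs. the parental line
fold = relative_expression(ct_target=28.1, ct_control=18.0,
                           ct_target_ref=25.3, ct_control_ref=18.2)
print(f"miR-125a fold change: {fold:.2f}")
```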
Cell viability assay
Cell viability was measured using a CCK-8 kit (Dojindo, Kumamoto, Japan) according to the manufacturer's protocol. To analyze the effects of miR-125a in combination with paclitaxel or cisplatin, cells transfected with either miR-125a or anti-miR-125a were treated with paclitaxel at concentrations of 0, 2.5, 5, 10, 20, 40 and 80 nM, or with cisplatin at concentrations of 0, 2, 4, 8, 16, 32, 64 and 128 μM, for 24 h. The IC50 value was calculated as the drug concentration that reduced cell viability by 50%.
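The paper does not state how the IC50 was interpolated from the viability readings; a sketch under the common assumption of a four-parameter logistic (Hill) fit, with invented viability values:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Illustrative data: paclitaxel concentrations (nM) and fractional viability;
# the zero-dose point is excluded from the fit.
conc = np.array([2.5, 5, 10, 20, 40, 80])
viability = np.array([0.95, 0.88, 0.70, 0.48, 0.30, 0.18])

popt, _ = curve_fit(four_pl, conc, viability, p0=[0.1, 1.0, 15.0, 1.0])
print(f"Estimated IC50: {popt[2]:.1f} nM")
```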
Apoptosis and flow cytometry analysis
Paclitaxel-resistant cells stably overexpressing miR-125a, overexpressing miR-125a plus STAT3, or carrying the empty vector control (1 × 10^6 cells) were cultured in 60 mm dishes and treated with paclitaxel (20 nM) or cisplatin (10 μM) for 24 h before harvesting. The cells were labeled with propidium iodide and annexin V according to the manufacturer's instructions (BD Biosciences, San Jose, CA, USA). A minimum of 10 000 events per sample were collected and analyzed using a FACSCalibur flow cytometer (Becton Dickinson, BD Biosciences).
In vivo cervical tumor xenograft model
All animal experiments were undertaken in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals, with the approval of the Scientific Investigation Board of PLA General Hospital, Beijing. Female 6-week-old BALB/c nu/nu mice were purchased from Vital River Inc. (Beijing, China). For the tumor growth model, HeLa/PR cells (2 × 10^7 cells) stably transfected with the pCDH control vector or pCDH-miR-125a were injected subcutaneously into the backs of BALB/c nu/nu mice (n = 10), which were divided into two groups (n = 5, based on a minimal 30% decrease from 1 g tumors with 250 μg s.d., an α error of 0.05 and a β error of 0.8) using a random number method. After 3 weeks, when the tumor diameter reached 4-5 mm, either paclitaxel (15 mg/kg) or saline was injected intraperitoneally once a week for 5 weeks, with no blinding. Tumor sizes were measured at the indicated times using calipers. Tumor volumes were estimated according to the following formula: volume = (longest diameter × shortest diameter²)/2.
Statistical analysis
All in vitro experiments were performed in triplicate and repeated three times. Differences between variables were assessed by the χ² test or two-tailed Student's t-test. Survival rates in relation to miR-125a expression were estimated using the Kaplan-Meier method, and the difference between survival curves was analyzed with a log-rank test. The relationship between miR-125a and the IC50 of paclitaxel was examined using Spearman's rank correlation. The SPSS 17.0 statistical software package (SPSS Inc, Chicago, IL, USA) was used to perform all statistical analyses. Data are presented as means ± s.d. P < 0.05 was considered statistically significant.
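The survival comparison was run in SPSS; an equivalent sketch in Python with the lifelines package, using invented PFS times and event indicators purely for illustration:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented PFS data (months) and progression events (1 = progressed, 0 = censored)
t_low = np.array([4, 6, 7, 9, 11, 14]);    e_low = np.array([1, 1, 1, 1, 0, 1])
t_high = np.array([8, 12, 15, 18, 22, 30]); e_high = np.array([1, 0, 1, 0, 1, 0])

# Kaplan-Meier curves for each expression group
km_low = KaplanMeierFitter().fit(t_low, event_observed=e_low, label="low miR-125a")
km_high = KaplanMeierFitter().fit(t_high, event_observed=e_high, label="high miR-125a")
km_low.plot_survival_function(); km_high.plot_survival_function()

# Log-rank test between the two groups
res = logrank_test(t_low, t_high, event_observed_A=e_low, event_observed_B=e_high)
print(f"log-rank P = {res.p_value:.4f}")
```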
"Biology"
] |
Predicting Hepatocellular Carcinoma With Minimal Features From Electronic Health Records: Development of a Deep Learning Model
Background: Hepatocellular carcinoma (HCC), usually known as hepatoma, is the third leading cause of cancer mortality globally. Early detection of HCC helps in its treatment and increases survival rates. Objective: The aim of this study is to develop a deep learning model, using the trend and severity of each medical event from the electronic health record, to accurately predict the patients who will be diagnosed with HCC in 1 year. Methods: Patients with HCC were screened out from the National Health Insurance Research Database of Taiwan between 1999 and 2013. To be included, the patients with HCC had to be registered as patients with cancer in the catastrophic illness file and had to be diagnosed as a patient with HCC in an inpatient admission. The control cases (non-HCC patients) were randomly sampled from the same database. We used age, gender, diagnosis code, drug code, and time information as the input variables of a convolutional neural network model to predict those patients with HCC. We also inspected the highly weighted variables in the model and compared them to their odds ratios for HCC to understand how the predictive model works. Results: We included 47,945 individuals, 9553 of whom were patients with HCC. The area under the receiver operating curve (AUROC) of the model for predicting HCC risk 1 year in advance was 0.94 (95% CI 0.937-0.943), with a sensitivity of 0.869 and a specificity of 0.865. The AUROCs for predicting HCC patients 7 days, 6 months, 1 year, 2 years, and 3 years early were 0.96, 0.94, 0.94, 0.91, and 0.91, respectively. Conclusions: The findings of this study show that the convolutional neural network model has immense potential to predict the risk of HCC 1 year in advance with minimal features available in electronic health records.
Introduction
Liver cancer is the sixth most common cancer by incidence and the fourth leading cause of cancer-related mortality worldwide [1]. The most common type of liver cancer is hepatocellular carcinoma (HCC), accounting for approximately 80% of all liver cancers [1]. The incidence and mortality rates of HCC are higher in Sub-Saharan Africa and Southeast Asia than in the United States [2]. HCC incidence has been increasing globally, including in the USA, and is expected to continue growing over the next 20 years due to the higher number of patients with advanced hepatitis C virus infection and nonalcoholic steatohepatitis [3,4]. A significant number of studies (epidemiological and clinical) have reported risk factors of HCC that can be used to correctly stratify patients at risk and to implement prevention measures [5,6]. Accurate risk stratification tools may contribute to the timely identification of HCC patients and facilitate early detection and diagnosis.
The recent widespread adoption of electronic health records (EHRs) has caused a proliferation of clinical data and offers tremendous potential for predicting different diseases early, including cancer [7,8]. The use of EHRs can also contribute to high-quality treatment, improved patient management, reduced health care costs, and efficient clinical research [9,10]. Multiple studies have demonstrated that risk prediction models can anticipate the future incidence of HCC and ensure early treatment [8,11]. Flemming et al [12] recently developed a model for predicting the 1-year risk of HCC among patients with cirrhosis, but its performance was not satisfactory.
Convolutional neural network (CNN) models have already shown remarkable performance in detecting diseases from digital images and predicting diseases from EHRs [13]. CNN models take advantage of the hierarchical pattern in EHRs and assemble more complex patterns from smaller and simpler ones. Thus far, however, no study has used deep learning algorithms, including CNN models, to predict HCC. Therefore, we developed a CNN model that analyzes EHRs to accurately predict HCC risk. We represented each patient's EHR data as a matrix of medical events versus time and regarded this matrix as a 2D EHR image. With the time information, the EHR image revealed the severity and the trend of the medical events explicitly, which was beneficial for HCC risk classification.
Data Sources
We collected data from the Taiwanese National Health Insurance Research Database, a rich source of data with the medical histories of 23 million people (approximately 99.9% of the total population of Taiwan). The database contains demographic, medication (number of prescriptions, the brand and generic names of the drugs, the dates of the prescriptions, the dosage of the medication), and diagnostic information. The database is of excellent quality and completeness and is used to conduct high-quality research. The Taipei Medical University research ethics board approved this study. Participant consent was not required because all individuals' information was deidentified.
Study Population
We screened HCC cases and their information from a subset of 2 million patients from the National Health Insurance Research Database of Taiwan from January 1, 1999, to December 31, 2013. We also randomly sampled non-HCC patients from the same database, with nearly 4 times as many non-HCC cases as HCC cases. We chose this multiple because the gain in predictive performance slowed once the control:case ratio exceeded 4 to 5 in our experiment and in another study [14]. Moreover, all the participants were between 20 and 90 years old.
HCC Patients
HCC cases were identified by the International Classification of Disease, Ninth Revision, Clinical Modification (ICD-9-CM) code 155. HCC patients were ascertained only when they also met one of the following criteria: individuals registered as having cancer in the catastrophic illness file, individuals with a primary cancer diagnosis in an inpatient admission, and individuals who received HCC treatment medications or any specific procedure for HCC.
Variables Employed
The input variables for the predictive model included deidentified patient ID, gender, age, diagnosis code, visiting date, prescription code, and exposure time of drugs. Only the first 3 digits of the ICD-9-CM code were adopted to represent the disease information. After 88 undefined codes were excluded, 993 ICD-9-CM codes were considered in this study, including V-codes (Multimedia Appendix 1). Drug exposure was represented by the World Health Organization Anatomical Therapeutic Chemical (ATC) classification system. We took the first 5 characters to cover most drugs in the same category; for example, the 5-digit ATC code C09AA (angiotensin-converting enzyme inhibitors, plain) includes all plain angiotensin-converting enzyme inhibitors, such as C09AA01 (captopril), C09AA02 (enalapril), and so forth. However, all 7 characters (eg, R06AX12) were kept for drugs with "X" as the fifth character, because "X" usually denotes other agents in the ATC code. There were 699 ATC codes expressed in this manner among the enrolled patients.
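A sketch of these truncation rules; the function names are ours, not from the study:

```python
def truncate_icd9(code: str) -> str:
    """Keep only the first 3 characters of an ICD-9-CM code (V-codes included)."""
    return code[:3]

def truncate_atc(code: str) -> str:
    """Keep the first 5 characters of an ATC code, unless the fifth character
    is 'X' (ATC's 'other agents' category), in which case keep all 7."""
    if len(code) >= 5 and code[4] == "X":
        return code[:7]
    return code[:5]

assert truncate_atc("C09AA01") == "C09AA"    # all plain ACE inhibitors collapse
assert truncate_atc("R06AX12") == "R06AX12"  # 'X' category kept at full length
```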
Constructing the EHR Image
Three years (the observation time) of data for every enrolled patient were extracted from the National Health Insurance Research Database. To predict HCC risk 1 year in advance, the final day of the extracted data was 1 year (the advance time) before the index day, as shown in Figure 1. For patients with HCC, the index day was the day they were diagnosed with HCC, while for non-HCC patients it was the last day they had a diagnosis code in the data set. We chose 3 years as the observation time as a trade-off: the longer the observation time, the fewer eligible patients with a sufficient data period there would be; on the other hand, with a shorter time window, the amount of data for each patient would be smaller. We needed a total of 4 years of data for every patient: 3 years of observation plus the skipped final year. In other words, 3 years of data were used to predict HCC risk in the following year. Note that the duration of drug exposure was not counted repeatedly if drug orders overlapped across different prescriptions.
We used these extracted data to construct the matrices that were regarded as the EHR images for each patient, which were afterwards used to train and validate the CNN model for HCC risk prediction. The rows of the matrices were the diagnostic codes and the drug codes, and the columns were the temporal information of those events. Once a patient was diagnosed with a certain ICD-9-CM code or given a certain ATC code on a certain day, "1" was assigned to the corresponding coordinate of his or her matrix. Then, to reduce the column size of the matrix, we aggregated the temporal coordinate over periods of 7 days, so that the unit of the temporal sequence became 1 week instead of 1 day. Furthermore, to normalize the summed values (0 to 7) of the elements in the matrix to 0-1 for the subsequent CNN computation, each element was divided by the maximum value over all the enrolled patients at the same coordinate. Considering that it is not reasonable to mix different organ systems with a common CNN filter, we broke the ICD-9-CM codes down into 19 organ systems. Adding the drug group, this yielded a total of 20 images for each patient for developing the deep learning model, as shown in Figure 2.
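A minimal sketch of the matrix construction described above, assuming events arrive as (code, day-offset) pairs; the weekly width of 157 columns follows the text, while the data layout and names are our assumptions:

```python
import numpy as np

N_WEEKS = 157  # 3 years of observation, aggregated to weekly columns

def build_ehr_image(events, code_index):
    """events: iterable of (code, day) with day in [0, 3*365);
    code_index: dict mapping each ICD-9-CM/ATC code to a row number.
    Returns a (n_codes, 157) float matrix of weekly event counts (0-7)."""
    img = np.zeros((len(code_index), N_WEEKS))
    for code, day in events:
        week = min(day // 7, N_WEEKS - 1)  # aggregate days into weekly bins
        img[code_index[code], week] += 1
    return img

def normalize(images):
    """images: (n_patients, n_codes, 157) float array. Divide each coordinate
    by its maximum over all enrolled patients, mapping values into 0-1."""
    peak = images.max(axis=0)
    return np.divide(images, peak, out=np.zeros_like(images), where=peak > 0)
```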
Architecture of the CNN Model
CNN is a biologically inspired variant of the multi-layer perceptron, which uses filters to extract features of the input by dot products [15,16]. We applied 5 hidden layers between the input and the output layer. The first was a convolution layer with 4 filters in the shape of 1 × 157, where 157 is the number of weeks in 3 years and the number of columns of the input matrix. The filters were trained to learn the weighting of the temporal sequence of each organ system and the drug group.
The second layer was a max-pooling layer with a size of 1 × 3 to reduce the sparsity of the learned features and was followed by a dropout layer that set 10% of the data to 0 at random to prevent the overfitting of the model. The fourth layer flattened the output of the previous layer and concatenated age and gender information. The fifth layer was a fully connected layer with 400 neurons. Finally, the output layer had 2 neurons, representing high risk and low risk, with the softmax classifiers to indicate the predictive result, as shown in Figure 3.
As for the hyper-parameters of the CNN model, the number of epochs was set to 2 to obtain the optimal area under the receiver operating curve (AUROC) according to our experimental results. The batch size was 32, and the learning rate was optimized by the AdaDelta method [17]. Moreover, the activation function used in the first 3 layers was the rectified linear unit [16]. To eliminate the bias of data sampling, we introduced 5-fold cross-validation [18] to evaluate the performance of this model: each time, 80% of all patients were used for training and the remaining 20% were used for validation, in turn. The final performance was assessed as the average AUROC over the 5 folds.
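A sketch of the described architecture in Keras. The text does not fully specify how the 20 organ-system images enter the network, so stacking them as input channels, the number of rows per image, and the 'same' padding ahead of the 1 × 3 max-pooling are our assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N_ROWS, N_WEEKS, N_CHANNELS = 64, 157, 20  # rows per image are dataset-dependent

ehr = layers.Input(shape=(N_ROWS, N_WEEKS, N_CHANNELS), name="ehr_image")
demo = layers.Input(shape=(2,), name="age_gender")

x = layers.Conv2D(4, kernel_size=(1, N_WEEKS), padding="same",
                  activation="relu")(ehr)      # temporal filters per code row
x = layers.MaxPooling2D(pool_size=(1, 3))(x)   # reduce sparsity of features
x = layers.Dropout(0.1)(x)                     # 10% dropout against overfitting
x = layers.Flatten()(x)
x = layers.Concatenate()([x, demo])            # append age and gender
x = layers.Dense(400, activation="relu")(x)
out = layers.Dense(2, activation="softmax")(x) # high-risk vs. low-risk

model = Model([ehr, demo], out)
model.compile(optimizer="adadelta", loss="categorical_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auroc")])
# Training per the paper: epochs=2, batch_size=32, inside 5-fold cross-validation
```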
Statistical Analysis
In this study, continuous numeric variables are presented by mean and SD, while categorical variables are described by frequency and percentage. The performance of the model was assessed by the AUROC, sensitivity, and specificity. Moreover, we used the odds ratio (OR) as an indicator to compare against the weighting of the variables in the CNN model to check their consistency. The OR is a statistic that quantifies the strength of the association between 2 events, which in this study were an ICD-9-CM (or ATC) code and HCC. If the OR is greater than 1, the 2 events are considered to be associated. Conversely, if the OR is less than 1, they are considered to be negatively correlated. For the calculation of the OR, an ICD-9-CM or ATC code was considered as present only when it occurred 3 times or more in the extracted 3 years of EHR data. In stepwise fashion, we set the content of each input variable to 0 and checked the AUROC loss against the result of the full input. A variable was considered to have higher weighting if zeroing it caused a larger AUROC loss, analogously to feature selection [19].
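A sketch of the two diagnostics described above: the 2 × 2 odds ratio (with the ≥ 3 occurrences rule) and the zero-out AUROC-loss test. The model interface and array layout follow the sketch given earlier and are assumptions:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def odds_ratio(counts, hcc):
    """counts: per-patient occurrence counts of one code over the 3-year window;
    hcc: boolean outcome array. Exposure requires >= 3 occurrences."""
    exposed = counts >= 3
    a = np.sum(exposed & hcc);  b = np.sum(exposed & ~hcc)
    c = np.sum(~exposed & hcc); d = np.sum(~exposed & ~hcc)
    return (a * d) / (b * c)

def auroc_loss(model, x_ehr, x_demo, y, row):
    """Zero out one variable (one code row across all channels) and
    report the resulting drop in AUROC."""
    base = roc_auc_score(y, model.predict([x_ehr, x_demo])[:, 1])
    x_zeroed = x_ehr.copy()
    x_zeroed[:, row, :, :] = 0.0
    return base - roc_auc_score(y, model.predict([x_zeroed, x_demo])[:, 1])
```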
All analyses were performed using the R language (The R Foundation for Statistical Computing). Keras, a high-level neural network application programming interface, was applied on top of TensorFlow to construct the CNN model in this study. Running on a computer with an Intel i7 CPU, 64 GB DRAM, and an Nvidia GTX 1080 GPU with 8 GB DRAM, the 5-fold cross-validation took 80 minutes to complete.
Results
A total of 47,945 patients (24,664 males and 23,281 females) were included in this study, with 9553 diagnosed with HCC and 38,392 being non-HCC patients. The mean age of the HCC patients was 59.9 (SD 14) years, while that of the control patients was 47.5 (SD 17.3) years. Moreover, the proportion of male patients in the HCC group and the control group was 64.64% (6175/9553) and 48.16% (18,489/38,392), respectively. Table 1 shows the demographic variables of the HCC and control groups.
The overall AUROC for predicting HCC patients 1 year in advance was 0.94 (95% CI 0.93-0.94), with a sensitivity of 0.869 and a specificity of 0.865. The threshold applied to the output of the CNN model to classify the risk group was 0.11, chosen as the value maximizing the sum of the sensitivity and the specificity. We also evaluated the performance of the model with different advance times. The overall AUROC when predicting HCC patients 7 days, 6 months, 1 year, 2 years, and 3 years early was 0.96, 0.94, 0.94, 0.91, and 0.91, respectively.
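Choosing the cutoff that maximizes sensitivity + specificity is equivalent to maximizing Youden's J; a sketch with scikit-learn (the scores and labels are placeholders):

```python
import numpy as np
from sklearn.metrics import roc_curve

def best_threshold(y_true, y_score):
    """Return the threshold maximizing sensitivity + specificity (Youden's J)."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    j = tpr - fpr  # J = sensitivity + specificity - 1
    return thresholds[np.argmax(j)]

# e.g. best_threshold(y_val, model.predict([x_ehr, x_demo])[:, 1]) -> ~0.11
```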
Furthermore, different input groups and their combinations were applied separately to assess their value. Our 1-year-in-advance predictive model trained and validated with only age and gender information achieved an AUROC of 0.73. The AUROC was 0.86 when only the disease codes were used and 0.88 when age, gender, and the disease codes were used. Meanwhile, the model applying only ATC codes achieved an AUROC of 0.91, while applying age, gender, and ATC codes yielded an AUROC of 0.92. Table 2 shows the AUROC impact of age, gender, and some diseases when they were withdrawn from the model, together with their ORs against HCC. Some high impact variables were chronic liver disease and cirrhosis (AUROC loss 2.52%), viral hepatitis (0.67%), age (0.57%), peptic ulcer (0.41%), gender (0.39%), and screening for malignant neoplasms (0.78%), the last of which was negatively associated with HCC, having an OR of less than 1. Table 2 also shows some variables with extremely high or low ORs whose AUROC impact was nevertheless not high because the number of affected patients was not large; these variables included varicose veins (OR 22.47), other disorders of the liver (OR 5.34), normal pregnancy (OR 0.16), and others. Table 2 also shows the ORs of a cohort whose age and gender were matched with those of the HCC cohort and whose size was likewise 4 times that of the HCC cohort, similar to the randomly sampled cohort. After the correlation of age and gender with HCC was decoupled, the ORs of the matched cohort did not appear to be as extreme as those of the randomly sampled cohort, but their trends were consistent. Table 3 displays the AUROC-impact values and ORs of the drugs. The high impact drugs included liver therapy (AUROC loss 1.35%), antacids with antiflatulents (1.2%), solutions for parenteral nutrition (0.77%), aluminum compounds (0.63%), antihistamines (0.57%), and others. Some drugs appear to be negatively associated with HCC, including treatments for acne (0.48%) and progestogens (0.36%), but this does not mean that they could reduce the risk of HCC, since only an association, and not causation, was discovered. In the age- and gender-matched cohort, the ORs greater than 1 remained similar to those of the unmatched cohort, while the ORs less than 1 were not as low.
We used this CNN predictive model to test a special case in which a male patient had only age and gender information and no medical records during the observed 3 years. The estimated HCC risks according to his age are shown in Figure 4. In this case, the patient was classified into the high-risk group at the age of 52 years. However, if the patient had 1 record of screening for malignant neoplasms (V76 of the ICD-9-CM) half a year before the final day of his EHR and the result was benign, the high-risk alarm would be delayed until the age of 87 years. The reason for this is that screening for malignant neoplasms was negatively associated with HCC.
Main Findings
Accurate stratification of patients at high risk for HCC is the primary step for early detection and treatment. Our predictive model, based on a CNN algorithm and using minimal features from electronic medical records, can correctly stratify HCC risk in patients. The main advantages of our model are that it can predict patients with HCC 7 days, 1 year, and 3 years early with AUROCs as high as 0.96, 0.94, and 0.91, respectively. Furthermore, this model does not require any laboratory data; it is entirely based on age, gender, disease, and drug data recorded in the EHR as part of routine patient care. Finally, the predictions are interpretable: this paper presents the highly weighted variables and checks their ORs against HCC to gain insight into the black box of the CNN model. HCC risk stratification performed 1 to 3 years in advance could help physicians identify high-risk patients and thus improve treatment and surveillance in an evidence-based fashion, such as by actively treating hepatitis C, instructing patients to improve their lifestyle, or screening for malignant neoplasms earlier than normally scheduled.
Comparison With Other Studies
Several groups of researchers have already attempted to improve the identification and risk stratification of HCC patients. Flemming et al [12] showed that the ADRESS-HCC risk model (including the 6 variables of age, diabetes, race, etiology of cirrhosis, sex, and severity of liver dysfunction) could identify HCC patients 1 year earlier with an AUROC of 0.70. A total of 34,932 patients were included in their model, and the median follow-up was 1.26 years. Traditional statistical regression was used to develop and validate the predictive model for HCC risk. Furthermore, Yang et al [20] developed a model predicting HCC risk 5 or 10 years in advance in patients with chronic hepatitis B. Potential risk factors, including age, sex, alcohol consumption, and serum alanine aminotransferase level, were considered to develop and validate the predictive model. The regression model achieved AUROCs ranging from 82.1% to 88.5%, and the nomogram model achieved AUROCs ranging from 82.1% to 86.6%. In comparison, our model can predict HCC risk 1 year ahead as opposed to a longer 5-10 year period; in this way, patients at high risk are more likely to undergo further medical treatment for the more immediate hazard instead of putting it off.
Clinical Implications
This deep learning-based model works by analyzing the pattern relationships in existing data. CNN models with multiple hidden layers have already shown remarkable success in image classification [21]. However, there had been no deep learning-based HCC risk predictive model using EHR data. As EHRs are a rich source of patient data, CNN models can organize these high-dimensional data sets to provide better prediction for patients with HCC. Using artificial intelligence to facilitate HCC prediction is beneficial because current clinical guidelines have little power to predict HCC patients 1 year earlier and usually require complementary laboratory data.
Preventing HCC is the main target in the care of a patient with multiple risk factors. A prevention strategy should focus on reducing the development of HCC risk factors or treating them in the early stage [22]. The best approaches in HCC prevention usually include identifying high-risk factors and eliminating these factors if possible. This study presents the diseases and the drugs with high weighting in the model as well as those with higher ORs. These have also been reported in other studies. A significant amount of literature has already indicated that age, gender [7], and diseases like viral hepatitis, peptic ulcer, chronic liver disease, and cirrhosis [23][24][25] are associated with the development of HCC. Also, some studies found evidence for a relation between vitamins and liver diseases such as fibrosis [26] or nonalcoholic fatty liver disease [27]. Mineralocorticoid receptor activation could play a role in hepatic fibrogenesis, and its modulation could be beneficial for nonalcoholic steatohepatitis [28]. Moreover, a liver drug, silymarin, has been used to good effect in different liver disorders due to its antioxidant, anti-inflammatory, and antifibrotic properties [29].
Previous studies have shown that the use of antacids promotes liver disease [30], and the high impact of antacids (see Table 3) should be further investigated to determine whether a causal relationship exists.
Another noteworthy finding was that some variables had high ORs for HCC but were not in the list of highly weighted variables. This may be because the number of the patients diagnosed with these variables was not large enough to garner heavy weighting. For example, ICD-9 code 456 (varicose veins of other sites) had an OR as high as 22.47, but the AUROC loss for it was less than 0.1% because there were only 246 patients with this code out of the 9553 patients with HCC and the total population of 47,945.
As correlation is not necessarily causation [31], it cannot be concluded that the variables with high ORs induce HCC: they are only positively correlated with it. However, they can still be considered significant variables and be used to predict HCC risk. For example, we cannot claim that antacids with antiflatulents induce HCC despite their OR for HCC being as high as 10.38. However, patients taking these drugs do have a higher probability of having HCC due to the drugs' association with HCC.
On the other hand, the OR of screening for malignant neoplasms was less than 0.5, which means it is negatively correlated with HCC. The reason for this correlation is that the neoplasm screening records of the patients with HCC do not increase after the day they are diagnosed with HCC, while the non-HCC patients continuously accumulate screening records until the last day of their extracted data. Furthermore, the reason why some diagnoses in Table 2, including endometriosis, symptoms associated with the female genital organs, and pregnancy, negatively correlate with HCC is that they are more commonly associated with young females, who have traits opposite to those considered high-risk factors for HCC: being old and male. Inspecting the ORs corresponding to the highly weighted variables also helps us to understand how the predictive model works.
Strengths and Limitations
Our model has several strengths. First, this is the first study to use a deep learning-based predictive model to stratify patients with HCC 1 year in advance via a claims database. Second, our study achieved higher performance than previous studies while using a minimal number of features from standardized and widely available clinical data in EHRs. Despite the promising results in stratifying HCC patients, our study has several limitations that should be addressed. First, the inclusion of laboratory data may enable more accurate deep learning models to be trained and validated with higher confidence. Second, several variables such as genetic data, ethnicity, family history, alcohol consumption, smoking, dietary habits, vital signs, and BMI were not considered in our predictive model, and their inclusion may improve the prediction; nonetheless, our model achieved high performance with the currently available variables in EHRs. Finally, external validation on other data sets is warranted to ensure the generalizability of our current model.
"Medicine",
"Computer Science"
] |
(E)-4-(2-Chlorobenzylideneamino)-3-(2-chlorophenyl)-1H-1,2,4-triazole-5(4H)-thione–(E)-1,5-bis(2-chlorobenzylidene)thiocarbonohydrazide–methanol (1/1/1)
In the title compound, C15H12Cl2N4S·C15H10Cl2N4S·C2H6O, the two chlorophenyl rings of the triazole derivative form dihedral angles of 65.7 (2) and 44.2 (2)° with the triazole ring. In the thiocarbonohydrazide derivative, the dihedral angle between the two chlorophenyl rings is 5.4 (2)°. In the crystal, the triazole, thiocarbonohydrazide and methanol molecules are linked by N—H⋯O, N—H⋯S and O—H⋯S hydrogen bonds, forming a hexameric unit.
S1. Comment
The synthesis and structural investigation of Schiff base compounds have attracted much attention due to their interesting structures and potential applications. Some of them have biological activities (Liang, 2003; Bacci et al., 2005). They also play an important role in the development of coordination chemistry as well as in inorganic biochemistry, catalysis and optical materials (Ren et al., 1999; Yang et al., 2005; Sen et al., 1998).
The dihedral angle between the C1-C6 and C10-C15 rings is 5.4 (2)°. Two triazole, two thiocarbonohydrazide and two methanol molecules are linked by N-H···O, N-H···S and O-H···S hydrogen bonds to form a hexamer.
S2. Experimental
The Schiff base compound was synthesized according to the modified method of Xia et al. (2007). A mixture of (2-chlorophenyl)methanamine and thiourea in methanol (30 ml) was refluxed for 3 h and filtered. The filtrate was left to stand for several days, yielding colourless block-shaped crystals (yield 79%). Elemental analysis: calculated for C32H28Cl4N8OS2: C 51.48, H 3.78, N 15.01; found: C 51.51, H 3.49, N 15.13.
S3. Refinement
The H atoms were found in a difference map, then placed in idealized positions (C-H = 0.93-0.97 Å, N-H = 0.86 Å and O-H = 0.82 Å) and refined using a riding model, with Uiso(H) = 1.2Ueq(C,N) and 1.5Ueq(O,Cmethyl).
Figure 1
The asymmetric unit of the title compound, with atom labels and 30% probability displacement ellipsoids for non-H atoms. H atoms have been omitted for clarity.
(E)-4-(2-Chlorobenzylideneamino)-3-(2-chlorophenyl)-1H-1,2,4-triazole-5(4H)-thione-(E)-1,5-bis(2-chlorobenzylidene)thiocarbonohydrazide-methanol (1/1/1)
In the weighting scheme, P = (Fo² + 2Fc²)/3; (Δ/σ)max = 0.001, Δρmax = 0.28 e Å⁻³, Δρmin = −0.25 e Å⁻³.
Special details
Geometry. All e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes.
Refinement. Refinement of F² was against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression F² > σ(F²) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
"Chemistry"
] |
Magnetic relaxation in terms of microscopic energy barriers in a model of dipolar interacting nanoparticles
The magnetic relaxation and hysteresis of a system of single domain particles with dipolar interactions are studied by Monte Carlo simulations. We model the system by a chain of Heisenberg classical spins with randomly oriented easy-axes and a log-normal distribution of anisotropy constants, interacting through dipole-dipole interactions. Extending the so-called $T\ln(t/\tau_0)$ method to interacting systems, we show how to relate the simulated relaxation curves to the effective energy barrier distributions responsible for the long-time relaxation. We find that the relaxation law changes from quasi-logarithmic to power-law when increasing the interaction strength. This fact is shown to be due to the appearance of an increasing number of small energy barriers caused by the reduction of the anisotropy energy barriers as the local dipolar fields increase.
I. INTRODUCTION
Long-range dipolar interactions are at the heart of the explanation of many peculiar or anomalous phenomena observed in magnetic nanostructured materials. Whereas in atomic magnetic materials the exchange interaction usually dominates over dipolar interactions, the opposite happens in many nanoscale particle or clustered magnetic systems, for which the interparticle interactions are mainly of dipolar origin.
Among the wide variety of artificially prepared systems containing nanosized magnetic clusters, some are particularly interesting for the study of the dipolar interaction in a controlled manner. Among them, we have granular metal solids consisting of fine magnetic particles embedded in a nonmagnetic matrix; for insulating matrices (for which RKKY interactions are absent), the dipolar interaction between the granules dominates over exchange via tunneling mechanisms 1,2,3,4,5. In these materials, the interactions can be tuned because the metal volume fraction and the average size of the granules can be varied in a controlled way. Frozen ferrofluids consisting of nanosized magnetic particles dispersed in a carrier liquid have also been extensively studied 6,7,8,9,10,11. These are considered experimental models of random magnet systems and, in this case, the strength of the interactions can easily be tuned by controlling the concentration of particles in the ferrofluid.
In systems with reduced dimensionality, the effects of dipolar interactions are even more relevant since they allow the existence of long-range ordered phases at low temperature 12,13. Among two-dimensional systems, we find patterned media composed of regular arrays of nanoelements 14,15 of different shapes and self-ordered magnetic arrays of nanoparticles 16,17,18,19,20, both of potential use in ultra-high density magnetic storage. Such materials have to be prepared with high control over the size, shape and interparticle distances in order to minimize the interparticle interactions, since these could induce demagnetization of the stored information 21,22,23. Finally, dipolar interactions have proven to be essential to elucidate the ferromagnetic order and hysteresis of one-dimensional structures such as nanostripes 24,25, monoatomic metal chains 26,27,28,29, nanowires 30,31 and others 32,33,34. They also play a crucial role in the quantum relaxation phenomena of molecular clusters 35.
While dilute systems are well understood, experimental results for dense systems are still a matter of controversy. Some of their peculiar magnetic properties have been attributed to dipolar interactions, although many of the issues are still under debate: different experimental results measuring the same physical quantities give contradictory results, and theoretical explanations are often inconclusive or unclear. In what follows, we briefly outline the main subjects to be clarified. The complexity of dipolar interactions and the frustration provided by the randomness in particle positions and anisotropy axis directions present in highly concentrated ferrofluids seem to be enough ingredients to create collective glassy dynamics in this kind of system. Experiments probing the relaxation of the thermoremanent magnetization 11,36,37 have evidenced magnetic aging, and studies of the dynamic and nonlinear susceptibilities 2,37,38 also find evidence of critical behaviour typical of a spin-glass-like freezing. All these studies have attributed this collective spin-glass behaviour to dipolar interactions, although surface exchange may also be at the origin of this phenomenon. However, MC simulations of a system of interacting monodomain particles 39 show that, while the dependence of ZFC/FC curves on interaction and cooling rate is reminiscent of a spin glass transition at T_B, the relaxational behaviour is not in accordance with the picture of cooperative freezing. Moreover, it is still not clear how the blocking temperature and remanent magnetization depend on the concentration ε in ferrofluids: while most experiments 6,7,8,39,40,41,42,43 find an increase of T_B and a decrease of M_R with ε, others 9,10 observe the opposite variation in similar systems. Finally, for disordered systems, the dipolar interaction usually diminishes the coercive field 40,41.
The purpose of this paper is to present the results of Monte Carlo simulations of a model of a system of nanoparticles simple enough to capture the main features observed in experiments. In particular, we will show that the spin-glass phenomenology described above is present even in a simple model consisting of a spin chain with dipolar interactions and disordered anisotropy easy-axes as the only ingredients. For this purpose, we present the results of simulations of the time dependence of the magnetization for different values of the strength of the dipolar interaction and different temperatures. With the aim of establishing a connection between the microscopic energy landscape of the magnetic system and the observed relaxation laws, we present an extension of the T ln(t/τ0) scaling method to systems with dipolar interactions that allows us to extract, from the relaxation curves, the distributions of energy barriers and of dipolar fields responsible for the relaxation.
II. MODEL
The model considered consists of a linear chain of N = 10 000 classical Heisenberg spins $\mathbf{S}_i$ (i = 1, . . . , N), each one representing a monodomain particle with magnetic moment $\boldsymbol{\mu}_i = \mu\mathbf{S}_i$. As depicted in Fig. 1, the spins have random uniaxial anisotropy axes $\hat{\mathbf{n}}_i$ and anisotropy constants $K_i$ distributed according to a distribution function f(K), which we take as a lognormal of width σ and mean value $K_0$. The spins interact via long-ranged dipolar interactions and with an external homogeneous magnetic field H pointing along the direction perpendicular to the chain. The spins represent the total magnetic moment of each particle, so we do not take into account the internal structure of the particles. The corresponding Hamiltonian can then be written as
$$\mathcal{H} = -\sum_{i=1}^{N} K_i\,(\mathbf{S}_i\cdot\hat{\mathbf{n}}_i)^2 - \mathbf{H}\cdot\sum_{i=1}^{N}\mathbf{S}_i + E_{dip}, \qquad (1)$$
$$E_{dip} = g\sum_{i<j}\frac{\mathbf{S}_i\cdot\mathbf{S}_j - 3\,(\mathbf{S}_i\cdot\hat{\mathbf{r}}_{ij})(\mathbf{S}_j\cdot\hat{\mathbf{r}}_{ij})}{r_{ij}^3}, \qquad (2)$$
where $g = \mu_0\mu^2/4\pi a^3$ characterizes the strength of the dipolar interaction and $r_{ij}$ is the distance separating spins i and j, measured in units of the lattice spacing a, here chosen as 1. The directions of the spin vectors are restricted to lie in the x-z plane, and therefore the particles are characterized by the angles $\theta_i$. This choice has been made because only in this case can the exact values of the minima of the energy function and the respective energy barriers be computed. Finally, periodic boundary conditions along the chain are considered, so that we eliminate the possibility of spin reversal at the boundaries of the system because of the reduced coordination there. In what follows, temperature is measured in reduced units.

FIG. 1: 1D chain of spins $\mathbf{S}_i$ with random anisotropy directions $\hat{\mathbf{n}}_i$ (dashed lines). $\mathbf{H}^{dip}_{jk}$ is the dipolar field generated by the spin $\mathbf{S}_k$ on the spin $\mathbf{S}_j$. $\theta_i$, $\psi_i$, $\theta^{dip}$ are the angles formed by the magnetic moment, the anisotropy axis and the dipolar field with respect to the z axis.

The effect of the dipolar interaction can be more easily understood by defining the dipolar field acting on each spin i (see Fig. 1),
$$\mathbf{H}^{dip}_i = g\sum_{j\neq i}\frac{3\,(\mathbf{S}_j\cdot\hat{\mathbf{r}}_{ij})\,\hat{\mathbf{r}}_{ij} - \mathbf{S}_j}{r_{ij}^3}. \qquad (3)$$
Therefore, rewriting the dipolar energy as
$$E_{dip} = -\frac{1}{2}\sum_i \mathbf{S}_i\cdot\mathbf{H}^{dip}_i, \qquad (4)$$
the total energy of the system can be expressed in the simple form
$$\mathcal{H} = -\sum_i\left[K_i\,(\mathbf{S}_i\cdot\hat{\mathbf{n}}_i)^2 + \mathbf{S}_i\cdot\left(\mathbf{H} + \tfrac{1}{2}\mathbf{H}^{dip}_i\right)\right]. \qquad (5)$$
Now, the system can be thought of as an ensemble of non-interacting spins feeling an effective field which is the sum of an external and a locally changing dipolar field, $\mathbf{H}^{eff}_i = \mathbf{H} + \mathbf{H}^{dip}_i$. Note that the first term in Eq. (2) is a demagnetizing term since it is minimized when the spins are antiparallel, while the second one tends to align the spins parallel and along the direction of the chain. For systems of aligned Ising spins only the first term is non-zero and, consequently, the dipolar field tends to induce AF order along the direction of the chain (the ground state configuration for this case). However, for Heisenberg or planar spins, the competition between the two terms gives rise to frustrating interactions, which may induce other equilibrium configurations, depending on the interplay between anisotropy and dipolar energies.
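As an illustration of the definition in Eq. (3), a brute-force sketch of the local dipolar fields for the chain geometry; the axis labels in the text are not fully consistent, so the convention here (chain along x, unit lattice spacing) is ours:

```python
import numpy as np

def dipolar_fields(spins, g):
    """spins: (N, 3) array of unit vectors on a chain with unit lattice spacing.
    Returns the (N, 3) dipolar field on each spin,
    H_i = g * sum_{j != i} [3 (S_j . r_hat) r_hat - S_j] / r_ij^3."""
    n = len(spins)
    axis = np.array([1.0, 0.0, 0.0])   # chain direction (our convention)
    fields = np.zeros_like(spins)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = abs(j - i)             # distance in units of the lattice spacing
            # r_hat = +/- axis, and the sign cancels in 3 (S_j . r_hat) r_hat
            proj = spins[j] @ axis
            fields[i] += g * (3.0 * proj * axis - spins[j]) / r**3
    return fields
```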
III. COMPUTATIONAL DETAILS
When considering Heisenberg spins with a continuous degree of freedom θ, special care has to be taken in the way the trial steps are done 44,45. Moreover, independently of the choice of trial step, there are different ways of implementing the Monte Carlo dynamics in this case, which differ essentially in how the energy difference ΔE appearing in the Boltzmann probability is computed: either ΔE is computed as the energy difference between the current value $\mathbf{S}_{old}$ and the attempted value $\mathbf{S}_{new}$ of the spin, or it is chosen as the energy barrier which separates them. Note that the second choice gives ΔE's that are higher than the first if there is an energy maximum separating the two states. Consequently, the time scale corresponding to one MC step depends crucially both on the choice of trial step and on the chosen dynamics 44,46.
Since our major interest is to study the connection between the intrinsic energy barrier distributions and the long-time relaxation of the magnetization, we have devised a MC algorithm that considers trial jumps only between orientations corresponding to energy minima, randomly chosen with equal probabilities. The ΔE in the transition probability is therefore always equal to one of the actual energy barriers of the system. This is possible because, in the model considered, the spins are restricted to point in the x-z plane, and in this case the energy minima and maxima, as well as the energy barriers separating them, can be found numerically, since the energy of a particle can be rewritten as
$$E_i = K_i\sin^2(\theta_i - \psi_i) - H^{eff}_i\cos(\theta_i - \theta^h_i), \qquad (6)$$
where $\theta_i$, $\psi_i$ and $\theta^h_i$ are the angles formed by the magnetic moment, the anisotropy axis and the effective field with respect to the z axis. Although the energy barriers cannot be calculated analytically for all values of $\psi_i$ and $\theta^h_i$, it is not difficult to build up an algorithm that finds the minima and maxima of the energy function (6) and their respective energies 47. Therefore, a MC step consists of the following: a spin is chosen at random; the energy barriers are computed following the above-mentioned method; a trial jump is attempted and accepted with probability $\min\{1, e^{-\Delta E/T}\}$; the dipolar fields acting on the other particles are recalculated; and finally the whole process is repeated N times.
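A schematic of one such Monte Carlo step; `barriers_of` stands in for the numerical search over the minima and barriers of Eq. (6), and `recompute_fields` for the dipolar field update, neither of which is spelled out here:

```python
import numpy as np

def mc_step(thetas, temperature, barriers_of, recompute_fields, rng):
    """One MC step: N single-spin trials between local energy minima, accepted
    with Metropolis probability min(1, exp(-dE/T)); dE is a true energy barrier."""
    n = len(thetas)
    fields = recompute_fields(thetas)
    for _ in range(n):
        i = rng.integers(n)
        minima, barriers = barriers_of(i, thetas, fields)  # numerical search, Eq. (6)
        if len(minima) < 2:
            continue                       # a single minimum: no over-barrier jump
        k = rng.integers(len(minima))      # trial minimum, equal probabilities
        if rng.random() < min(1.0, np.exp(-barriers[k] / temperature)):
            thetas[i] = minima[k]
            fields = recompute_fields(thetas)  # dipolar fields change after a jump
    return thetas
```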
IV. RELAXATION CURVES: T ln(t/τ0) SCALING WITH INTERACTIONS
In this section, we present the results of MC simulations of the thermal relaxation of the magnetization obtained following the protocol described in Sec. III. The main goals are to study the variation of the relaxation law with the interaction strength g and to apply the T ln(t/τ0) scaling approach to the relaxation curves, showing how the energy barrier distributions can be obtained from this kind of analysis even when interactions among the particles are present.
FIG. 2 (caption fragment): Distributions for spin configurations achieved after an equilibration at T = 0, in which the spins have been driven iteratively towards the nearest energy minimum direction, starting from an initial FM configuration. The system has a lognormal distribution of anisotropy constants (σ = 0.5) and random anisotropy axis directions.
A. Initial configurations and effective energy barrier distributions
The studied relaxation processes are intended to mimic experiments in which the decay of the magnetization after the application of a saturating magnetic field is recorded. Therefore, the initial spin configuration should be chosen so that all spins in the chain are pointing along the z axis. However, this configuration is highly metastable even at T = 0 because, due to the randomness of the anisotropy axes, the spins will not be pointing along the local energy minima directions. If the system is initially prepared in this way (by the application of a strong external field, for example), the spins will instantaneously reorient so that they lie along the nearest minimum. This accommodation process occurs on a time scale of the order of τ0, much shorter than the thermal over-barrier relaxation times τ. Therefore, in real experiments probing the magnetization at time scales of the order of 1-10 s (i.e. SQUID magnetometry), it will not be observed. In order to get rid of this ultra-fast relaxation during the first steps of the simulations, we submit the system to a previous equilibration process at T = 0, during which the spins are consecutively placed in the nearest energy minima. Since each of these movements changes the dipolar field on all the spins, the energy minima positions change continuously, but, after a certain number of MC steps, the total magnetization stabilizes and the system reaches a final equilibrated state.
The distribution of energy barriers f(E_b) of these initial equilibrated configurations can be obtained by sampling the individual energy barriers of all the spins using the algorithm described in Sec. III. The normalized histograms obtained in this way are shown in Fig. 2 for different values of the interaction strength g. For weak interactions (g = 0.1), there are only slight changes in f(E_b) with respect to the non-interacting case. As in the case of an external homogeneous field 48, the dipolar fields shift the peak of the distribution towards higher values, while its shape is unchanged. However, when increasing g, the smallest energy barriers of particles having the smallest K start to disappear. This leads to the appearance of a peak at zero energy, to an increase in the number of low energy barriers due to the reduction by the field, and also to the appearance of a longer tail at high energies. As the dipolar interaction is increased further (g = 0.3, 0.4), the original peak around E_b ≃ 1 is progressively suppressed as more barriers are destroyed, and a secondary subdistribution peaked at high energies appears as a consequence of barriers against rotation out of the effective field direction.

The relaxation curves obtained through the computational scheme described in the previous section at different temperatures are shown in Fig. 3 for values of the interaction parameter g ranging from the weak (g = 0.1) to the strong (g = 0.5) interaction regime. We observe that the stronger the interaction, the smaller the magnetization of the initial configuration, due to the increasing strength of the local dipolar fields, which tend to tilt the equilibrium directions away from the anisotropy axis. Thus, we point out that, if relaxation curves for different g at the same T are to be compared, they have to be properly normalized by the corresponding m(0) value. As evidenced by the logarithmic time scale used in the figure, the relaxation is slowed down by the intrinsic frustration of the interaction and the randomness of the particle orientations.
More remarkable is the fact that the stronger the interaction, the slower the magnetization decay, which agrees well with the experimental results of Refs. 9,49,50. However, in contrast with other simulation works 51,52, the quasi-logarithmic relaxation regime is only found in our simulations in the strong interaction regime, at short times, and within a narrow time window that depends on T. This can be understood from the short duration of the relaxations in other works compared to ours, which were extended up to 10 000 MCS, thus confirming that the logarithmic approximation is limited to narrow time windows.
C. T ln(t/τ0) scaling in presence of interaction.
We will analyze the relaxation curves at different temperatures following the phenomenological T ln(t/τ0) scaling approach presented in previous works for noninteracting systems 53,54 and systems in the presence of a magnetic field 48,55. The method is based on the fact that the dynamics of a system of magnetic entities can be described in terms of thermal activation of the Arrhenius type over effective local energy barriers. Although one could think that this assumption is only valid for noninteracting particle systems, we would like to stress that the T ln(t/τ0) scaling approach was first successfully introduced in studies of spin-glasses, where short-range frustrated interactions prevail. In systems with dipolar interactions, although the energy barrier landscape changes as the relaxation proceeds due to the long range of the interaction, we will argue in the following sections that this fact does not preclude the applicability of scaling to low-T relaxations. In fact, the fulfillment of the T ln(t/τ0) scaling in interacting systems and the effective energy barrier distributions deduced from the corresponding master curves provide information about the energy barriers that are effectively probed during the relaxation process, even if they keep changing during the process.
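A sketch of the scaling construction itself: curves m(t; T) recorded at several temperatures are replotted against the variable T ln(t/τ0) (τ0 = 0.5 MCS, as noted below) and normalized by m(0); the demonstration data here are synthetic:

```python
import numpy as np
import matplotlib.pyplot as plt

TAU0 = 0.5  # in MC steps; fixed by the simulation, not a fit parameter

def master_curve(times, magnetizations, temperatures):
    """Map each relaxation curve m(t; T) onto the variable T ln(t / tau0).
    If thermal activation over effective barriers holds, the curves collapse."""
    scaled = []
    for m, temp in zip(magnetizations, temperatures):
        scaled.append((temp * np.log(times / TAU0), m / m[0]))  # normalize by m(0)
    return scaled

# Synthetic demonstration data (not from the simulations discussed in the text)
t = np.logspace(0, 4, 200)
temps = [0.05, 0.1, 0.2]
mags = [np.exp(-0.1 * (temp * np.log(t / TAU0))) for temp in temps]

for (x, y), temp in zip(master_curve(t, mags, temps), temps):
    plt.plot(x, y, label=f"T = {temp}")
plt.xlabel(r"$T\,\ln(t/\tau_0)$"); plt.ylabel("m / m(0)"); plt.legend(); plt.show()
```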
The master curves obtained from Fig. 3, by scaling the curves along the horizontal axis by the multiplicative factor T, are presented in Fig. 4 for a range of temperatures covering one order of magnitude. Notice that in a MC simulation τ0 = 0.5 MCS, and it is not an adjustable parameter of the scaling law. First, we observe that, in all cases, there is a wide range of times for which the curves overlap. Below the inflection point of the master curve, the overlap is better for the low-T curves, whereas the high-T curves overlap only at long times above the inflection point, as in the non-interacting case 53. Moreover, it seems that scaling is accomplished over a wider range of T the stronger the interaction is, whereas in the weak interaction regime, scaling is fulfilled over a narrower range of times and T. As we will explain later, this fact is due to the different variation of the effective energy barriers contributing to the relaxation in the two regimes.
In order to see the influence of g on the relaxation laws, we have plotted in Fig. 5 the master relaxation curves for different values of the interaction parameter g, after smoothing and filtering the curves of Fig. 4. A qualitative change in the relaxation law can be clearly seen when increasing g. In the weak interaction regime (g = 0.1, 0.2), the magnetization decays to the equilibrium state with an inflection point around which the decay law is quasi-logarithmic. In the strong interaction regime (g ≥ 0.3), the decay instead follows a power law. This power-law behaviour has also been found by Ribas et al. 56 in a 1D model of Ising spins, and by Sampaio et al. 23,57 and Toloza et al. 58 in Monte Carlo simulations of the time dependence of the magnetic relaxation of 2D arrays of Ising spins under a reversed magnetic field. It has also been observed experimentally in arrays of micromagnetic dots patterned by focused ion beam irradiation of a Co layer with perpendicular anisotropy 21,22, and in discontinuous multilayers 59.
V. EVOLUTION OF f_eff(E_b) AND OF DIPOLAR FIELDS
In order to gain some insight into the microscopic mechanisms that govern the different relaxation laws in the weak and strong interaction regimes, we will examine how the distributions of energy barriers and of dipolar fields change during the relaxation process. Due to the distribution of anisotropy constants and easy-axis orientations and the non-uniformity of the T = 0 equilibrated states, it is not easy to infer the microscopic origin of the initial distributions of energy barriers shown in Fig. 2a. It turns out that histograms of the strength of the dipolar fields across the system for different values of g are useful to establish this connection since, at low T, the directions and values of the local H_dip mainly determine the first stages of the relaxation process. Let us also notice that the distribution of dipolar fields is only sensitive to the spin orientations and their positions in the lattice, and does not depend on the anisotropy constants or easy-axis directions of the particles. The computed dipolar field distributions f(H_dip), obtained by a procedure similar to that used for the energy barrier distributions, are displayed in Fig. 2b, where dipolar fields having a component in the negative y direction have been given a negative sign. Since most of the spins after the equilibration process point along the minima closer to the positive y axis, a local H_dip pointing along the negative y direction gives a higher probability for a spin to jump from a metastable state to the equilibrium state.
For weak interaction (g = 0.1), the initial f(H_dip) is strongly peaked at a value very close to the dipolar field for a FM configuration, H_dip^⊥ = −2ζ(3)g ≃ −2.404g. Dipolar fields pointing in the negative direction are scarce, indicating that the equilibrated configuration is not far from the initial FM one. In this case, the spins remain close to the anisotropy axes, since the energy minima and the energy barriers between them do not depart appreciably from the non-interacting case. This is also corroborated by the shape of f(E_b), which resembles that for g = 0.
However, in the strong interaction regime, some of the local dipolar fields are strong enough to destroy the energy barriers of the particles with lower K, and the numerous negative dipolar fields therefore originate from particles that have rotated into the local field direction. There are still positive fields, but the peak due to collinear spins blurs out with increasing g (it is visible at H_dip ≈ 0.5, 0.7 for g = 0.2, 0.3, respectively). At the same time, a second peak, centered at higher field values, starts to appear and finally engulfs the first (see the case g = 0.5). With increasing g this last peak tends to H_dip = ∓4ζ(3)g ≃ ∓4.808g, which corresponds to FM alignment of the spins along the chain direction. All these features are also supported by the distributions of dipolar field angles (see the inset in Fig. 2b), which progressively peak around θ_dip = ±π/2 with increasing interaction strength. This indicates the above-mentioned tendency of the spins to order along the chain direction when only one minimum is present.
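As a consistency check on the two field values quoted above, the lattice sums for a perfectly ferromagnetic chain (lattice constant set to 1, fields in reduced units of g) can be evaluated in closed form; the following is our reconstruction of the standard dipolar sums, with signs following the text's convention:

```latex
\hat{m}\perp\hat{r}:\quad
H_{\rm dip}^{\perp} = -2g\sum_{n=1}^{\infty}\frac{1}{n^{3}}
= -2\zeta(3)\,g \simeq -2.404\,g,
\qquad
\hat{m}\parallel\hat{r}:\quad
H_{\rm dip}^{\parallel} = +4g\sum_{n=1}^{\infty}\frac{1}{n^{3}}
= 4\zeta(3)\,g \simeq 4.808\,g.
```

The perpendicular case sums the transverse dipole field −g/n³ over both neighbours at each distance n, while alignment along the chain doubles each contribution to +2g/n³, which is why the longitudinal value is exactly twice the perpendicular one.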
In order to gain a deeper insight into the microscopic evolution of the system during the relaxation, histograms of energy barriers and dipolar fields at intermediate stages of the relaxation have been recorded after different numbers of MC steps. The results for the evolution of f(E_b) and f(H_dip) during a relaxation at an intermediate temperature T = 0.1 are presented in Fig. 6. The evolutions are markedly different in the two interaction regimes.
In the weak interaction regime, the relaxation is dominated by anisotropy barriers, so the distributions are similar to the non-interacting case. As time elapses, particles with the lowest energy barriers relax towards states with higher energy barriers. However, although the energy barriers change locally during the relaxation process, this change is compensated when averaging over the anisotropy distribution and the random orientations of the easy axes. Thus, the global f(E_b) does not change significantly as the system relaxes, although at the final stages of the relaxation the system is in a much more disordered configuration than initially. In spite of this, the distribution of dipolar fields, which is more sensitive to local changes in the spin configuration, presents evident changes with time, as can be seen in Fig. 7. As the relaxation proceeds, the high peak of positive H_dip progressively flattens, since it corresponds to particles whose magnetization is not pointing along the equilibrium direction. Particles that have already relaxed create dipolar fields in the negative direction, which are reflected in a subdistribution of negative H_dip of increasing importance as time evolves. Near the equilibrium state of quasi-zero magnetization, the relative contributions of positive and negative fields tend to be equal since, on average, there are equal numbers of "up" and "down" pointing spins.
In the strong interaction regime (g = 0.4 in Figs. 6 and 7), the dipolar fields are stronger than the anisotropy fields (H_anis) for the majority of the particles, even at the earliest stages of the relaxation process. As time elapses, the number of small energy barriers, corresponding to the particles with smaller anisotropies, continuously diminishes as they are overcome by thermal activation. When relaxing to their equilibrium state, now closer to the dipolar field direction, the particles with initially small E_b give rise to higher energy barriers and also to higher dipolar fields on their neighbours. This is reflected in an increasingly high peak in f(E_b) that practically does not relax as time elapses, making the final distribution completely different from the initial one. What is more, as more particles relax, more particles feel an H_dip > H_anis and, therefore, a higher E_b for reversal against the local field. This leads to faster changes in the dipolar field distribution and is also at the origin of the power-law character of the relaxations. Equilibrium is reached when f(H_dip) presents equally sharp peaked contributions from negative and positive fields, since in this case there is an equal number of particles with positive and negative magnetization components along the y axis.
VI. EFFECTIVE ENERGY BARRIER DISTRIBUTIONS FROM T ln(t/τ_0) SCALING
Our next goal is to extract the effective distributions of energy barriers from the master curves obtained with the T ln(t/τ_0) scaling method, and to understand what kind of microscopic information can be inferred from them in the case of interacting systems. In previous works 48,54,55, we have shown that, within the range of validity of the T ln(t/τ_0) scaling, the effective distribution of energy barriers contributing to the long-time relaxation can be obtained from the master relaxation curve simply by taking its logarithmic time derivative S(t) = dM(t)/d ln(t). The resulting distribution f_eff(E_b) represents a time-independent distribution that would give rise to a relaxation curve identical to the master curve. Unlike non-interacting systems (for which the T ln(t/τ_0) scaling formalism was initially introduced), f_eff(E_b) does not necessarily match the real energy barrier distribution of the system at hand. Fig. 8 presents f_eff(E_b) for different values of g, obtained from the master curves of Fig. 5. For weak interaction (g = 0.1), the effective distribution of energy barriers has essentially the same shape as in the non-interacting case. However, as g increases, the distribution becomes wider with respect to the non-interacting case and the mean effective barrier shifts towards lower values of the scaling variable until, for g ≳ 0.1, a contribution of almost zero energies dominates. In a sense, these features resemble the situation of a non-interacting particle system in an external magnetic field, for which the shift of f_eff(E_b) with increasing H is associated with the decrease of the energy barriers for rotation towards the field direction 48,60.
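A minimal numerical sketch of this extraction step (assuming a master curve sampled as arrays of the scaling variable E = T ln(t/τ_0) and the magnetization m) might read:

```python
import numpy as np

def effective_barrier_distribution(E, m):
    """f_eff(E_b) from the logarithmic derivative of the master curve m(E).

    Since m decays, -dm/dE is non-negative; it is normalized here like a pdf.
    """
    order = np.argsort(E)
    E = np.asarray(E, dtype=float)[order]
    m = np.asarray(m, dtype=float)[order]
    f_eff = -np.gradient(m, E)        # S = dM/dln(t), up to the sign convention
    area = np.trapz(f_eff, E)         # normalize so the distribution integrates to 1
    if area > 0:
        f_eff /= area
    return E, f_eff
```

In practice the master curve would be smoothed first (as done for Fig. 5), since numerical differentiation amplifies noise.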
When entering the strong interaction regime, the effective distribution is clearly distorted with respect to the non-interacting case, becoming a decreasing function of the energy at high g. In this regime, dipolar interactions not only modify the existing anisotropy barriers but also create high energy barriers, resulting in a more uniform effective distribution that spreads to higher energy values. This change in f_eff(E_b) is clearly related to the power-law behaviour of the relaxation in the strong-g regime and is, therefore, a genuine effect of the dipolar interaction. This striking behaviour has important consequences for the experimental interpretation of relaxation curves. A parameter often used to characterize thermal contributions to magnetic relaxation is the so-called magnetic viscosity S(T), i.e., the slope of the magnetic relaxation curve at a given T in the logarithmic-dependence range.
This change of behaviour in the effective energy barrier distributions has been observed experimentally in ensembles of Ba ferrite fine particles 49,61, for which evidence of T ln(t/τ_0) scaling of the relaxation curves was demonstrated and the relevance of demagnetizing interactions was established by means of Henkel plots at different T. In that experiment, the authors also studied relaxation processes after different cooling fields and found that, with increasing cooling field, the effective distributions changed from a function with a maximum extending to high energies to a narrower distribution peaked at much lower energy scales. The effective distribution at high H_FC, which was argued there to be given by the intrinsic anisotropy barriers of the particles, appears shifted towards lower energy values with respect to the anisotropy distribution derived from TEM, due to the demagnetizing dipolar fields generated by the almost aligned spin configuration induced by the H_FC. From magnetic noise measurements on self-assembled lattices of Co particles, Woods et al. 62 also extracted anisotropy energy distributions wider than the nanoparticle volume distributions, an effect that can be ascribed to the strong dipolar interactions in the closely packed particle lattices. Finally, a widening of the measured barrier distributions with increasing intergranular magnetostatic interactions has been observed in FePt nanoparticle systems 63 and in perpendicular media for magnetic recording 64, which is also in agreement with the results of our simulations.
By direct comparison of the curves in Fig. 5 with those in Fig. 2, it is clear that the effective energy barrier distributions derived from the master relaxation curves do not coincide with the real energy barrier distributions. In order to unveil the information given by f_eff(E_b), we have computed the cumulative histograms of the energy barriers that have actually been jumped during the relaxation process. The corresponding results are presented in Fig. 9 for systems in the weak and strong interaction regimes at T = 0.1, 0.2. Although in principle one could think that the derivative of the master curve collects jumped energy barriers of the order of T ln(t/τ_0) as time elapses, direct comparison of the curves in Fig. 9 with those in Fig. 8 reveals that the cumulative histograms overcount the number of small energy barriers at all the studied T and g. These small energy barriers, which are not seen by the relaxation, correspond to those jumped by the superparamagnetic (SP) particles, which are not blocked.
In fact, when the cumulative histograms are computed by counting only the E_b jumped by particles that had not jumped before a given time t (blocked particles), the contribution of SP particles that have already relaxed to the equilibrium state is no longer taken into account. The histograms computed in this way are presented in Fig. 10. Here we see that, when only the energy barriers jumped by blocked particles are taken into account, the resulting histograms at advanced stages of the relaxation tend to the effective energy barrier distributions derived from the master relaxation curves (dashed lines in the panels at t = 10000 MCS). The difference between the two quantities at high energy values is due to the existence of very high energy barriers that can only be surmounted at temperatures higher than those considered here, or at very long times.
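A sketch of this bookkeeping follows; it encodes our reading of the procedure, namely that each particle contributes only its first jump, so already-relaxed SP particles no longer inflate the low-energy count:

```python
import numpy as np

def blocked_jump_histogram(first_jump_t, first_jump_Eb, t, bins=50):
    """Cumulative histogram of barriers surmounted by previously blocked particles.

    first_jump_t[i], first_jump_Eb[i]: time and barrier of particle i's *first*
    jump, recorded during the MC run; first jumps occurring after time t are
    excluded from the histogram.
    """
    mask = np.asarray(first_jump_t) <= t
    hist, edges = np.histogram(np.asarray(first_jump_Eb)[mask],
                               bins=bins, density=True)
    return hist, edges
```

Comparing such histograms at t = 10000 MCS with f_eff(E_b) would reproduce the agreement shown by the dashed lines in Fig. 10.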
VII. CONCLUSIONS AND DISCUSSION
We have studied the magnetic relaxation of a simple model consisting of a spin chain with dipolar interactions, showing that these interactions are responsible for the long-time dependence of the magnetization observed in many experiments. As the strength of the dipolar interaction g is increased, the relaxation law changes from quasi-logarithmic to a power law, due to the intrinsic disorder of the system and the frustration induced by the dipolar interactions. This power-law decay has been observed in relaxation experiments under reversal fields of increasing magnitude in arrays of magnetic dots produced by high-fluence ion irradiation 22,23. MC simulations mimicking the experiments demonstrated 23,57 that this was due to the long-range character of the interaction. Recent studies on granular multilayers 59 have also revealed power-law relaxations of the thermoremanent magnetization for nominal thicknesses of the magnetic layer t_n ≥ 1.2 nm, for which superferromagnetic order between magnetic clusters was observed 65. In that case, the power-law behaviour was attributed to the relaxation of superspins with random anisotropy axes and a distribution of anisotropies inside domains, towards more perfect collinearity. Moreover, they found a finite residual magnetization at long times, which is also observed in our relaxation curves (see Figs. 3 and 4) as a consequence of the competition between the randomness in anisotropy-axis orientations and the frustration induced by the dipolar interactions. Another simulation work 66 observed power-law decays of the magnetization, approaching a finite remanent magnetization, in systems of ferromagnetic nanoparticles with dipolar interactions at high concentrations; its authors explained these results by assuming that the relaxation rate followed a power-law decay with time. Here, instead, we have been able to deduce directly the distribution of energy barriers responsible for this spin-glass-like time dependence and to see that it coincides with the distribution deduced from the master relaxation curve. This energy barrier distribution is broadened and has an increasing contribution of small energy barriers as the dipolar interaction g increases. These two features are in accordance with the experimentally observed broadening of the relaxation rates with respect to non-interacting particle systems found in relaxation experiments on nanosized maghemite particles 67 and granular multilayers 65.
Although our results have been obtained for a one-dimensional chain of magnetic entities, we believe that similar conclusions can be drawn for systems with higher effective dimensionality as long as their magnetic behaviour is dictated by long-range interactions.
We have proved that, within the scope of our model, the phenomenological T ln(t/τ_0) scaling presented in previous works for non-interacting systems 53,54 and for systems relaxing in the presence of a magnetic field 48,55 is also valid for interacting systems, within limits similar to those for non-interacting systems. From the master relaxation curves obtained by applying this method, we have shown that effective energy barrier distributions can be extracted, giving valuable information about the microscopic energy barriers responsible for the relaxation. Moreover, with this method, the variation of these energy barrier distributions can be monitored as a function of the dipolar interaction strength, information that cannot be measured directly.
For weak interactions (dilute systems), the effective energy barrier distributions shift towards lower E_b values with respect to the non-interacting case and become wider as the strength of the dipolar interaction g increases, in qualitative agreement with experimental results. However, for strong interactions (dense systems), the energy barrier distributions become a decreasing function of energy, with an increasing contribution of quasi-zero barriers as g increases. We believe that these two behaviours can reconcile the contradictory explanations 43,68,69 given to account for the variation of the blocking temperature T_B with particle concentration in terms of energy barrier models. For weakly interacting systems, the energy barriers relevant to the observation time window decrease with increasing interaction, and consequently the same behaviour is expected for T_B. This corresponds to the observations by Mørup and colleagues 9,70 in Mössbauer experiments on maghemite nanoparticles.
However, when interparticle interactions are strong enough to dominate over the disorder induced by the distribution of anisotropy axes, we have shown that the dynamic effects are governed by an effective energy distribution that broadens towards higher energies as g increases. Consequently, an increase in the blocking temperature is expected, as observed in the ac susceptibility measurements on Co clusters by Luis et al. 69.
| 8,531.6 | 2003-11-06T00:00:00.000 | [ "Physics" ] |
Hermite Interpolation Based Interval Shannon-Cosine Wavelet and Its Application in Sparse Representation of Curve
When a wavelet transform defined on the infinite domain is used to process a signal defined on a finite interval, the wavelet transform coefficients at the boundary are usually very large, which causes a severe boundary effect and reduces the calculation accuracy. The construction of interval wavelets is the most common way to reduce this boundary effect. By studying the properties of the Shannon-Cosine interpolation wavelet, an improved version of the wavelet function is proposed, and the corresponding interval interpolation wavelet, based on Hermite interpolation extension and the variational principle, is designed; it possesses almost all of the desired properties, such as interpolation, smoothness, compact support and normalization. Then, a multi-scale interpolation operator is constructed, which can be applied to select sparse feature points and to reconstruct the signal adaptively from these sparse points. To validate the effectiveness of the proposed method, we compare it with the Shannon-Cosine interpolation wavelet method, the Akima method, the Bezier method and the cubic spline method, taking the infinitely differentiable function cos(x) and an irregular piecewise function as examples. In the reconstruction of cos(x) and the piecewise function, the proposed method reduces the boundary effect at the endpoints. With the same interpolation points, the maximum error, average absolute error, mean square error and running time are 1.20 × 10^−4, 2.52 × 10^−3, 2.76 × 10^−5, 1.68 × 10^−2 for cos(x) and 4.02 × 10^−3, 4.94 × 10^−4, 1.11 × 10^−3, 9.27 × 10^−3 for the piecewise function, respectively. All four indicators are lower than those of the other three methods. When reconstructing an infinitely differentiable function, the curve reconstructed by our method is smoother and satisfies C^2 and G^2 continuity. Therefore, the proposed method can better realize the reconstruction of smooth curves, improve the reconstruction efficiency and provide new ideas for curve reconstruction methods.
Introduction
In modern industrial manufacturing, curve and surface modeling is one of the most important technologies. It plays an important role in numerical fitting and approximation; computer vision; the streamlined design of crafts and the outline design of aircraft, automobiles and ships; the restoration of cultural relics; geological surface interpolation [1]; reverse engineering [2,3]; and other fields. Moreover, it is the core of computer-aided geometric design (CAGD) [4].
In practical applications, interpolation techniques [5,6] are often used to reconstruct signals accurately from a set of interpolation points. Common interpolation methods include the Bezier method [7], the B-Spline method [8,9] and the non-uniform rational B-Spline (NURBS) method [10,11]. Other methods are based on these classical ones: the interpolation basis function method [12,13], the interpolation geometry iteration method [14], the interpolation subdivision method [15], etc. The subdivision method is applicable to any topology structure, but unfortunately it is difficult to obtain the analytical expression of the resulting curve. Subdivision is an iterative refinement process from coarse to fine; it generates a multi-level sequence of the model, that is, the model is transformed from low resolution to high resolution, which is similar to multi-resolution analysis [16,17]. In fact, wavelet analysis is an important way to realize multi-resolution analysis. Many studies have used wavelet analysis theory [18,19] with interpolation properties to approximate curves of different smoothness, making effective progress in suppressing the boundary effect brought by the wavelet transform and in further improving the approximation accuracy.
The Shannon-Cosine wavelet [20][21][22] has been proposed recently. It not only has excellent properties such as normalization, interpolation, the two-scale relation, compact support, smoothness and an analytical expression, but the support interval and smoothness of the wavelet function can also be controlled adaptively through its parameters. The merits of the Shannon-Cosine interpolation wavelet have been verified in the solution of fractional partial differential equations [20].
However, the wavelet transform is defined on an infinite interval while signals are defined on finite intervals; direct use of the wavelet transform therefore produces a large boundary effect [23,24]. Constructing interval wavelets is the most common way to address the boundary effect. An interval wavelet based on the generalized variational principle is constructed from a boundary extension, that is, the extension is mapped into the wavelet function by the generalized variational principle. Common extension methods [25,26] include symmetric extension [27,28], zero extension [29], periodic extension and mirror extension [30]. Each method has its applicable scope; for example, periodic extension is only suitable for periodic functions, and mirror extension is only suitable for signals with Neumann boundary conditions. There are several common methods for constructing interval interpolation wavelets on bounded intervals, such as extrapolation [31], spline interpolation [32][33][34], Newton interpolation, polynomial methods [35][36][37] and the central affine transformation method. The Lagrange extrapolation method [38] is the most commonly used for constructing interval wavelets. However, when the gradient of the approximated function is large, interpolation points need to be added, and the resulting Gibbs phenomenon [39] also causes errors. Chebyshev-polynomial wavelets [35] require weights in their scalar products, which may make it difficult to balance the relative significance of their coefficients. Mei et al. [24] constructed a dynamic interval wavelet based on the Newton interpolation method, which can dynamically select the extrapolation points of the interval wavelet and limit the boundary effect without increasing the calculation amount. Bin and Michelle [28] constructed an interval wavelet based on symmetry on the interval [0, 1]. Wei et al. [40,41] constructed an interval wavelet based on the central affine transformation extension method. This extension ensures smoothness and continuity at the extension boundary, and the smoothness of the extension function in the extension interval is consistent with the original signal. However, the extension does not decay to zero in the extension interval, and the signal boundary may still contain high-frequency components, which is equivalent to moving the boundary effect from the effective interval into the extension interval. To prevent the boundary effect from returning to the effective interval of the signal, the extension interval has to be enlarged, which greatly increases the calculation amount and decreases the efficiency. Therefore, this paper constructs an interval wavelet based on Hermite interpolation, so that the extension attenuates to zero in the extension interval, essentially reducing the influence of boundary effects.
The purpose of this research is to construct a novel algorithm for the reconstruction of curves by means of the Shannon-Cosine interpolation wavelet and Hermite interpolation extension. In this scheme, the Shannon-Cosine interpolation wavelet possesses many excellent numerical properties, such as interpolation, compact support, symmetry, smoothness and an analytical expression. The Hermite interpolation extension can eliminate the boundary effect, as it is smooth on the interval [−∞, ∞]. This paper is organized as follows. We first review the properties of the Shannon-Cosine wavelet. Then, we construct the interval Shannon-Cosine interpolation wavelet based on Hermite interpolation and the variational principle. Third, we design a multi-scale interpolation operator based on the interval Shannon-Cosine interpolation wavelet and prove some of its properties. Finally, we present numerical examples of curve reconstruction.
Shannon-Cosine Interpolation Wavelet
The Shannon wavelet possesses almost all the desirable numerical properties, such as interpolation, smoothness, continuity, orthogonality, fast calculation speed and infinite differentiability, except compact support, which greatly limits its applications.
To make use of the excellent properties of the Shannon wavelet, researchers have proposed many ways to improve it. These methods usually improve the compact support by introducing window functions [42,43], such as the Meyer window [44], the Nuttall window, the Blackman window [45], the Gauss window [46,47], etc. As in the windowed Fourier transform, the windowing function accelerates the decay of the Shannon wavelet function, but it also destroys the normalization property of the Shannon function. Hoffman and Wei constructed the Shannon-Gabor wavelet [48] by applying a Gauss window to the Shannon wavelet; it is called a quasi-wavelet because it satisfies neither the compact support nor the normalization property. When approximating a signal, the original signal is amplified or attenuated, which limits the application scope of the Shannon-Gabor wavelet. Therefore, windowing is not recommended [20,21].
To satisfy the normalization condition and improve the compact support of the Shannon wavelet, we use a linear combination of cosine functions, instead of a Gaussian window, as the modulating factor. The Shannon-Cosine wavelet scale function used in this paper can be expressed as S_c(x) = sinc(x) · Σ_{n=0}^{m} a_n cos(2πnx/N) · R_N(x), where the sum is a linear combination of cosine functions, the a_n are combination coefficients controlling the smoothness of the function, R_N(x) is the truncation factor and χ(x) is the Heaviside function, χ(x) = 1 for x ≥ 0 and χ(x) = 0 for x < 0. It should be pointed out that this Shannon-Cosine wavelet function differs from the one proposed in Reference [20], in which the parameters a_n are used only to control the smoothness of S_c(x). The S_c(x) of Reference [20] is a continuously differentiable function in the interval (−N/2, N/2), but it is not always continuous at the endpoints of the interval. To enforce continuity at the endpoints x = ±N/2, we require

d^n/dx^n S_c(x) |_{x = N/2} = 0, n = 0, 1, . . . , m + 1.  (5)

Since sinc(x) is infinitely differentiable and R_N(x) is just a truncation factor, Equation (5) is equivalent to the corresponding conditions on the cosine combination (Equation (6)). As an interpolation function, S_c(x) also satisfies S_c(k) = δ_{k,0} at integer points k. Substituting x = N/2 (or x = −N/2) and x = 0 into Equations (1) and (6), we obtain the linear algebraic equations for the parameters a_i (i = 0, 1, . . . , m). Mei and Gao [20] gave the recurrence formula for a_n in detail, where a_0 is obtained by solving Σ_{i=0}^{m} a_i = 1 and the values of a_i (i = 1, 2, . . .) depend on m, as given in Table 1. The wavelet function S_c(x) is obviously equivalent to the one proposed in Reference [20] for m ≤ 3. Generally, m = 3 already guarantees the smoothness of the curve, and we select m = 3 in our experiments.

Proof of Theorem 1. A smooth function is an infinitely differentiable function, so we only need to prove that S_c(x) is infinitely differentiable. Let f(x) = sinc(x) and g(x) = Σ_{n=0}^{m} a_n cos(2πnx/N). Obviously, f(x) and g(x) are infinitely differentiable functions.
In the interval (−N/2, N/2), R_N(x) = 1, so S_c(x) can be written as the product f(x)g(x) (Equation (9)). According to the Leibniz formula, the nth derivative of S_c(x) is a finite sum of products of derivatives of f(x) and g(x), all of which exist. It can thus be proved that S_c(x) is a smooth function in the interval (−N/2, N/2).
Next, we prove the continuity of S_c(x) at the endpoints x = ±N/2. For the function proposed in this paper (Equation (2)), this is obvious. For the S_c(x) of Reference [20], we give the proof as follows.
It is known that when m = 3, a_0 = 5/16; direct evaluation then shows that S_c(x) is continuous at x = N/2, and similarly S_c(x) is continuous at x = −N/2. It can therefore be proved that S_c(x) is a smooth function over its whole support. This completes the proof. Figure 1 compares the Shannon scale function with the Shannon-Cosine interpolation wavelet scale function. The figure shows that the Shannon-Cosine interpolation wavelet is a truly compactly supported function, not a quasi-wavelet like the Shannon-Gabor wavelet.
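For concreteness, a small Python sketch of S_c(x) follows. The coefficient set is an assumption on our part: it is the expansion of cos^6(πx/N), which satisfies Σa_i = 1 and reproduces the value a_0 = 5/16 quoted above for m = 3, but it should be checked against Table 1 of the paper:

```python
import numpy as np

# Assumed window coefficients for m = 3 (expansion of cos^6(pi*x/N));
# a0 = 5/16 matches the text, the rest should be verified against Table 1.
A_M3 = [5/16, 15/32, 3/16, 1/32]

def shannon_cosine(x, N=8, a=A_M3):
    """Shannon-Cosine scaling function: sinc(x) times a truncated cosine window."""
    x = np.asarray(x, dtype=float)
    window = sum(an * np.cos(2 * np.pi * n * x / N) for n, an in enumerate(a))
    support = (np.abs(x) <= N / 2).astype(float)   # Heaviside truncation R_N(x)
    return np.sinc(x) * window * support           # np.sinc(x) = sin(pi x)/(pi x)

# Interpolation property: S_c(0) = 1 and S_c(k) = 0 at non-zero integers k
print(np.round(shannon_cosine(np.arange(-4, 5), N=8), 6))
```

Because this window has a zero of order six at x = ±N/2, the function and its first few derivatives vanish there, consistent with condition (5).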
Construction of Interval Shannon-Cosine Interpolation Wavelet Based on Hermite Interpolation Extension
The wavelet transform is defined by a convolution operation, and the wavelet basis function is a smooth function. When the signal is convolved with the wavelet function, if the signal does not vanish at its end, then within the support of the wavelet one side has non-zero values while the other side is zero, which produces high-frequency information and causes the boundary effect. In this paper, we construct an interval interpolation wavelet based on Hermite interpolation extension and the variational principle. We assume that the function value and the first-order derivative at the outer end of the extension interval are both zero, while at the inner end they equal the function value and first-order derivative of the signal at its boundary. A smooth function is then obtained by two-point cubic piecewise Hermite interpolation, which keeps the signal smooth and continuous at the junction between the extension interval and the effective interval and makes it decay smoothly to zero in the extension interval; the boundary effect of the wavelet transform is thereby greatly reduced.
Extension Method Based on Hermite Interpolation
Piecewise cubic Hermite interpolation is a basic method for function fitting and interpolation. Given n + 1 interpolation points x_0 < x_1 < . . . < x_n together with the function values y_0, y_1, . . . , y_n and derivative values m_0, m_1, . . . , m_n of f(x) at these points, one constructs, on each subinterval [x_i, x_{i+1}] (i = 0, 1, . . . , n − 1), an interpolating polynomial of degree not exceeding three; the result is the piecewise cubic Hermite interpolation function through (x_0, y_0), (x_1, y_1), . . . , (x_n, y_n). For the left extension we impose H(a − c) = 0 and H′(a − c) = 0, because the extension function must be smooth and decay to zero in the extension interval, while its value and first derivative at the signal boundary match those of the signal. With these conditions, Equation (12) simplifies accordingly. The extension function at the other end is found in the same way.
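A minimal sketch of the left-end extension in Python (a hypothetical helper written directly from the two-point Hermite basis; the right end is symmetric):

```python
import numpy as np

def left_extension(x, a, c, fa, dfa):
    """Cubic Hermite extension on [a - c, a] that decays smoothly to zero.

    Enforces H(a - c) = H'(a - c) = 0 at the outer end and matches
    H(a) = fa, H'(a) = dfa (the signal's boundary value and slope).
    """
    t = (np.asarray(x, dtype=float) - (a - c)) / c   # t in [0, 1]
    h01 = -2 * t**3 + 3 * t**2                       # basis weighting H(a)
    h11 = t**3 - t**2                                # basis weighting H'(a)
    return fa * h01 + c * dfa * h11

# e.g. extend f(x) = cos(x), defined on [0, 2*pi], to the left with c = 1:
x_ext = np.linspace(-1.0, 0.0, 50)
y_ext = left_extension(x_ext, a=0.0, c=1.0, fa=np.cos(0.0), dfa=-np.sin(0.0))
```

It is easy to verify that H and H′ vanish at t = 0 and reproduce fa and dfa at t = 1, which is exactly the boundary condition set out above.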
To evaluate the generality of the proposed method, we consider three cases in which the first-order derivative of the signal at the boundary is positive, zero and negative, respectively. The blue curve represents the original signal and the red one the extension function. The results are illustrated in Figure 2. As illustrated there, the extension makes the signal smoother at the boundary in all three cases. Moreover, it makes the signal at both ends of the curve decay smoothly to zero, which effectively reduces the boundary effect.
Multi-Scale Interpolation Wavelet
Let φ(x) be the basis function of the interpolation wavelet; the scale function sequence can be defined as φ_{j,k}(x) = φ(2^j x − k). By means of this basis, we construct a sequence of subspaces V_j of L^2(0, 1). Because the scale function φ(x) has the interpolation property, we obtain φ_{j,k}(n2^{−j}) = δ_{n,k}, and the interpolation operator can be defined accordingly. By means of the definition of V_j, the wavelet function space W_j ⊂ V_{j+1} can be defined through ψ_{j,k} = φ_{j+1,2k+1}. Letting y_{j,k} = x_{j+1,2k+1}, we can easily obtain the expression of the wavelet function ψ_{j,k}. Obviously, V_{j+1} = V_j ⊕ W_j, so the spaces V_j constitute a multi-resolution analysis.
For any function f(x) ∈ L^2(0, 1), we can always find a large enough J such that f_J ∈ V_J approximates f(x) arbitrarily closely. Denoting the coefficients of the wavelet functions and scale functions by α_{j,k} and β_{j_0,k}, respectively, we obtain the expansion of Equation (19), where β_{j_0,k} = f(x_{j_0,k}) and x_{j_0,k} is the wavelet feature point on layer j_0. The wavelet coefficients α_{j,k} can be expressed as in Equation (20), where y_{j,k} = x_{j+1,2k+1} and Q_j denotes the wavelet interpolation operator on layer j. According to Equation (20), the interpolation wavelet coefficient α_{j,k} has an intuitive geometric meaning: it is the error between the actual value and the reconstructed value at y_{j,k}.

Theorem 3. The multi-scale interpolative wavelet transform matrix C^{j,J}_{k,n} based on the Shannon-Cosine wavelet can be defined as in Equation (21); when j = j_0, the wavelet transform coefficients can be expressed through this matrix (Equation (22)).

Proof of Theorem 3. According to Equations (19) and (20), we obtain the expression for the interpolation wavelet coefficients (Equation (24)). To construct a uniform multi-level interpolation wavelet operator, the coefficients α_{j,k} of the interpolation wavelet need to be expressed as a weighted sum over the wavelet features on layer J, so a restriction operator [49] is introduced, which can be expressed as [50] R^{j,J}_{k,n} = 1 if x_{j,k} = x_{J,n} and 0 otherwise (Equation (25)). By means of this restriction operator we obtain Equation (26). Substituting Equation (26) into Equation (24) yields Equation (27), where k, n ∈ {0, 1, 2, . . . , 2^j} and 0 ≤ j ≤ J − 1. Substituting Equation (23) into Equation (27) gives Equation (28). Obviously, removing the sum over f(x_{J,n}) on both sides of Equation (28) yields Equation (21). This completes the proof.
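As an illustration of the geometric meaning of α_{j,k}, here is a sketch under the simplifying assumptions that the shannon_cosine helper from the previous section serves as the basis φ and that boundary extension is ignored:

```python
import numpy as np

def interp_operator(f_vals, j, x, phi=lambda u: shannon_cosine(u, N=8)):
    """I_j f(x) = sum_k f(x_{j,k}) phi(2^j x - k), with x_{j,k} = k / 2^j on [0, 1]."""
    f_vals = np.asarray(f_vals, dtype=float)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    k = np.arange(len(f_vals))
    basis = phi(2**j * x[None, :] - k[:, None])     # shape (len(k), len(x))
    return np.sum(f_vals[:, None] * basis, axis=0)

def wavelet_coeffs(f, j):
    """alpha_{j,k} = f(y_{j,k}) - (I_j f)(y_{j,k}), with y_{j,k} = x_{j+1,2k+1}."""
    xj = np.arange(2**j + 1) / 2**j                 # level-j collocation points
    y = (np.arange(2**j) + 0.5) / 2**j              # level-(j+1) odd midpoints
    return f(y) - interp_operator(f(xj), j, y)
```

Each coefficient is literally the interpolation error at the new midpoint, which is the geometric interpretation stated after Equation (20).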
Proof of Theorem 4. According to Equation (20), the coefficients satisfy Equation (30). Substituting Equations (23) and (26) into Equation (19), we obtain Equation (31). The approximating function on the jth layer can be regarded as the result of applying the interpolation operator of that layer to the wavelet collocation points (Equation (32)). Combining Equations (31) and (32) gives Equation (33); removing the common operator from both sides of Equation (33) yields Equation (29). This completes the proof.
Construction of Interval Interpolation Wavelet
The Shannon-Cosine interpolation wavelet has excellent compact support, so interpolation basis functions can be obtained as follows. Assume the approximated function is f(x) and the solution domain is [a, b]; take R = 2^j + 1 (j ∈ Z) discrete points x_0, x_1, . . . , x_{2^j} in the given interval, and take L extension points x_{−L−1}, x_{−L}, . . . , x_{−1} and x_R, x_{R+1}, . . . , x_{R+L} on either side of the boundary. This yields the interpolation basis function of Equation (35), in which a_{nk} is a vector composed of the approximate values of the unknown function f(x) at the discrete points and b_{nk} is a vector composed of the values at the extension points on both sides of the boundary. Here φ(x) is an auto-correlation function, given by φ(x) = ∫_{−∞}^{∞} φ(y)φ(y − x) dy, where φ(y) is a scaling function. The approximate interpolation wavelet expression of f(x) then follows. The interval wavelet function is sketched in Figure 3, in which curve I represents the original signal, I_1 the left extension interval wavelet function and I_2 the right extension interval wavelet function. It can be observed that the interval Shannon-Cosine interpolation wavelet makes the original signal continuous and smooth at the boundary and makes it decay smoothly to zero. The error estimation formula involves f_j(x_k), the given value at the adaptively retained point x_k, and HS_L(x_n) and HS_R(x_n), the values at the external points on the left and right sides, respectively, obtained by Hermite interpolation. From this formula it can be concluded that the error is related to the gradient at the boundary.
Results and Discussion
The purpose of the proposed method is to provide a novel sparse representation for curve reconstruction. We therefore take the infinitely differentiable smooth function cos(x) and an irregular piecewise function to evaluate its performance. To fully evaluate the proposed algorithm, we use both subjective and objective evaluation; the objective evaluation includes error analysis and smoothness analysis.
To describe the error between the reconstructed curve and the original curve intuitively, we calculate the maximum absolute error, the average absolute error and the mean square error between the two curves, sampled at a fixed step h.
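A one-function sketch of these three indicators (running time would be measured around the reconstruction call itself):

```python
import numpy as np

def curve_errors(f_true, f_rec, lo, hi, h=0.01):
    """Maximum error, mean absolute error and mean square error at step h."""
    x = np.arange(lo, hi + h, h)
    e = np.abs(f_true(x) - f_rec(x))
    return e.max(), e.mean(), np.mean(e**2)
```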
There are two different ways to evaluate the smoothness of the reconstructed curves: one is parameter continuity [51] and the other is geometric continuity [52].
Parametric continuity means that a curve has equal left and right derivatives, up to the kth-order derivative, at an interpolation point; the curve is then kth-order parametrically continuous at this point, denoted C^k continuity.
(1) C^0 Continuity: The curve is continuous, without breaks, at the interpolation point.
(2) C^1 Continuity: The two adjacent curve segments on both sides of the interpolation point have the same first-order derivative at that point.
(3) C^2 Continuity: The two adjacent curve segments on both sides of the interpolation point have the same first-order and second-order derivatives at that point.
Geometric continuity means that, if the derivatives of a curve up to kth order are proportional (rather than equal) on both sides of an interpolation point, the curve is kth-order geometrically continuous at this point, denoted G^k continuity.
(1) G^0 Continuity: The curve is continuous without breakpoints at the interpolation points, so G^0 continuity coincides with C^0 continuity.
Selection of Interval Wavelet
If we directly use the Shannon-Cosine interpolation wavelet or the Shannon-Gabor wavelet to reconstruct a curve with non-zero function values at the endpoints, large errors are generated at the endpoints during the interpolation wavelet transform and are transmitted into the interval, so the result is not satisfactory, as illustrated in Figure 4. If, instead, we use the interval Shannon-Cosine wavelet or the interval Shannon-Gabor wavelet to reconstruct the smooth curve, better results are obtained, as shown in Figure 5.
As shown in Figure 5, the interval Shannon-Gabor wavelet performs much better, but the reconstructed curve is only first-order derivative continuous and its curvature has a breakpoint at the boundary, so it satisfies only C^1 continuity. The first-order derivative and the curvature of the curve constructed by the proposed method are both continuous, so it satisfies C^2 and G^2 continuity; the reconstructed curve is also smoother, indicating that the proposed method is indeed feasible.
Adaptive Selection of Extension Intervals and Interpolation Points
The size of the extension interval directly affects the accuracy and speed of curve reconstruction. If the extension interval is too small, the boundary effect is not well suppressed; if it is too large, the calculation amount increases greatly and the efficiency decreases. An appropriate choice of extension interval therefore improves both the accuracy and the speed of curve reconstruction.
For different extension intervals, the multi-scale interpolation operator can automatically adjust the scale according to the change of the gradient. As is well known, the wavelet transform can detect singularities: when a singular function is transformed, the wavelet coefficients obtained at the singular points are very large. Because wavelets are localized, they can locate the positions of singularities. Wavelet-based adaptive sampling automatically densifies the collocation points near singularities and keeps them sparse in smooth regions. Concretely, the algorithm sets a wavelet coefficient threshold ε, and collocation points whose wavelet coefficients are smaller than ε are discarded.
For convenience, we rewrite Equation (19) in the form of Equation (42).

Theorem 5 ([53]). For any ε, there is a positive constant C that satisfies the bound of Equation (43).

Theorem 5 shows that, in the approximation of Equation (42), the part whose wavelet coefficients are smaller than the threshold can be omitted; that is, the wavelets whose coefficients satisfy Equation (44) are discarded, where a denotes the scale base, normally taken as a = 2. By omitting wavelets whose coefficients are below the threshold, the collocation points corresponding to these wavelets are also omitted. According to Theorem 5, the wavelet multi-scale interpolation operator takes points densely where the gradient changes rapidly and sparsely where it changes slowly, as illustrated in Figure 6.
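A simplified sketch of the resulting point selection (non-hierarchical for brevity, reusing the hypothetical wavelet_coeffs helper above; a full implementation would refine level by level around the retained points):

```python
import numpy as np

def adaptive_points(f, J, j0=3, eps=1e-3):
    """Coarse grid plus all midpoints whose |alpha_{j,k}| exceeds the threshold eps."""
    pts = list(np.arange(2**j0 + 1) / 2**j0)        # coarsest collocation points
    for j in range(j0, J):
        alpha = wavelet_coeffs(f, j)                # interpolation errors at midpoints
        y = (np.arange(2**j) + 0.5) / 2**j
        pts.extend(y[np.abs(alpha) >= eps])         # keep points only where needed
    return np.sort(np.array(pts))
```

Points then cluster automatically around large-gradient regions, which is the behaviour illustrated in Figure 6.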
As shown in Figure 6a-d, when the extension interval is 1, 2, 3 and 4, the number of adaptive interpolation points is 64, 34, 21 and 22, respectively. Hence, the wavelet multi-scale interpolation operator can construct a smooth curve with as few points as possible, and the points are placed more reasonably.
Numerical Examples
To validate the effectiveness of the proposed method, we conducted two numerical examples and compared the results with three classical curve construction methods: the Akima method, the Bezier method and the cubic spline method, under the following experimental conditions (OS: Windows; CPU: Intel(R) Core(TM) i5-1035G1 @ 1.19 GHz; Memory: 16 GB; MATLAB version 9.4.0.813654 (R2018a)).
Reconstructing an infinitely differentiable smooth function with a general interpolation method usually produces serious boundary effects. We use the four methods for reconstruction. Visually, when the numbers of interpolation points are 21 and 34, respectively, the reconstructed curves are smooth and continuous, with no obvious difference from the original curve. Since the visual differences are slight, we compare the four methods more intuitively in terms of numerical error; the results are shown in Figure 7. Figure 7a shows the errors between the reconstructed curves and the original curve when the number of interpolation points is 21, and Figure 7b shows the errors when the other three methods use 34 interpolation points. With 21 interpolation points, the proposed method has the smallest error at the boundary among the four methods, effectively reducing the boundary effect. When the number of interpolation points is increased to 34, the accuracy of the other three methods improves, but their errors at the boundary remain higher than those of the proposed method.

To evaluate the performance quantitatively, we sample with step h = 0.01, calculate the maximum error, the average absolute error and the mean square error between the reconstructed and original curves on the interval [0, 2π], and record the running time; the results are shown in Tables 2 and 3. With 21 interpolation points, the maximum error, mean absolute error and mean square error of the curve reconstructed by the proposed method are smaller than those of the other methods. When the interpolation points of the other three methods are increased to 34, the precision of the Akima and Bezier methods improves but remains lower than that of the proposed method; the accuracy of the cubic spline method becomes slightly higher than that of the proposed method, but the increase in interpolation points leads to more computation and a longer running time.

To evaluate the smoothness of the reconstructed curves, we compare the four methods through parametric and geometric continuity; the results are shown in Figures 8 and 9. With 21 interpolation points, the first derivative of the curve reconstructed by the Akima method fluctuates strongly and is discontinuous, so this reconstruction has the lowest smoothness and is impractical. The first derivative of the curve reconstructed by the Bezier method is relatively smooth, but its second derivative is discontinuous; moreover, the Bezier method requires inverse calculation of the control vertices so that the curve passes through the interpolation points, which increases the amount of calculation and lowers the speed of operation. The first and second derivatives of the curve reconstructed by the cubic spline method are relatively smooth, but the middle parts of its second derivative and curvature are still not completely smooth.
When the number of interpolation points of the other methods is increased to 34, the first derivative of the curve reconstructed by the Akima method becomes relatively smooth, but its second derivative does not. The first and second derivatives of the curve reconstructed by the cubic spline method are relatively smooth, but the calculation amount grows with the number of interpolation points and the running speed decreases. Compared with the other three methods, the first derivative, second derivative and curvature of the curve reconstructed by the proposed method are all smooth; the reconstructed curve therefore satisfies C^2 and G^2 continuity.
In summary, compared with the other three methods, the proposed method has the lowest error and a shorter running time, and it meets the requirements of C^2 and G^2 continuity. It is therefore more suitable for reconstructing infinitely differentiable smooth functions.
Irregular piecewise functions are continuous but not everywhere differentiable, and general methods produce serious boundary effects at the endpoints and at the non-smooth points. We use the four methods for reconstruction. Visually, when the numbers of interpolation points are 9 and 17, respectively, there is no significant difference between the reconstructed and original curves. Since the visual differences are slight, we compare the four methods in terms of numerical error; the results are shown in Figure 10. Figure 10a shows the errors between the reconstructed curves and the original curve with 9 interpolation points, and Figure 10b the errors with 17. In both cases, the error of the proposed method at the boundary and at the non-smooth points is smaller than that of the other three methods. To evaluate the total error, we sample with step h = 0.01 and calculate the maximum error, average absolute error and mean square error between the reconstructed and original curves on the interval [−0.5, 0.5]; the results are shown in Tables 4 and 5. With 9 interpolation points, the maximum error, mean absolute error and mean square error of the curve reconstructed by the proposed method are smaller than those of the other methods. When the interpolation points of the other methods are increased to 17, the errors of the four methods become very close, but the running time of the proposed method remains lower. Thus, the proposed method achieves smaller errors and shorter running times, making it more suitable for the reconstruction of irregular curves.
Combining Numerical Examples 1 and 2, we see that the proposed method is suitable not only for infinitely differentiable smooth functions but also for irregular functions, with smaller errors, shorter running times and better flexibility in curve reconstruction.
Conclusions
In this paper, we propose an interval Shannon-Cosine interpolation wavelet based on Hermite interpolation for the sparse reconstruction of curves. First, we construct the interval Shannon-Cosine interpolation wavelet based on the Hermite interpolation extension and the variational principle. Second, we construct a multi-scale interpolation operator based on the interval wavelet to reconstruct curves accurately and sparsely. Compared with typical curve reconstruction methods, the proposed method realizes curve reconstruction better. From the numerical experiments, we draw the following conclusions: (1) compared with the Shannon-Cosine interpolation wavelet method, the interval wavelet constructed in this paper reduces the boundary effect and avoids the phenomenon of infinite oscillation; (2) the wavelet multi-scale interpolation operator constructed in this paper is sensitive to gradient changes, so sparse feature interpolation points can be selected adaptively.
Numerical Experiments 1 and 2 show that the proposed method is suitable for the reconstruction of both infinitely differentiable smooth functions and irregular functions. With the same number of interpolation points, the proposed method achieves a smaller maximum error, mean absolute error, mean square error and running time. To achieve comparable accuracy, the other methods need more interpolation points, which increases the running time. The proposed method can reconstruct a smooth curve with as few points as possible and improves the efficiency of reconstruction.
The infinitely differentiable smooth function reconstructed by the proposed method is smoother and satisfies C^2 and G^2 continuity.
| 7,802.2 | 2020-12-22T00:00:00.000 | [ "Mathematics" ] |
Strengthening/Retrofitting Techniques for Unreinforced Masonry Structures/Elements Subjected to Seismic Loads: A Literature Review
Masonry structures are common and remain popular all over the world. It has been reported and studied that these buildings are vulnerable to strong external loadings imposed by earthquakes, strong winds, blasts, etc. In the past few decades, different seismic retrofitting and strengthening approaches for masonry structures and elements have been developed and implemented. In this paper, previous studies on strengthening/retrofitting techniques for Unreinforced Masonry (URM) buildings subjected to seismic and extreme loads are reviewed and summarized. The fundamental concepts of strengthening/retrofitting are to (i) reduce the influence of the external loading, (ii) upgrade the load-carrying capacity of individual elements and (iii) improve the integrity of the masonry structure. A comparison and assessment of the advantages and disadvantages of each method is presented to identify the most suitable method for different cases. It is expected that this paper will provide helpful information and guidance for engineers and householders in choosing an appropriate technique for strengthening/retrofitting URM structures.
INTRODUCTION
Masonry is a composite material made of masonry units and mortar, and it has been used for centuries. Although masonry is an old construction material, it is still common and popular in some countries. Unreinforced Masonry (URM) buildings still make up a large portion (about 70%) of existing buildings [1]. A frequently encountered type of URM is the Masonry Heritage Structure (MHS), which carries aesthetic, social, archaeological, cultural, economic and technological value, making such buildings a real treasury of human civilization [2]. The design, construction technologies and original materials used in these masonry heritages are often drastically vulnerable to present-day hazards. Therefore, retrofitting work that helps these structures survive seismic and extreme loads is essential. In addition, masonry walls are often used as infill in reinforced concrete frames; experimental observations and analytical studies have shown that the lateral load-carrying capacity of a bare frame can be greatly improved if the frame is fully infilled with masonry. Nevertheless, observations from past earthquakes have also shown that catastrophes and loss of life can occur in such buildings. Collapse is more likely to occur in the out-of-plane direction or in partially infilled RC frames, which has led to the view that this type of structure has poor seismic performance [3]. Normally, the masonry infill is treated not as a structural element but as a secondary one. However, it should be noted that masonry infills can contribute to casualties if the building is subjected to strong external loading, especially out-of-plane loading.
A large number of masonry structures were built following only empirical rules (there were no corresponding building codes at the time), and seismic actions were not taken into consideration during construction, making these structures unable to absorb the seismic loads induced by an earthquake [4]. Consequently, URM buildings or masonry elements often need to be strengthened before seismic actions occur, or retrofitted after earthquake events, to guarantee that they can dissipate the energy and relieve the forces induced by earthquakes. Investigators have developed and implemented various technical approaches to strengthen/retrofit the mechanical performance of URM wall panels as well as whole structures. However, many strengthening or retrofitting techniques have only been studied on individual cases, so the results cannot be directly extended to other cases with different construction materials or systems. Analytical techniques are not reliable enough to assess the seismic performance of strengthened/retrofitted masonry structures, as a given strengthening or retrofitting method may behave differently in masonry made of different materials. Furthermore, there is little literature on the effect of strengthened/retrofitted masonry infill on the whole structure, as most studies were carried out on individual elements. It should be noted that a strengthened/retrofitted masonry element may change the structural period of the original structure and thus its dynamic performance under an earthquake; it is therefore more meaningful to consider the whole structure when strengthening/retrofitting. Besides, numerical simulation has not been widely applied in research on masonry strengthening/retrofitting; most previous studies were carried out only experimentally. One of the biggest obstacles is that the mechanical performance, especially the long-term behaviour of the masonry-strengthening interface, is not clearly known. Numerical simulation can be a powerful tool in such studies and can be applied to analyze the mechanical performance of retrofitted/strengthened masonry and the efficiency of the retrofitting/strengthening.
In this paper, previous strengthening/retrofitting methods are reviewed, compared and assessed, aiming to provide an overall understanding of the state-of-the-art strengthening/retrofitting techniques for masonry buildings and masonry elements.
STRENGTHENING/RETROFITTING TECHNIQUES FOR URM
So far, extensive research has been carried out on reinforcing or strengthening URM structures. The aim of retrofitting is to improve the load-carrying capacity or to delay collapse under unexpectedly large external loading. There are three concepts in retrofitting masonry structures: i) to reduce the external force; ii) to upgrade the existing building; and iii) to improve the integrity. The first two concepts have been summarized and demonstrated in a few research documents [5,6], while the third one has barely been mentioned. These concepts and their practical application are presented in detail in this section.
Base Isolation
The idea of base isolation is to uncouple the masonry building from the foundation by placing flexible pads between them, thus preventing the earthquake motions from being transmitted up through the building, or at least reducing them greatly [7], as demonstrated in Fig. (1). When the ground shakes, only a small portion of the shaking at the base is transmitted to the superstructure. Previous research has shown that appropriately chosen flexible pads can reduce earthquake-induced forces by a factor of 5 to 6 compared with structures without base isolation. In the experiments of [8], the masonry part retrofitted with isolators experienced 2.8 to 24 times smaller displacements, and the forces were reduced by 1.5 to 15 times compared with those registered for the fixed foundation. The base isolation technique is well suited to low- to mid-rise masonry buildings. In particular, base isolation is a suitable retrofitting strategy for heritage structures of historical importance, as it can preserve their original appearance, whereas conventional rehabilitation would be destructive to the appearance of the buildings [9]. However, it is so far not easy to implement this technique under an existing building. The base isolation system is usually seen as offering the maximum capacity to resist seismic loads that can be achieved without additional invasive retrofitting measures [10]. [11] retrofitted an old masonry chapel using base isolation technology, combining laminated rubber bearings and dampers, and found that this feasible technique could resist seismic loading while keeping the architectural features. Similarly, [12] found that a masonry building retrofitted with a pure-friction base isolation system showed a 50% reduction in maximum roof acceleration in comparison with the conventional fixed-base structure. The base isolation technique can also be applied in combination with other strengthening materials, for instance Fibre Reinforced Polymer (FRP): [13] concluded that the improvement from this technique alone is not sufficient, while the additional use of horizontal and vertical CFRP laminate strips greatly improves the seismic behaviour. This technique can be applied not only to structural masonry buildings but also to heavy non-structural monolithic objects, such as pinnacles: [14] implemented this system on a pinnacle to resist an earthquake action whose spectrum is compatible with the design seismic action. The technique is best applied to newly built buildings, where constructing the flexible pad is easier. Although it would be ideal to implement it on masonry heritage, since it preserves authenticity, the mechanical work would be cumbersome and might even endanger the whole structure. Besides, the cost is considerable, as the expense of installing a hybrid base isolation system can be as much as 3% of the total cost of the building [15]. It should also be noted that this technique cannot be implemented on tall buildings.
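The force-reduction effect of lengthening the structural period can be illustrated with a simple linear single-degree-of-freedom (SDOF) comparison. The sketch below is a minimal illustration, not a design calculation: the periods, damping ratios and the sine-burst ground motion are assumed illustrative values and are not taken from the studies cited above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sdof_response(T, zeta, ag, t):
    """Relative displacement of a linear SDOF oscillator under ground acceleration ag(t)."""
    w = 2 * np.pi / T
    def rhs(tt, y):
        u, v = y
        return [v, -2 * zeta * w * v - w**2 * u - np.interp(tt, t, ag)]
    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, max_step=0.002)
    return sol.y[0]

t = np.arange(0.0, 8.0, 0.002)
# Assumed pulse-like ground motion: 0.3 g sine burst, 0.5 s period, 3 s duration
ag = 0.3 * 9.81 * np.sin(2 * np.pi * t / 0.5) * (t < 3.0)

for label, T, zeta in [("fixed base (T=0.3 s, 5% damping) ", 0.3, 0.05),
                       ("isolated   (T=2.5 s, 15% damping)", 2.5, 0.15)]:
    u = sdof_response(T, zeta, ag, t)
    w = 2 * np.pi / T
    # Pseudo-acceleration = peak base shear per unit mass
    base_shear = w**2 * np.max(np.abs(u))
    print(f"{label}: peak base shear/mass = {base_shear:.2f} m/s^2")
```

With these illustrative numbers, the isolated configuration attracts a far smaller base shear than the fixed-base one, which is the mechanism underlying the experimental reductions reported above.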
Seismic Damper
The seismic damper is a mechanical device that dissipates the energy induced in a building by an earthquake. During an earthquake, seismic energy is transmitted from the substructure to the superstructure; a portion of this energy is absorbed and dissipated by the dampers, and in this way the shaking of the building is damped. Dampers were first used in tall buildings to resist wind effects and were later introduced into buildings to counter earthquake effects. Many different types of seismic dampers have been developed and applied; the types most commonly applied in low-rise buildings are viscous dampers, friction dampers and yielding dampers. In an earthquake, some energy is dissipated in the form of heat and friction by viscous dampers, resulting in a lower likelihood of failure. Friction dampers maintain integrity by redressing the floors back to their initial relative positions, while yielding dampers absorb the energy and yield before the floors do; in this way, structural failure of the building itself is prevented. The use of seismic dampers can significantly reduce seismic vulnerability as well as counter the complicated effects of unknown and uncertain interventions during the building's lifetime [16]. So far, most applications of seismic dampers have been on framed structures, and only a small number of studies have been carried out on masonry buildings [17,18], as the devices are suited to insertion within a chevron bracing system. [19] improved the seismic response of a historical chimney by using a Tuned Mass Damper (TMD); this technique improved the seismic response in terms of compressive stress value, base shear and top displacement. [20] rehabilitated an old unreinforced masonry building using a hysteretic damper system (Fig. 2). [21] has also proposed this technique to retrofit stone masonry buildings, and significant improvement was achieved. Conventional retrofitting with seismic dampers requires heavy demolition and long construction times on masonry buildings; therefore, [21] proposed an alternative, the Added Damping and Stiffness (ADAS) damper, characterized by the addition of new external concrete walls equipped with ADAS dampers, thus reducing the intervention on the initial building. Moreover, [22] retrofitted a masonry building using a combination of steel bracing and dampers; the dampers dissipated a huge amount of seismic energy and prevented excessive deformation and cracking of the masonry. However, [13] argued that this technique might not be efficient, as seismic dampers require large deformations to be effective, while masonry structures are generally rigid. So far, as previously stated, seismic dampers are more commonly applied in rehabilitating framed structures and are not yet popular for masonry structures.
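For the viscous damper case, the energy dissipated per cycle of harmonic motion has the closed form E_d = pi*c*omega*X0^2, and the added equivalent damping ratio follows from comparing E_d with the peak elastic strain energy. The snippet below is a minimal sketch with assumed, illustrative parameter values, not data from the studies cited above.

```python
import numpy as np

# Assumed illustrative parameters (not from a specific study)
c = 2.0e5      # damper coefficient [N*s/m]
k = 5.0e7      # storey lateral stiffness [N/m]
X0 = 0.01      # displacement amplitude [m]
omega = 2 * np.pi / 0.4   # circular frequency for a 0.4 s period [rad/s]

# Closed form: energy dissipated by a linear viscous damper per harmonic cycle
E_d = np.pi * c * omega * X0**2
# Peak elastic strain energy stored in the storey
E_s = 0.5 * k * X0**2
# Equivalent viscous damping ratio added by the damper
zeta_eq = E_d / (4 * np.pi * E_s)

# Numerical check: integrate the damper power c*v^2 over one cycle
t = np.linspace(0, 2 * np.pi / omega, 10001)
v = X0 * omega * np.cos(omega * t)     # velocity history
dt = t[1] - t[0]
E_num = np.sum(c * v[:-1] ** 2) * dt   # Riemann sum of F*dx = c*v^2*dt

print(f"E_d (closed form) = {E_d:.1f} J, numerical = {E_num:.1f} J")
print(f"added equivalent damping ratio = {zeta_eq:.4f}")
```

With a stiff masonry storey, the added damping ratio comes out very small, which is consistent with the caveat in [13] that rigid masonry mobilizes dampers poorly.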
To Upgrade the Element Strength
For masonry structures or masonry bearing walls, including vertical and horizontal masonry elements, upgrading the element strength can improve the load resistance of the whole structure, thus improving the ability of a masonry structure or element to resist unexpected external loading. This concept is the one most frequently applied in retrofitting/strengthening masonry structures.
Surface Treatment
General surface treatment involves attaching strengthening materials to the original structure, tied together using mortar or steel links. The most frequently used approaches in surface treatment are shotcrete and ferrocement.
Shotcrete is applied by spraying concrete over a wire mesh installed on the surface of the masonry wall (Fig. 3a). In general, the thickness of the overlay ranges from 70 mm to 150 mm [23,24]. Normally, shear dowels are needed as well to transfer shear across the masonry-shotcrete interface. Before the application of shotcrete, removal of wythes of bricks and filling of the voids should be carried out. [25] performed a series of experiments on masonry wall panels using this technique: the ultimate lateral strength was increased by nearly 3.6 times, and the stiffness at peak loading was increased by a factor of 3, although the initial stiffness was unaffected. [24] found that shotcrete jacketing on both surfaces can reduce masonry tension by an average of 50%, while one-sided shotcrete jacketing reduces the tension by about one-third. Besides, the roughness of the masonry surface plays an important role in determining the effectiveness of the retrofitting: the performance of shotcrete improves if the substrate surface is particularly rough after removal of loose or deteriorated portions [26]. Ferrocement consists of closely spaced multiple layers of wire mesh embedded in a high-strength (15-30 MPa) cement mortar layer (10-15 mm thick) [27], as shown in Fig. (3b). The mechanical properties of ferrocement depend on the mesh properties, as the mesh improves the in-plane inelastic deformation capacity by confining the masonry units after cracking. In a static cyclic test [28], this retrofitting technique increased the in-plane lateral resistance by about 150%. [29] found that only 0.29% reinforcement in the longitudinal direction can increase the out-of-plane strength of a masonry wall panel by more than 10 times.
In general, the surface treatment method can significantly improve the strength and stiffness of a masonry structure. Furthermore, both techniques effectively reduce the wall height-to-thickness ratio, so the in-plane lateral resistance, out-of-plane stability and arching action increase accordingly [30]. This technique is suitable for vertical masonry elements and would be harmful if implemented on horizontal masonry elements such as arches. Nevertheless, the shortcomings of this method are that its application is time-consuming and that it destroys the original aesthetics. Therefore, this technique is not suitable for the retrofitting of masonry heritage.
Mortar Joint Treatment
Sometimes the masonry units in a building are still of good quality, but the mortar is poor or the joints were not completely filled. In such cases, the mortar can be replaced or refilled with a new bonding material of higher strength. Grout injection and re-pointing are the most often used techniques.
Grout injection is implemented by filling the voids and cracks. [31] developed different types of grouts for filling spaces ranging in size from very narrow cracks to large voids and empty joints. This technique has been found effective at restoring the initial stiffness and strength of masonry, but it provides no significant improvement beyond the original values. Even when the grout is replaced with a material of higher strength, the improvement remains limited: [32] found that the addition of 2% Ordinary Portland Cement to the mortar made little or no difference to the ultimate acceleration resistance. However, the effectiveness of this technique can be improved if it is used in combination with other techniques: [33] conducted a study combining FRP rods with the re-pointing technique on a masonry structure, and the results showed that re-pointing combined with FRP laminates is the most effective retrofitting technique. It should be noted that this approach works efficiently only if adequate mechanical properties of the mix and its physical and chemical compatibility with the masonry to be retrofitted have been achieved [34].
In the retrofitting of masonry heritage, preservation of the original aesthetics and compatibility in terms of physicochemical and mechanical characteristics are the most important concerns [35]. The former means that the authenticity of the masonry heritage needs to be preserved after retrofitting, while the latter means that the masonry and the retrofitting material should be compatible in their physicochemical and mechanical performance. The use of incompatible retrofitting materials may initiate decay mechanisms or even lead to catastrophic results [36]. The application of grout injection and re-pointing can preserve the original appearance of masonry heritage. As previously stated, the physical and chemical compatibility between the masonry heritage and the retrofitting materials is critical, yet the interaction between retrofitting material and masonry is still not clearly known. Therefore, recent research on the design and selection of restoration mortars is interlinked with compatibility assessment to ensure the long-term durability of masonry heritage. [35] presented a methodological approach for the selection of restoration mortars based on fragility analysis: selection of the optimum mortar, complying with the set compatibility and performance requirements, is accomplished by setting requirements during the characterization of the retrofitting materials and the investigation of the masonry heritage. This technique is suitable for most masonry buildings, especially masonry heritage, as authenticity can be preserved after retrofitting; the prerequisite is that the retrofitting mortar has no detrimental effect on the original masonry. Another ideal area of application is multi-leaf masonry walls, where the connection between the different layers is poor, as well as the voids in the inner core of dry rubble stone walls. This method is popular and practical because of its minimal cost, ease of implementation and, most importantly, its sustainability.
External Steel Reinforcement
This technique consists of installing steel elements next to the original masonry element, which may or may not be tied together. During an earthquake, small cracks are expected to occur, and they will develop and propagate if the external loading exceeds the load-carrying capacity. However, the added steel system has a considerably larger stiffness and will stop the cracks in the masonry wall from propagating [37,38]. In such cases, the external load is carried by the stronger steel system, while the initial masonry may act as a secondary element rather than carrying the loads. [39] conducted research by attaching steel members directly to the masonry wall, and the results showed that the in-plane lateral strength of the reinforced wall was improved by about 4.5 times. Other studies concluded that this steel reinforcing system is significantly effective in improving the resistance, ductility and energy absorption of masonry structures [38,39]. The technique is very effective in improving the load resistance of a structure, as steel is a strong retrofitting material; the approach is therefore applicable to weak masonry structures or structures that require remarkable improvement. However, as the visible steel changes the aesthetics of the original masonry structure, it is not a suitable retrofitting approach for masonry heritage. Furthermore, the high cost is another concern for its implementation in developing countries.
Post-tensioning
In the post-tensioning strengthening method, pre-stressed reinforcement is placed along the vertical elements to improve the strength and ductility of the lateral-load-resisting system of the structure [37]. In detail, the method is carried out by drilling a hole through the masonry wall and vertically placing pre-stressed reinforcement in the drilled hole. The pre-stressed reinforcement provides a compressive force that counteracts the tensile stresses occurring in the masonry wall, thus improving its load-carrying capacity. Experiments have shown that the lateral load resistance of masonry walls can be doubled [40]. [41] implemented this approach on masonry walls in the out-of-plane direction to analyse the flexural behaviour: although the ductility of the reinforced masonry panel was not improved, the strength and stiffness were increased remarkably. However, the experiments conducted by [42] found some differences regarding ductility: the maximum strength was increased by a factor of 2.1 to 2.8, while the ductility improved by an average factor of 2.7. Similarly, [43] found that the shear capacity and ductility can be improved significantly, and the energy dissipation capacity is also increased remarkably. Besides its application on bare masonry panels, this technique can be used to improve the seismic performance of RC frames infilled with masonry walls: [44] conducted such a study and found that the engagement between the RC frame and the masonry infill was improved through this retrofitting technique, leading to a postponed failure mechanism. It should be noted that the sum of the axial force provided by the pre-stressing bar and the vertical load should remain below a certain limit; if the limit is exceeded, the ductility decreases [43].
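The statement that the prestressing force counteracts flexural tension can be expressed as a simple extreme-fiber stress check, sigma = M/S - (N+P)/A at the tension face. The following sketch uses assumed wall dimensions, loads and tensile strength purely for illustration; it is a sketch of the mechanics, not a code-compliant design check.

```python
# Out-of-plane bending check of a post-tensioned masonry wall strip
# (all dimensions and loads are assumed illustrative values)
b = 1.0      # strip width [m]
t = 0.24     # wall thickness [m]
A = b * t                 # cross-section area [m^2]
S = b * t**2 / 6          # elastic section modulus [m^3]

N = 30e3     # gravity axial force on the strip [N]
M = 8e3      # out-of-plane bending moment [N*m]
f_t = 0.1e6  # assumed flexural tensile strength of masonry [Pa]

def extreme_fiber_tension(P):
    """Net tensile stress at the extreme fiber (positive = tension) for prestress P."""
    return M / S - (N + P) / A

print(f"tension without prestress: {extreme_fiber_tension(0)/1e6:.2f} MPa "
      f"(limit {f_t/1e6:.2f} MPa)")

# Minimum prestress so that the tensile stress stays below f_t
P_req = max(0.0, (M / S - f_t) * A - N)
print(f"required prestress force: {P_req/1e3:.1f} kN")
print(f"tension with P_req: {extreme_fiber_tension(P_req)/1e6:.2f} MPa")
```

The same section would also need a compression check at the opposite face, which is where the ductility limit on the combined axial force mentioned above comes into play.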
The center core technique is similar to post-tensioning to some extent: it involves installing a reinforced core in the vertical direction of the masonry wall. The differences from post-tensioning are that the steel bar is not pre-stressed and the drilled hole is much bigger. The idea of this technique is to improve the masonry's ability to resist cracking and to increase the ductility while keeping the stiffness unchanged [45]. This method successfully doubled the strength of a masonry panel in a static cyclic test [28]. [46] has successfully applied this technique on more than 60 projects to mitigate earthquake hazard.
Mesh Reinforcement
Some of the shortcomings of the above approaches, such as added mass, can be overcome by using mesh reinforcement. FRP is the mesh reinforcement most commonly used to reinforce URM structures (Fig. 4). FRP composites were first used to retrofit or strengthen existing concrete structures; they have since been extended and applied to other (masonry, timber) structures and extensively studied [47].
In general, strengthening/retrofitting of URM walls using FRP composites can improve the strength of masonry wallets by about 1.1 to 3 times [34]. [48] found that the resistance of wallets can be improved by 13-84% in an analysis of masonry wallets retrofitted with carbon fibre; the improvement depends strongly on the structure to be retrofitted. In the study of [49], FRP was found to improve the shear resistance of masonry buildings by 3.25 times; the study concluded that, when both economy and mechanical behaviour are considered, it is better to choose unidirectional FRP laminates or fabric strips rather than two-dimensional fabrics. [50] investigated the effectiveness of FRP with two configurations, a grid arrangement and diagonal strips: asymmetrical application of the reinforcement proved ineffective in improving the shear resistance of masonry walls; moreover, the grid strips provide a better stress redistribution, resulting in a less brittle failure, while the diagonal strips are more effective in enhancing the shear capacity. FRP can be applied in combination with other strengthening/retrofitting materials: [51] found that the combined use of FRP and PP-band performed much better than either applied individually. In terms of the failure of masonry panels retrofitted with FRP, detachment of the FRP from the masonry surface plays an important role [51]: the effectiveness of the retrofitting is lost once the FRP starts debonding from the masonry surface. [52] found that the failure modes are masonry crushing, FRP rupture and debonding. Reinforcing masonry panels with FRP has the merits of little added mass, low disturbance and a relatively high improvement in strength. Nevertheless, the shortcomings of this technique are that it is expensive, requires high technical skill and changes the structure's appearance. The initial cost of FRP material is about 5 to 10 times that of steel [53], which is a major concern when choosing retrofitting approaches. In addition, the properties and performance of FRP materials, especially their long-term behaviour, are not yet thoroughly understood [54]. Moreover, FRP is normally applied by externally attaching strips or sheets to the surface of the masonry wall, which may create a water-proof barrier and prevent the natural transpiration of the masonry structure. Finally, structures strengthened in this way will be particularly weak if an epoxy-based bonding material is used with the FRP composites [30].
If FRP is too expensive for developing countries, Polypropylene (PP) bands and bamboo meshes can be alternatives. The PP band is a universal, cheap packing material with considerable elongation capacity, which has been introduced as a cheap reinforcing approach in Japan. [55] tested both retrofitted and unretrofitted masonry panels, and the results showed that the panel reinforced with PP mesh provides a higher residual strength after cracking occurs. This strengthening approach is often used to reinforce adobe masonry structures: [56] applied the technique on non-engineered (adobe) masonry in rural Nepal and found it helpful in preventing loss of material and maintaining wall integrity. The approach is best suited to low-strength masonry structures/elements; when applied to high-strength masonry structures, the effectiveness is much less significant. [51] retrofitted a brick masonry house using this technique, and the results showed that the retrofitted structure was unable to withstand severe shaking. As for bamboo meshes, [57] conducted research on retrofitting an adobe house using bamboo band meshes: the retrofitted adobe house could withstand more than twice the input energy of the non-retrofitted specimen. The advantages of PP-band and bamboo meshes are their low cost and easy availability.
Mesh reinforcement is effective not only for reinforcing vertical masonry structures but also for strengthening horizontal elements such as vaults and arches, especially when composite materials made with textile fibers in polymeric or cementitious matrices are used. [58] strengthened masonry arches and vaults with different composites (TRM, SRG and FRP), and all the experimental results showed that every reinforcement system was very efficient in increasing the maximum load. [59] used a post-strengthening method with C-FRP on masonry vaults, with a similar result. More research on retrofitting/strengthening masonry vaults and arches can be found in the work of [58,60]. [61] reviewed the strengthening of masonry arches using composites with regard to the reinforcement position, and the results pointed out that the best reinforcement position is continuous at the intrados and extrados of the arches.
Although polymer-reinforced fibers are the materials most commonly used to strengthen vaults and arches, they suffer from a lack of water vapor permeability. [62] proposed an alternative method of embedding long steel fibers and basalt textiles in the mortar to provide a steel-basalt reinforced mortar-based composite. The retrofitting results were compared with polymer composites, and both cases are effective in strengthening masonry vaults in terms of increased load and deformation capacity. [6] agrees with the work of [62] that basalt in a mortar matrix provides both higher capacity and better ductility than a polymer matrix.
Reticulatus System
The reticulatus system was recently proposed by [63] to retrofit/reinforce rubble stone masonry. The system is implemented by inserting a continuous mesh of high-strength reinforcement into the mortar joints, which are stripped back by about 40-60 mm. The reinforcement mesh is then anchored to the masonry panel with transverse metal bars at a density of 5-6 per m². Afterwards, the reinforcement and anchoring bars are covered by re-pointing mortar back into the joints. The dimension of the reinforcement mesh normally ranges from 300 to 500 mm and must be smaller than the thickness of the panel [64]. The detailed configuration of a typical reticulatus system is displayed in Fig. (5) (configuration of the reticulatus system, extracted from [63]).
In the study of [65], the reticulatus system was applied to historic masonry to investigate flexural strengthening. The results proved the improvement and the potential application of this technique; however, the improvement in bending capacity and initial stiffness was realized only under appropriate pre-tension. In the work of [66], a reticulatus system made of fibre reinforcement was used to retrofit an ancient building, providing a cross-interlock to resist the tensile stresses produced by lateral forces. Although the compression, shear and flexural strength of stone or rubble masonry walls reinforced by the reticulatus system can all be increased, the effectiveness of this reinforcing technique relies on the reinforcement mesh embedded in the mortar joints [63]. Besides differences between retrofitting materials, the improvement obtained with the same retrofitting material also varies across different masonry structures: [64] found that the shear strength of retrofitted pebble masonry was improved by about 40%, versus 17% for stone masonry with the same retrofitting technique. As this reinforcing system preserves the original aesthetics of the building, it is suitable for reinforcing fair-faced masonry or masonry heritage. Furthermore, the technique fits masonry panels of both regular and irregular shape. Similar to FRP, the reticulatus system does not add much extra mass. So far, this technique has only been applied to stone/rubble/pebble masonry structures; further investigation of its application to other masonry structures, such as brick masonry, should be conducted.
Confinement of URM with Constructional Columns
This technique involves constructional columns confining the masonry walls at all corners and wall intersections, as well as at the vertical borders of door and window openings [27]. The integrity improves much more remarkably if the constructional columns are connected to ring beams at floor levels, with both the constructional columns and the ring beams confining the masonry at the same storey. This method can improve the resistance in both the out-of-plane and in-plane directions: [67] found that the approach improved the lateral resistance by about 1.5 times and the lateral deformation and energy dissipation capacity by about 50%. [68] investigated this technique on half-scale specimens under cyclic loading, and the tests demonstrated that the energy dissipation of the wall was improved, as well as the in-plane deformability. Eurocode 8 recommends this confined system for newly built masonry structures, as the integrity of the building can be guaranteed; application of the technique to existing buildings is difficult and costly.
Confinement of URM with Ring Beam
Reinforced concrete ring beams are commonly used in masonry structures to improve their mechanical behaviour. Masonry structures confined with constructional columns and ring beams are expected to perform well in earthquakes: [69] concluded, in a study on confined masonry structures, that the mechanical performance (ductility and strength) of the masonry panels is maintained mostly by the confining elements; furthermore, more of the masonry structure's strength is preserved during an earthquake with a higher reinforcement ratio and more confining elements. In some cases, if the existing ring beam is damaged or initially weak, retrofitting/strengthening can be applied to the ring beam to restore its original function: [70] retrofitted a masonry building with a masonry ring beam reinforced with composites, and the results showed that the composite-reinforced masonry ring beam performs well in terms of load-carrying capacity. As with constructional columns, this technique is easy to install in newly built buildings.
Tie Bars
Tie bars can also be applied to increase the integrity of a masonry building. The function of the tie bar is to apply compressive stress to the masonry wall horizontally or vertically, quite similar to the post-tensioning technique. In cases where the foundation has settled unevenly and the building has inclined, tie bars can be applied to redress the inclined parts back to level. [42] carried out a series of tests on masonry panels retrofitted with vertical steel ties; the outcomes showed that the vertical ties can remarkably increase the seismic capacity of a masonry structure in both strength and ductility. It should be noted that surface treatment of the bars should be carried out carefully to avoid corrosion.
Fibre/Textile-reinforced Mortar
Generally, the mortar in a masonry structure is too weak for its tensile strength to be taken into account; hence, the tensile and flexural strength of a masonry element are usually neglected compared with its compressive strength. Mortar mixed with fibre/textile can be used to improve the tension and flexural resistance and, accordingly, the integrity of a masonry structure, since fibre/textile additives in the mortar improve its tensile strength. [71] strengthened masonry infill walls using plaster and hybrid glass fibres; the results demonstrated that this is effective not only in avoiding out-of-plane expulsion of masonry panels but also in reducing the global in-plane damage. Similarly, [72] applied the Textile-Reinforced Mortar (TRM) technique to prevent brittle failure; the results showed that the out-of-plane ductility was enhanced and the strength improved. However, it should be noted that the improvement in integrity is not as remarkable as with the above-mentioned methods.
[73] employed Steel Reinforced Grout (SRG), made by embedding ultra-high-tensile-strength steel cords in mortar, on a convex masonry substrate. However, the performance of SRG depends on the roughness of the masonry surface as well as on the curing conditions; therefore, the results of that study were insufficient for a comprehensive understanding of this technique. The technique is very similar to the re-pointing and grout injection approaches.
NUMERICAL MODEL OF RETROFITTED/STRENGTHENED MASONRY
Numerical approaches to model URM have been developed and applied in masonry research because they provide the opportunity to study the mechanical behaviour of URM more thoroughly; moreover, numerical simulation can reduce the number of experiments needed to investigate that behaviour. Although a huge number of experiments on retrofitting/strengthening have been carried out over the past few decades, only a small portion of them have been numerically simulated, and most simulation work has addressed masonry alone. The most frequently applied numerical approaches to model masonry are the Finite Element Method (FEM) and the Discrete Element Method (DEM). Case studies of modelling masonry using FEM can be found in the work of [74-78], while studies modelling masonry using DEM are explained in detail in the work of [79-82]. Most previous review papers on retrofitting/strengthening masonry structures focus on the characteristics of each approach or technique, such as cost, application and sustainability. Masonry material behaviour is very complicated to model precisely, let alone with the retrofitting/strengthening material taken into consideration; moreover, the mechanical behaviour of the interface between the original surface and the newly added surface provided by the retrofitting material needs to be determined as well.
In simulations of retrofitted masonry, the original masonry element and the retrofitting material are normally modelled separately. In the work of Wang et al. [54,83], the strengthening layer was modelled with the same properties as the original masonry wall, and the collar joint, which bonds the two masonry leaves together, was modelled as a cohesive interface element; this numerical model works well only when the properties of the retrofitting/strengthening materials are quite close to those of the URM. Mobarake et al. [84] presented a numerical platform to assess the seismic performance of unreinforced masonry buildings, comprising a basic macro-element to model the solid bricks and a rigid-interface macro-element to model the nodal regions; the constitutive equations and specifications were calibrated and characterized based on the results of past experiments. In the studies of Kalliontzis and Schultz [85,86], finite element analyses were employed to simulate masonry panels retrofitted with the post-tensioning technique: the masonry assemblies were modelled with three-dimensional eight-node linear stress elements and the post-tensioning bars with two-node linear 3D truss elements. A limitation of those models is that the parameters were not clearly explained. Besides models for masonry structures alone, some models have been used to analyze RC frames infilled with masonry panels. In the research of Soltanzadeh et al. [44], an FE model was applied to simulate an RC frame with masonry infill retrofitted using the post-tensioning technique: the concrete and brick were modelled with a smeared isotropic damage-plasticity law, the mortar with a damage-based cohesive element with a finite sliding formulation, and the reinforcement used for post-tensioning with 2-node 3D truss elements; the material properties were calibrated by comparing numerical and experimental results.
Most recently, masonry walls retrofitted with FRP have become quite common. In these retrofitting cases, the FRP-substrate interface is usually modelled as a zero-thickness interface. It should be noted that the constitutive model of this zero-thickness interface plays a significant role in simulating retrofitted masonry structures, and its mechanical behaviour should be determined carefully in advance. Maruccio et al. [87] modelled the interface using zero-thickness elements whose mechanical behaviour follows incremental plasticity theory. Malena et al. [88] applied a fracture Mode-II Cohesive Material Law (CML) to the interface element, defined in terms of shear stress and slip at the interface. Gattulli et al. [89] modelled the FRP reinforcement with truss elements, considered to carry only tensile forces, whose tensile response is linearly elastic at first and then degrades exponentially. A similar numerical model was applied in the work of Gattesco and Boem [90]: the masonry was modelled as a homogeneous material represented by a smeared crack model, the GFRP wires with truss elements, and the interface between the GFRP coating and the masonry was assumed to be perfectly attached. As in other studies, some of the parameters were estimated or assumed.
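As a concrete illustration of the zero-thickness interface idea, the sketch below implements a bilinear shear bond-slip (Mode-II traction-separation) law of the kind referred to above. The peak stress and slip values are assumed for illustration and would in practice be calibrated against bond tests; the bilinear shape is one common choice rather than the specific laws used in [87,88].

```python
import numpy as np

# Bilinear shear bond-slip law for a zero-thickness FRP-masonry interface
# (illustrative parameters; actual values must be calibrated against tests)
tau_max = 1.5e6   # peak shear (bond) stress [Pa]
s0 = 0.05e-3      # slip at peak stress [m]
sf = 0.25e-3      # slip at full debonding [m]

def tau(s):
    """Shear stress transferred across the interface at slip s (monotonic loading)."""
    s = np.asarray(s, dtype=float)
    ascending = tau_max * s / s0                     # elastic branch
    softening = tau_max * (sf - s) / (sf - s0)       # linear softening branch
    return np.where(s <= s0, ascending, np.clip(softening, 0.0, None))

# Mode-II fracture energy = area under the traction-separation triangle
G_II = 0.5 * tau_max * sf
print(f"G_II = {G_II:.1f} J/m^2")

for si in np.linspace(0.0, 1.2 * sf, 7):
    print(f"slip = {si*1e3:.3f} mm -> tau = {tau(si)/1e6:.2f} MPa")
```

The fracture energy G_II is the quantity that governs when debonding completes, which is why calibrating the interface parameters matters more than the exact shape of the curve.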
The mechanical performance of the original material, of the retrofitting material and of the interface between the new and old surfaces should be investigated and determined carefully before simulation. The numerical result may be influenced remarkably if the properties are over- or under-estimated, and also by the choice of numerical model. In Table 1, the numerical approaches reviewed in this paper are summarized to provide a more visual comparison.
Table 1. Summary of the numerical approaches reviewed in this paper.

| Study | Modelling of elements | Modelling of interface |
|---|---|---|
| Kalliontzis and Schultz [85,86] | Masonry is modelled with eight-node linear elements; the post-tension bars with two-node linear truss elements. | The post-tension bars are unbonded, so there is no interface effect between the bars and the surrounding masonry. |
| Soltanzadeh et al. [44] | Concrete and brick are modelled with a smeared isotropic damage-plasticity law; the post-tensioning bars with truss elements. | The mortar is modelled with a damage-based cohesive element with a finite sliding formulation. |
| Maruccio et al. [87] | Brick and mortar are modelled with eight-node quadrilateral isoparametric plane-stress elements; the FRP is simulated by a curved beam element. | The interface is modelled with six-node curved zero-thickness elements; the constitutive law follows incremental plasticity theory. |
| Malena et al. [88] | (not specified) | The interface element is modelled with a fracture Mode-II cohesive material law. |
| Gattulli et al. [89] | FRP is modelled with truss elements carrying only tensile forces; a total strain rotating crack model simulates the masonry panel. | Interface elements are not used in this model. |
| Gattesco and Boem [90] | Masonry is modelled as a homogeneous material represented by a smeared crack model; GFRP wires are modelled with truss elements. | The interface between the GFRP coating and the masonry is assumed to be perfectly attached. |
Appropriate modelling of a masonry building is a prerequisite for the design and assessment of masonry structures against earthquake and extreme loads; therefore, the selection of the modelling approach is critical. [91] proposed a stochastic computational framework for the seismic assessment of masonry heritage. The proposed methodology consists of ten distinct steps [92]: 1) obtain historical and experimental documentation about the masonry heritage; 2) characterize the materials composing the structure; 3) select the structural model (3D FEM in the authors' research); 4) define the actions to be resisted; 5) perform the analysis; 6) establish a failure criterion and damage index; 7) conduct the seismic vulnerability assessment by applying fragility analysis; 8) make repairing/strengthening decisions based on the results of the previous steps and re-analyze the repaired/strengthened structure; 9) make the final decision on the most suitable and effective restoration scenario; 10) present an explanatory report including all the information from the previous steps in full detail. Based on this methodology, a ranking method is offered, which supports authorities and engineers in identifying, among a plethora of structures, the ones presenting the highest levels of vulnerability [92]; moreover, it helps to determine the optimal repairing scenario among many competing scenarios.
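Step 7 of the methodology rests on fragility analysis. A common formulation, used here only as an illustration and not necessarily the one adopted in [91,92], is the lognormal fragility curve P(DS | IM) = Phi(ln(IM/theta)/beta); the sketch below evaluates it with assumed median and dispersion values.

```python
import numpy as np
from scipy.stats import norm

# Lognormal fragility curve: probability of reaching a damage state given
# an intensity measure IM. theta and beta below are illustrative values,
# not parameters taken from [91,92].
theta = 0.35   # median capacity, e.g. peak ground acceleration in g
beta = 0.5     # lognormal dispersion

def fragility(im):
    """P(damage state | IM = im) under the lognormal fragility model."""
    return norm.cdf(np.log(np.asarray(im) / theta) / beta)

for im in [0.1, 0.2, 0.35, 0.5, 0.8]:
    print(f"PGA = {im:.2f} g -> P(damage) = {fragility(im):.2f}")
```

Comparing such curves before and after a candidate intervention is one way the framework can rank competing repairing/strengthening scenarios.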
COMPARISON OF DIFFERENT STRENGTHENING/RETROFITTING METHODS
In the assessment of strengthening/retrofitting of masonry structures, effectiveness and cost are the factors of greatest concern: the developed or selected method should reinforce the initial masonry structure effectively and in a cost-effective way. Therefore, the different strengthening/retrofitting methods are compared in terms of effectiveness and cost in Tables 2 and 3. In order to arrive at an effective strengthening/retrofitting approach, engineers and householders should follow the procedure presented after the tables:
Table 2. Summary of the effectiveness of the strengthening/retrofitting approaches.

| Concept | Method | Effectiveness |
|---|---|---|
| To reduce the force | Base isolation | Earthquake-induced forces are reduced by a factor of 5 to 6. |
| To reduce the force | Seismic damper | Significantly reduces seismically induced vibrations and improves the overall behaviour of the structure by increasing its internal damping through the energy dissipated by the dampers. |
| To upgrade the existing building | Surface treatment | Shotcrete increases the lateral strength by about 3.6 times and improves the out-of-plane stability; ferrocement increases the lateral resistance by about 150%. |
| To upgrade the existing building | Mortar joint treatment | Both grout injection and re-pointing can only restore the original stiffness and strength. |
| To upgrade the existing building | External steel reinforcement | The in-plane lateral resistance is improved by a factor of 4.5. |
| To upgrade the existing building | Post-tension | Increases the lateral stiffness and strength by up to a factor of 2 and also increases the out-of-plane strength; the center core technique can double the load-carrying capacity. |
| To upgrade the existing building | Mesh reinforcement | FRP increases the lateral resistance by a factor of 1.1 to 3 and can also improve the out-of-plane stability. |
| To upgrade the existing building | Reticulatus system | The shear strength can be increased by 15 to 170%, depending on the masonry to be retrofitted. |
| To improve the integrity | Constructional columns | The lateral resistance can be increased by about 1.5 times, and the deformability and energy absorption by about 50%. |
| To improve the integrity | Ring beams | Improves the integrity of the structure and preserves much of the strength after an earthquake. |
| To improve the integrity | Tie bars | Works quite similarly to the steel bars in the post-tension method; the seismic capacity of the masonry structure can be significantly improved. |
| To improve the integrity | Fibre/textile-reinforced mortar | The tensile strength of the mortar is improved, preventing out-of-plane expulsion. |
Table 3. Summary of the cost of the strengthening/retrofitting approaches.

| Concept | Method | Cost |
|---|---|---|
| To upgrade the existing building | Mortar joint treatment | It is cheap to use this technique, as the mortar and masonry units are easy and cheap to obtain. |
| To upgrade the existing building | Post-tension | This technique is somewhat costly. |
| To upgrade the existing building | Mesh reinforcement | It is expensive, as FRP is 5 to 10 times more expensive than steel. |
| To upgrade the existing building | Reticulatus system | It is cost-effective, as it requires only a small amount of reinforcement mesh and some re-pointing mortar. |
| To improve the integrity | Constructional columns | It is expensive to apply this technique, as it requires demolition and reconstruction of the original structure. |
| To improve the integrity | Ring beams | It is expensive to construct ring beams on a URM structure. |
| To improve the integrity | Tie bars | The cost is close to that of post-tensioning. |
| To improve the integrity | Fibre/textile-reinforced mortar | The cost is relatively low, as only mortar and fibres are needed. |
1. Understand the behaviour of the building. Masonry structures made of different masonry materials perform totally differently: a brick masonry structure inherently performs better than an adobe masonry structure, and a strengthening/retrofitting technique that works on adobe masonry may not be effective on a brick masonry structure.
2. Identify the broken or weak parts. On some occasions only a small part of the structure is broken, so there is no need to retrofit the whole structure; however, if the overall structure is weak, it is necessary to retrofit the whole structure.
3. Determine the external loads to be resisted. As presented in Section 2, masonry structures may be subjected to different types of external loading, and the strengthening technique differs according to the loading: the masonry structure should be strengthened with the purpose of resisting the specific event.
4. Select the retrofitting/strengthening approach. There are various strengthening/retrofitting methods for masonry structures; both effectiveness and cost should be considered in selecting the most appropriate and cost-effective method.
5. Apply the connectors carefully. The connector is another important factor determining the effectiveness of the selected approach; for example, detachment of the strengthening material from the masonry wall surface when using FRP, shotcrete or ferrocement will significantly reduce the strengthening effectiveness. The connection details should therefore be executed by skilled workers following the instruction guide.
6. Re-evaluate the strengthened/retrofitted structure. The strengthened/retrofitted approach should be assessed by engineers to determine whether it can resist the design external loading; in this assessment, small specimens should be tested experimentally to determine the mechanical performance.
As Tables 2 and 3 show, the improvement achieved by the different methods varies; each approach has its own advantages and disadvantages, and there is no single best approach. A strengthening approach that works for one structure will not necessarily work efficiently for another; therefore, a suitable method should be selected based on the material type and structural system. Based on the presented review, suitable applications of each strengthening/retrofitting approach are suggested in Table 4. It should be noted that these applications are only suggestions, not fixed rules: real structures are complex and may present different issues, so the final decision should be made jointly by the engineer and the asset manager.
Table 4. Suggested applications of the strengthening/retrofitting approaches.

| Concept | Method | Application |
|---|---|---|
| To reduce the forces | Base isolation | Suitable for newly built buildings, historical heritage and low-rise newly built structures, as well as structures located on a hard foundation. |
| To reduce the forces | Seismic damper | Desirable for framed structures located in seismically prone areas, but not common for masonry buildings. |
| To upgrade the existing building | Surface treatment | Suitable for structures whose function and appearance are not a major concern while reinforcement is, such as residential buildings. |
| To upgrade the existing building | Mortar joint treatment | Suits exterior masonry wallets where a collar-jointed wall system is applied to improve thermal insulation and prevent water penetration; also suits masonry heritage, as it preserves authenticity. |
| To upgrade the existing building | External steel reinforcement | Suits low-rise, weak buildings lacking ductility; not applicable to masonry heritage. |
| To upgrade the existing building | Post-tension | Desirable for masonry structures built with perforated masonry units, as there is no need to drill the masonry walls. |
| To upgrade the existing building | Mesh reinforcement | Desirable in developed countries with skilled technicians; also ideal for structures that must resist blast, and preferable for strengthening vaults and arches. |
| To upgrade the existing building | Reticulatus system | Suitable for fair-faced masonry and masonry heritage of both regular and irregular shape. |
| To improve the integrity | Constructional columns | Suitable for newly built structures. |
| To improve the integrity | Ring beams | Suitable for newly built structures; FRP can be used to retrofit/strengthen an already damaged ring beam. |
| To improve the integrity | Tie bars | Fits similar situations to post-tensioning, provided the tie bars can be fixed along the masonry wall both vertically and horizontally. |
| To improve the integrity | Fibre/textile-reinforced mortar | Fits the same cases as grout injection or re-pointing, where the mortar joint is weak or can easily be replaced. |
CONCLUSION
The existing strengthening/retrofitting techniques for URM have been reviewed and discussed in this paper, and the comparison demonstrates that the efficiency of the different strengthening approaches differs. Each method possesses its own merits and shortcomings, and no single best strengthening/retrofitting approach can be identified. The improvement achieved by each reinforcing method depends on the material of the original building as well as on the strengthening material; an approach found to be effective and economical for a certain type of structure cannot simply be extended and applied to other buildings. Therefore, the selection of a reinforcing approach should be based on the factors of greatest concern: for example, if effectiveness matters most, FRP is an appropriate approach, whereas if cost is the biggest issue, ferrocement might be more suitable. After the strengthening/retrofitting work has been done, the retrofitted/strengthened masonry structure should be re-evaluated using the reviewed numerical simulation approaches, with the most appropriate numerical model selected according to the retrofitting materials and the masonry construction system. Whichever simulation approach is chosen, the interface element between brick and mortar, or between the brick surface and the retrofitting surface, is extremely important: the constitutive law and its parameters should be determined and calibrated carefully to obtain accurate results. Research on the mechanical behaviour, especially the non-linear hysteretic behaviour, and on the calibration of the interface element parameters has not yet been conducted thoroughly; deeper studies in these fields should be carried out in future research, applying both experiments and numerical approaches to obtain the constitutive law and failure mechanism between the masonry panel and the retrofitting material.
In addition, a procedure is provided to help and guide engineers in selecting a strengthening or retrofitting method for a masonry structure/element, and suggestions on the application of each strengthening/retrofitting technique have also been proposed. It should be noted that real structures are complex; the selection should therefore be decided jointly by the owner and the engineer. All in all, a good reinforcing technique must consider aesthetics, function, strength, ductility, stiffness and cost requirements [83].
CHD7 promotes glioblastoma cell motility and invasiveness through transcriptional modulation of an invasion signature
Chromatin remodeler proteins exert an important function in promoting dynamic modifications in the chromatin architecture, performing a central role in regulating gene transcription. Deregulation of these molecular machines may lead to striking perturbations in normal cell function. The CHD7 gene is a member of the chromodomain helicase DNA-binding family and, when mutated, has been shown to be the cause of the CHARGE syndrome, a severe developmental human disorder. Moreover, CHD7 has been described to be essential for neural stem cells and it is also highly expressed or mutated in a number of human cancers. However, its potential role in glioblastoma has not yet been tested. Here, we show that CHD7 is up-regulated in human glioma tissues and we demonstrate that CHD7 knockout (KO) in LN-229 glioblastoma cells suppresses anchorage-independent growth and spheroid invasion in vitro. Additionally, CHD7 KO impairs tumor growth and increases overall survival in an orthotopic mouse xenograft model. Conversely, ectopic overexpression of CHD7 in LN-428 and A172 glioblastoma cell lines increases cell motility and invasiveness in vitro and promotes LN-428 tumor growth in vivo. Finally, RNA-seq analysis revealed that CHD7 modulates a specific transcriptional signature of invasion-related target genes. Further studies should explore clinical-translational implications for glioblastoma treatment.
CHD7 expression is up-regulated in gliomas.
To investigate a potential role for CHD7 in human glioblastoma, we first examined CHD7 mRNA levels across all glioma grades 23 using the Cancer Genome Atlas Project (TCGA) database. Public microarray database analyses revealed that CHD7 is up-regulated in tumor samples, when compared to normal brain tissue (NBT) (Fig. 1A), even though no significant alteration in genetic copy number was detected (see supplementary Fig. S1). Moreover, we found that CHD7 exhibited different expression patterns when comparing the four transcriptionally defined glioblastoma subtypes 24 with higher levels in the proneural tumor samples (Fig. 1B).
Consistent with the TCGA interrogation, we confirmed increased CHD7 mRNA levels in glioma tissues by qRT-PCR (Fig. 1C). Next, we examined the presence of CHD7-expressing cells by immunohistochemistry in glioblastoma patient samples. We show that cells displaying high levels of CHD7 protein are found within the tumor mass in the three different samples analyzed (Fig. 1D and supplementary Fig. S1). Altogether, these results show that CHD7 is up-regulated in at least a subset of gliomas, irrespective of the grade.

[Figure 1 legend: Values are presented as log2-transformed gene expression normalized by median-centered log2 ratios. (C) Relative CHD7 mRNA levels of macro-dissected brain tissue samples from normal brain tissue (NBT) and from resected glioma specimens, assessed by qRT-PCR. Values are presented as linear values on a logarithmic (log10) scale. HPRT1 levels were used as internal control for normalization. Bars represent the mean value. *p < 0.05, **p < 0.01, ***p < 0.001; non-parametric analysis of variance (Kruskal-Wallis test) followed by Dunn's post hoc test. (D) Representative CHD7 immunohistochemistry in NBT and in the ZH276 glioblastoma patient sample. Isotype IgG was used as negative control. Scale bar = 20 μm.]
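Relative qRT-PCR expression values of the kind reported above, normalized to an internal control gene such as HPRT1, are conventionally derived with the 2^-ddCt method. The sketch below illustrates that calculation; the Ct values are hypothetical, and the paper does not state that this exact procedure was used, so treat it as an assumption.

```python
import numpy as np

# Hypothetical Ct values for illustration only (not the study's raw data)
ct = {
    "NBT":    {"CHD7": 28.5, "HPRT1": 24.0},
    "glioma": {"CHD7": 25.1, "HPRT1": 24.2},
}

def relative_expression(sample, reference="NBT", gene="CHD7", control="HPRT1"):
    """Fold change of `gene` in `sample` vs `reference` by the standard 2^-ddCt method."""
    d_ct_sample = ct[sample][gene] - ct[sample][control]   # normalize to control gene
    d_ct_ref = ct[reference][gene] - ct[reference][control]
    return 2.0 ** -(d_ct_sample - d_ct_ref)

fold = relative_expression("glioma")
print(f"CHD7 fold change (glioma vs NBT): {fold:.1f}")
print(f"log10 fold change: {np.log10(fold):.2f}")   # as plotted on a log10 scale
```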
CHD7 expression is highly heterogeneous in human glioblastoma-derived cell lines in vitro.
To further characterize CHD7 expression in glioblastoma, we used the CD133 cell surface marker to enrich for the glioblastoma-initiating cell (GIC) population 25 from freshly dissected tumors. As measured by qRT-PCR, CHD7 mRNA levels were higher in CD133neg sub-populations (Fig. 2A).
We next analyzed CHD7 mRNA and protein levels in a panel of eight human long-term glioblastoma cell lines (LTCs) and five GIC lines. We found that CHD7 is expressed in the vast majority of human glioblastoma-derived cell lines in vitro (Fig. 2B). In order to assess the subcellular localization of the CHD7 protein, we optimized immunoblotting with fractionated cytoplasmic and nuclear cell extracts, confirming CHD7 protein localization in the nucleus (Fig. 2C). Since CHD7 is mainly concentrated in the nucleus, we hereafter adopted blotting of nuclear cell extracts instead of whole-cell lysates in order to enhance CHD7 protein detection. Among the LTCs, the highest CHD7 protein levels were found in LN-229 cells, whereas T-269 cells displayed the highest CHD7 protein levels among the GICs (Fig. 2D). Taken together, these results suggest that CHD7 is up-regulated in gliomas and that its expression is highly heterogeneous in cultured glioblastoma cell lines.
CHD7 deletion attenuates anchorage-independent growth and spheroid invasion in LN-229 glioblastoma cell clones in vitro.

Although CHD7 expression was not significantly correlated to patient prognosis (see supplementary Fig. S2), the great heterogeneity within glioblastoma tumors prompted us to further examine the functional impact of CHD7 in glioblastoma cells endogenously expressing contrasting levels of this protein. Initially, we set out to investigate the effect of CHD7 loss-of-function in glioblastoma lines naturally expressing high levels of CHD7. For that purpose, we used the CRISPR/Cas9 genome editing technique to abrogate its expression in LN-229 cells.

[Figure 2 legend: (A) Relative CHD7 mRNA levels of freshly dissociated CD133pos and CD133neg tumor cells were assessed by qRT-PCR. Cell fractions represent matched sub-populations from the same patient. Results are expressed as average ± SEM for technical replicates. ***p < 0.001; 2-way ANOVA followed by Bonferroni post-test. (B) Relative CHD7 mRNA levels in different LTCs and GICs, determined by qRT-PCR. The results represent average ± SEM from two independent experiments. ##p < 0.01, ***p < 0.001; one-way ANOVA followed by Bonferroni correction for multiple tests compared with LN-229 and all the other cell lines, respectively. (C) CHD7 immunoblotting of fractionated nuclear extracts (NE) and cytoplasmic extracts (CE) of LN-229 and LN-319 cell lines. PARP1 and HSP90 were used as nuclear and cytoplasmic markers, respectively, examined consecutively in the same blot as CHD7, after membrane stripping. (D) CHD7 immunoblotting of nuclear extracts of human glioblastoma-derived cell lines. The left panel shows the result for LTCs; the right panel, from a separate gel, shows the blot for the GICs. Actin was used as loading control for each gel. Due to the great difference in protein sizes, the loading control was examined in a separate gel loaded under the same conditions. Total protein extract of 293T cells transfected with the empty vector or the CHD7 overexpression plasmid was used as negative and positive control, respectively.]
Cells were transiently transfected with a combination of two different sgRNA/Cas9 constructs, targeting the initial and final regions of the CHD7 gene, aiming to delete most coding exons of the CHD7 gene or to generate frameshift mutations. After confirming, by PCR, the deletion of genomic DNA in the cell population, we undertook single-cell sorting in order to isolate cell clones (Fig. 3A,B). A total of 50 clones were expanded and PCR-genotyped, showing that only a few clones presented the fragment deletion, while amplification targeting the exon 3 sequence was also detected (Supplementary Fig. S3). Even though CHD7 deletion did not seem to occur in both alleles, CHD7 immunoblotting showed that several samples did not express the CHD7 protein corresponding to the canonical transcript. Two independent cell clones in which CHD7 protein was rendered undetectable (KO), along with two control clones in which CHD7 expression was not altered (WT), were isolated for further analyses (Fig. 3C). All of these cell clones were successfully expanded in culture without any obvious loss in viability, although clone KO-1 displayed a slightly decreased, albeit significant, difference in growth rate (Fig. 3D).

[Figure 3 legend: (A) Strategy used to generate CHD7 KO cell clones. The LN-229 cell line was co-transfected with two sgRNAs and selected with puromycin for 48 h. After confirmation of genomic editing by PCR in the selected mixed population, clonal isolation was performed. (i-ii) indicate clones carrying CHD7 mutations, which may lead to abrogation of CHD7 expression; (iii) indicates isolated clones which still exhibit CHD7 expression. (B) Scheme indicating the sgRNA sequences targeting the 5′ and 3′ regions of the CHD7 gene. (C) CHD7 immunoblotting of nuclear extracts from two WT and two KO isolated clones. PARP1 was used as the loading control, examined in the same membrane as CHD7. (D) Growth curves of LN-229 clones. 1 × 10^4 cells were plated in 12-well plates in triplicate for each time point. Experiments were performed three times. Results are expressed as average ± SEM for three wells of a single experiment. *p < 0.05; one-way ANOVA followed by Dunnett's test in comparison with WT-1. (E) 1 × 10^4 LN-229 cells suspended in soft agar were layered onto the bottom agar in 24-well plates in triplicate. Representative images of cell colonies grown in culture medium for two weeks. Scale bar: 200 μm. The graph represents the total number of colonies per well greater than 50 μm in diameter. Results are expressed as average ± SEM from three independent experiments. *p < 0.05, **p < 0.01; one-way ANOVA followed by Dunnett's test in comparison with WT-1. (F) Spheroids of WT and KO clones were placed in a 3D collagen I matrix and the area covered by invading cells was measured for quantification after 24, 48 and 72 h. Representative images show multicellular spheroids at the 0 h and 24 h time points. Scale bar: 400 µm. (G) Experiments were performed three times in quadruplicate for each cell clone. Results from a single representative experiment are presented, expressed as average ± SEM. *p < 0.05, **p < 0.01, ***p < 0.001; 2-way ANOVA followed by Bonferroni test in comparison with WT-1.]
Next, we analyzed anchorage-independent growth in a soft-agar colony formation assay. The total number of colonies greater than 50 µm in diameter was consistently decreased in the KO cell clones when compared to the WT clones (Fig. 3E). Since anchorage-independent cell growth is associated with neoplastic transformation and metastatic potential, we asked whether the invasion capacity would be affected in the LN-229 KO clones. To that end, we used a 3D collagen invasion assay to assess invasion in LN-229 multicellular spheroids after 24, 48 and 72 h of culturing in a serum-containing collagen matrix (Fig. 3F). The area covered by the invading cells was reduced by about two-fold in the KO clones after 24 h, when compared to the CHD7-expressing clones. The invaded area remained significantly reduced over time, indicating that the invasive potential of the cells was impaired upon CHD7 deletion. Therefore, we demonstrated that CHD7 is not essential for LN-229 cell survival in vitro; however, its deletion affects their anchorage-independent growth and invasiveness.
Ectopic CHD7 overexpression elicits LN-428 cell migration and invasion in vitro.
To determine whether the reduced cell invasion capacity observed in LN-229 KO cell clones might originate from a direct effect of CHD7 deletion, we selected the LN-428 cell line, which expresses low levels of endogenous CHD7 protein, to generate a cell population that constitutively overexpresses CHD7 (OE). To this end, the full-length CHD7 cDNA was cloned into the pCXN2-DEST expression vector, which harbors a very strong promoter for mammalian cells (see supplementary Fig. S4). LN-428 cells were transfected with the empty vector (EV) or the CHD7-expressing construct, and the G418-resistant polyclonal cell populations were selected and expanded for further analysis. Quantification by qRT-PCR and immunoblotting confirmed CHD7 overexpression in the LN-428 cell line (Fig. 4A).
We first performed a scratch wound-healing assay and observed that OE cells possessed approximately 30% increased migration potential compared to EV cells (Fig. 4B). We also found that CHD7 ectopic expression increased, by almost two-fold, the transwell migration capacity of LN-428 cells (Fig. 4C). We further investigated the role of CHD7 in modulating tumor cell invasion using the Matrigel invasion assay. CHD7 significantly enhanced, by about six-fold, the invasion capacity of LN-428 cells across the transwell chamber, when compared to cells transfected with the EV (Fig. 4D).
Immunofluorescence staining of actin filaments was carried out to evaluate whether cytoskeletal alterations could be associated with the altered cell motility capacity. We observed a more than 40% increase in the number of cells displaying stress fibers in OE cells when compared to cells that do not express high levels of the protein (Fig. 4E,F and supplementary Fig. S5).
Similarly, A172 OE cells displayed significant changes in cell motility and invasiveness, although to a lesser extent (see supplementary Fig. S6). Together, these data strongly indicate that CHD7 plays an important role in glioblastoma cell migration and invasion.
CHD7 modulates tumor growth in orthotopic xenograft mouse glioma models.
To investigate whether CHD7 is relevant for tumor development and progression, we analyzed whether perturbation of CHD7 protein levels affects the tumorigenic potential of glioblastoma cells in an orthotopic xenograft murine glioma model. Athymic nude mice were used for stereotactic implantation of cells derived from one LN-229 WT clone and two KO clones (n = 8). To measure tumor volume, three pre-randomized mice of each group were sacrificed on the day when the first animal developed neurological symptoms. Analysis of brain sections showed that none of the animals injected with KO cell clones had developed large tumors at the time point analyzed, suggesting a delay in tumor growth progression (Fig. 5A,B). Likewise, mice inoculated with the KO-1 clone showed prolonged survival compared to the WT clone; animals injected with clone KO-2 cells showed a similar trend, although the differences were not statistically significant when compared to the WT clone (Fig. 5C).
In a similar setting, animals were inoculated with LN-428 OE and EV cells and the tumor volume was measured (n = 3). The tumor size was significantly increased in the OE group when compared to EV (Fig. 5D). Immunohistochemistry of brain sections showed that cells displaying high protein levels were located at the border of the tumor (Fig. 5E), suggesting a correlation between the levels of ectopic CHD7 overexpression and the migration and invasion phenotypes of LN-428 cells in vivo.
Altogether, our results demonstrate that functional deletion of CHD7 in human glioblastoma cells that express high levels of CHD7 may lead to decreased tumor progression, whereas ectopic overexpression of this protein in human glioblastoma cells that express low levels of CHD7 enhances tumor growth and increases cell invasiveness in vivo.
[Figure 4 legend, beginning truncated] … does not display high CHD7 protein levels, in comparison with OE cells. Images were captured using a confocal microscope. CHD7 (red), actin filaments (green) and nuclei (blue). Scale bar: 20 µm. (F) The graph shows the percentage of LN-428 cells with low and high CHD7 protein levels which display evident stress fibers. EV and OE cells were plated at three different passages, and four independent fields at 20× magnification were counted for each well. *** p < 0.001; Student's t-test.
Modulation in CHD7 levels altered the expression of adhesion molecules. Due to the apparent association of CHD7 with glioblastoma pathogenesis, as well as the previously described function of CHD7 in gene transcription 10,11,13, we set out to perform gene expression profiling aimed at gathering molecular insights into the role of CHD7 in human glioblastoma. For that purpose, we carried out whole-transcriptome RNA-seq analysis of the same engineered cell lines in which CHD7 expression had been perturbed.
Notably, 58 transcripts were commonly regulated in both cell lines, whereas 18 presented alterations in opposite directions in their expression levels between these two groups (Table 1). Even though CHD7 seems to regulate distinct genes in LN-229 and LN-428 cells, the altered genes were highly associated with pathways such as "biological adhesion", "cell adhesion" and "locomotion" in gene ontology (GO) analysis (Fig. 6A,B and supplementary Fig. S7).
The heat-maps indicate the top 30 DEGs in both groups (Fig. 6C,D). To independently validate these results, changes in the expression of 40 genes associated with tumorigenesis, cell motility or invasiveness were analyzed by qRT-PCR (Fig. 6E,F).
Next, we sought to compare the differentially expressed genes found in our model with previously described CHD7 localization determined by chromatin immunoprecipitation in NSCs 13. These authors revealed approximately 16,000 binding sites near or within gene sequences. We found that 28% (85 genes) of the DEGs in KO and 30% (259 genes) of the genes modulated in OE coincide with CHD7 binding sites previously mapped in mouse NSCs (see supplementary Table S6). These data suggest that CHD7 might have binding sites and/or transcriptional targets that are conserved between mouse NSCs and human glioblastoma cells.
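At its core, this comparison is a set intersection followed by a percentage; the short Python sketch below (our illustration, not from the paper) shows the computation with hypothetical gene symbols standing in for the actual DEG and binding-site lists of Supplementary Table S6.

```python
# Hedged sketch: fraction of DEGs that coincide with CHD7-bound genes.
# The gene sets below are hypothetical placeholders, not the paper's data.
ko_degs = {"NRCAM", "CNTN1", "EMILIN2", "CTGF", "THBS1"}
chd7_bound_nsc = {"NRCAM", "CNTN1", "EMILIN2", "SOX2"}

overlap = ko_degs & chd7_bound_nsc          # genes in both lists
pct = 100.0 * len(overlap) / len(ko_degs)   # percentage of DEGs overlapping
print(f"{len(overlap)} of {len(ko_degs)} DEGs overlap ({pct:.0f}%)")
```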
Discussion
CHD7 is known to be essential for organogenesis 26 and, in the non-pathological brain, it was shown to be crucial for NSC function 13-15,27. On the other hand, the involvement of CHD7 in tumorigenesis has just begun to be described, gaining considerable attention in the last few years.
In silico analysis of 32 tumor types revealed that CHD7 is the most commonly gained/amplified and mutated gene among the CHD members. The same study also showed that overexpression of CHD7 was more prevalent in aggressive subtypes of breast cancer, being significantly correlated with high tumor grade and poor prognosis 16. Consistent with these findings, frequent CHD7 mutations have been reported in stomach and colon cancers 19, and aberrations in CHD7 activity were suggested to contribute to the colorectal carcinoma CpG island methylator phenotype 1 18. Moreover, low CHD7 expression in the G4 medulloblastoma subtype, in combination with BMI1 overexpression, has recently been shown to contribute to tumor formation 17. Mechanistically, CHD7 inactivation favors chromatin accessibility at BMI1 target genes, which, in turn, leads to ERK over-activity and increased cell proliferation. Thus, these data strongly suggest the association of CHD7 with the pathology of different human cancers.
In the present study, we provide evidence that CHD7 plays an important role in the pathogenesis of glioblastoma, the most common and deadliest malignant primary brain tumor in adults 28. Despite multimodality treatment, which typically includes surgery, ionizing radiation, and cytotoxic chemotherapy, the average overall survival remains at only ∼15 months, highlighting the urgent need for more effective targeted therapeutics 29. Our results indicate that CHD7 is highly expressed in glioma patient samples, as previously suggested 30. Additionally, using the CD133 cell surface marker, we observed that CHD7 is more highly expressed in the CD133-negative cell population in a subgroup of tumor samples. We also demonstrated that CHD7 knockout not only inhibited anchorage-independent growth in LN-229 cells, but also reduced cell invasion ability. Interestingly, CD133 immunostaining in glioma patient samples showed that CD133 expression was significantly reduced in migrating tumor cells in the tumor periphery compared to tumor cells in the core region 31. However, the same study discusses that CD133 was found at lower levels compared to other stem cell markers, such as Nestin, Musashi-1 and SOX-2.
In vitro, CHD7 expression was found to be highly heterogeneous in the panel of human glioblastoma-derived cell lines analyzed, consistent with what we observed in our set of clinical samples as well as in the TCGA public database. Importantly, the cell line presenting the highest CHD7 protein level was the T-269 GIC. Ohta and colleagues have also shown that CHD7 is highly expressed in different established GICs when compared to normal human astrocytes 30; however, what specifically determines the great variability in CHD7 expression among these cell lines remains elusive and warrants future investigation.
Studies on CHD7 domain-specific functions and overexpression phenotypes are still scarce in the literature, possibly due to the length of the coding region (2,997 aa) and technical difficulties in generating expression vectors 32. A recent study showed that introduction of an mRNA encoding CHD7 isoform 2 (948 aa) into KhES-1 human embryonic stem cells induced spontaneous cell differentiation in vitro, and the CHD7-overexpressing cell culture could not be maintained 33. Here, we amplified the CHD7 coding sequence as three overlapping fragments; the full-length sequence was then assembled and inserted into the expression vector by Gateway®-assisted sub-cloning.
We demonstrated that ectopic CHD7 expression enhanced migration and invasion capacity of LN-428 cells, potentially by regulating stress fiber assembly and adhesion dynamics. Actin stress fiber formation is one of the critical steps associated with cell invasion 34 , but the precise mechanism by which CHD7 promotes reorganization of the cell cytoskeleton remains to be elucidated. We were unable to explore the role of CHD7 in GICs using the same approach, due to difficulties in selecting and expanding modified cells for experiments.
Interestingly, CHD7 duplications have been suggested to be driver mutations in small-cell lung cancer, one of the most highly metastatic and aggressive types of cancer 21. Additionally, in iPS-derived neural crest cells from CHARGE patients, CHD7 mutations were found to promote defective cell migration 35. This study showed modulation of several genes related to cell adhesion and migration, such as CTGF, COL3A1, SERPINE1 and THBS1, all of which we found to be modulated in glioblastoma cells. CHD7 has also been implicated in the regulation of neural crest cell migration during embryogenesis in Xenopus 7. In this model, and also in human neural crest cells, CHD7 was shown to associate with PBAF to modulate SOX9 and TWIST1 gene expression, which are essential for proper cell migration of this cell type. We did not observe significant changes in those genes in our model, indicating that CHD7 may regulate cell motility by different mechanisms, possibly by association with cell-specific interaction partners.
In fact, it has been previously shown that CHD7 binding sites present high variability among different cell types and that CHD7 binding itself is context-dependent (e.g., embryonic cells in the differentiated state showed only 30% overlap in the binding sites) 10. In our study, 18 genes were regulated in opposite directions in the LN-229 and LN-428 cell lines. Moreover, the comparison with previously described CHD7 binding sites revealed that genes modulated by CHD7 perturbation in glioblastoma cells, such as NRCAM, CNTN1 and EMILIN2, have significantly higher enrichment of CHD7 occupancy in mouse NSCs 13, suggesting direct transcriptional modulation of these targets. Importantly, several of these genes have been implicated in glioblastoma invasiveness 36.
One could argue that each cell type in a given tissue might have unique CHD7 binding sites and protein complexes, which vary over time 32 . Our findings demonstrate the diversity of CHD7-regulated genes and suggest a broader function for CHD7 as a master regulator of cell migration and invasion. It will be of great interest to investigate, for example, the underlying mechanism regulating differential CHD7 expression in glioblastoma cells and whether these pathways are amenable to manipulation by molecular intervention aiming at clinical therapeutic trials.
Conclusions
The invasive behavior of malignant gliomas is one of the most important characteristics contributing to tumor recurrence after surgery 37. Our data provide functional and molecular evidence for a novel oncogenic role of CHD7 as a transcriptional regulator of pro-invasive and motility factors in glioblastoma cells (Fig. 6G). Further studies may reveal important clinical-translational implications for glioblastoma treatment.
Methods
Gene expression and survival analysis using The Cancer Genome Atlas (TCGA) dataset. Overall expression analysis within the TCGA database (http://cancergenome.nih.gov) was undertaken using the single-gene expression analysis module of the R2: microarray analysis and visualization platform (http://www.r2.amc.nl). CHD7 expression clusters were generated across 276 samples (MAS5.0-u133p2 dataset) and analyzed with the k-means algorithm and log2 transformation of gene expression. Analysis of CHD7 expression relative to the glioblastoma subtype 24 was carried out using 435 samples classified into these groups within the subtype track mode and z-score transformation. Kaplan-Meier survival analysis and log-rank tests were carried out as detailed in Supplementary Information.

Patient samples. Brain tissue samples from temporal lobectomy epileptic patients and from resected astrocytoma specimens were macro-dissected and immediately snap-frozen in liquid nitrogen 38. The specimens were categorized according to the 2007 WHO classification 23. This project has the approval of the Ethical Committee of the University of São Paulo School of Medicine (CAPPesq, 691/05), and informed consent was obtained from all patients. The CD133 pos and CD133 neg cells were isolated from freshly resected human glioblastoma tumor tissue (ZH-419, ZH-445, ZH-456, ZH-464, ZH-496 and ZH-525) after written informed consent of the patients and approval by the Institutional Review Board of the University Hospital Zurich. The detailed protocol is described in Supplementary Information.

In vitro assays. Details on cell lines, reagents, real-time quantitative reverse transcription-PCR (qRT-PCR) and primers are provided in Supplementary Information. Detailed protocols for immunofluorescence, anchorage-independent clonal growth, and migration and invasion assays are summarized in Supplementary Information.

CRISPR/Cas9 knockout of CHD7. CHD7 knockout clones were generated according to the protocol described by Ran and colleagues 39. Briefly, small guide RNAs (sgRNAs) were designed using the CRISPR Design Tool (http://tools.genome-engineering.org) and then cloned (guide sequences in Supplementary Material and Methods Table S2) into pSpCas9(BB)-2A-Puro (PX459), a gift from Feng Zhang (Addgene plasmid #48139, Addgene, Massachusetts, USA). Lipofectamine 2000 (Life Technologies) was used for transient co-transfection of the two sgRNA constructs at a 1:1 ratio. Cells were selected with 3 μg/mL puromycin (Life Technologies) for 48 h, and genomic DNA was extracted using the QIAamp DNA Kit (Qiagen, Venlo, Netherlands) for detection of the CHD7 deletion by PCR (primer sequences available in Supplementary Information Table S3). Transfected cells were then isolated by BD FACSAria II (BD Bioscience) single-cell sorting in 96-well plates. Cell clones were expanded for genomic DNA extraction and genotyping (Supplementary Fig. S3). Selected clones were further expanded for nuclear protein extraction and tested by immunoblotting.

CHD7 overexpression. A 9 kb cDNA, comprising the full-length ORF of the human CHD7 gene (GenBank Accession #NM_017780.3), was amplified by long RT-PCR from the OVCAR8 human ovarian cancer cell line as three overlapping ~3 kb fragments. Briefly, total RNA was purified from OVCAR8 cells (RNeasy RNA Purification Kit, Qiagen) and 1 μg RNA was used as the template for reverse transcription with SuperScript III® (Life Technologies).
PCRs were carried out using Phusion® High Fidelity DNA Polymerase (New England Biolabs, Ipswich, MA), and the resulting PCR products were cloned using the Zero Blunt TOPO® cloning kit (Thermo Fisher). Clones displaying the correct sequence, as judged by Sanger sequencing, were assembled by Gibson® cloning into the full-length ORF using the unique AflII and MfeI restriction sites of the CHD7 cDNA. A Kozak consensus sequence was added juxtaposed to the initial ATG codon for optimal expression levels in mammalian cells. The final full-length 9 kb CHD7 cDNA was cloned into the pCXN2 expression vector 40 using Gateway®-assisted sub-cloning.
To generate OE cell populations, cells were transfected with the pCXN2_CHD7 construct or with the empty vector using Lipofectamine 2000 (Life Technologies). LN-428 and A172 transfected cells were selected with 750 µg/mL and 200 µg/mL of Geneticin G418 Sulfate (Gibco, Thermo Scientific), respectively.

Histology and immunohistochemistry. Three glioblastoma patient samples (ZH149, ZH265, ZH276) and one normal brain tissue sample were used to investigate CHD7 protein levels. Samples were de-paraffinized and rehydrated; tumor tissue sections were boiled in EDTA buffer, pre-treated with 1% H2O2 and blocked in blocking solution (Candor Biosciences, Germany). Sections were incubated with the primary anti-CHD7 antibody (ab31824, 1/200) (Abcam, Cambridge, UK) at 4 °C overnight. Simultaneously and under the same conditions, a matching rabbit IgG isotype control (ab27478, 1/200) was used in place of the CHD7 primary antibody for accurate interpretation of the immunostaining results. After washing, samples were incubated with goat anti-rabbit IgG-AP (sc2007, 1/200) (Santa Cruz, Texas, USA) at room temperature for 30 min (protected from light). The DAB+ (#K3468, Dako) chromogen substrate was used as the detection system, and the sections were counterstained with Mayer's hematoxylin to visualize the nuclei.
Immunoblotting. Total cellular extracts were obtained by lysing cells with RIPA buffer (150 mM NaCl, 1% NP-40, 0.5% SDS, 50 mM Tris pH 8.2, 1 mM EDTA). Cytoplasmic and nuclear protein lysates were prepared with the NE-PER Nuclear and Cytoplasmic Extraction kit (Thermo Scientific). Proteins (30 µg per lane) were resolved on a 3 to 7% Tris-acetate gel (Life Technologies) to detect CHD7 and PARP1. Actin was evaluated in a 10% SDS-PAGE. Gels were transferred to a nitrocellulose membrane (Life Technologies). After blocking with 0.5% non-fat milk in TBS containing 0.5% Tween 20 (TBST), the membrane was incubated in blocking solution with primary antibody overnight at 4 °C. After washing and incubation with the HRP-conjugated secondary antibody (1/5,000, Sigma Aldrich), the protein bands were detected with enhanced chemiluminescence (ECL, Thermo Scientific). Original blots are presented in Supplementary Information, Fig. S8.
RNA-seq experiment and data analysis. The next-generation sequencing (NGS) libraries were prepared according to the Illumina TruSeq Stranded mRNA LT protocol. Quality control of the amplified products, before and after fragmentation and labeling, was performed using the Agilent Bioanalyzer. Samples were sequenced on an Illumina NextSeq 550 (2 × 76 bp paired-end sequencing) operated by the Biomedical Institute Facility Center (CEFAP) of the University of São Paulo (USP). All calculations were carried out as described in Supplementary Information.

Animal studies. All experiments were carried out according to the Swiss Federal Law on the Protection of Animals, the Swiss Federal Ordinance on the Protection of Animals, and the guidelines of the Swiss Confederation (permission #ZH062/15). FoxN1 nu/nu mice (Charles River, Sulzfeld, Germany) aged between 6 and 12 weeks were anaesthetized and placed in a stereotaxic fixation device. A burr hole was drilled in the skull 2 mm lateral and 1 mm posterior to the bregma. The needle of a Hamilton syringe was introduced to a depth of 3 mm 41. LN-229 (7.5 × 10⁴) and LN-428 (1 × 10⁵) cells were resuspended in PBSA and then injected into the right striatum. Animals were clinically assessed three times per week and sacrificed upon developing neurological symptoms justifying euthanasia (score 2).
Statistics. Analysis of the relative mRNA levels between different glioma grades and glioblastoma samples was carried out by non-parametric analysis of variance (Kruskal-Wallis test) with Dunn's test for post-hoc comparison. In vitro experiments were performed in biological and technical replicates. Results are expressed as the mean and SEM of triplicate determinations. The statistical analyses were performed by unpaired Student's t-test or ANOVA for multiple-comparison tests. Animal survival statistics were assessed using the Gehan-Breslow-Wilcoxon test. All statistical analyses were carried out using Prism 5 (GraphPad Software, La Jolla, CA).
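For readers who prefer a scriptable workflow, the following Python sketch mirrors the tests named above; it assumes SciPy, scikit-posthocs and lifelines are installed, and every numeric value is a hypothetical placeholder (the published analyses were run in Prism 5).

```python
# Hedged sketch of the statistical pipeline described above (placeholder data).
from scipy import stats
import scikit_posthocs as sp                  # Dunn's post-hoc test
from lifelines.statistics import logrank_test

grade_ii  = [1.2, 0.9, 1.4, 1.1]              # relative mRNA levels (hypothetical)
grade_iii = [2.0, 1.7, 2.4, 2.2]
gbm       = [3.1, 2.6, 3.8, 3.3]

h, p = stats.kruskal(grade_ii, grade_iii, gbm)      # non-parametric ANOVA
dunn = sp.posthoc_dunn([grade_ii, grade_iii, gbm])  # pairwise post-hoc p-values
print(f"Kruskal-Wallis H={h:.2f}, p={p:.4f}")
print(dunn)

# Survival comparison; weightings='wilcoxon' yields the Gehan-Breslow-Wilcoxon test.
days_wt, days_ko = [28, 31, 35, 33], [41, 44, 39, 47]   # hypothetical survival (days)
res = logrank_test(days_wt, days_ko,
                   event_observed_A=[1, 1, 1, 1], event_observed_B=[1, 1, 1, 1],
                   weightings='wilcoxon')
print(f"Gehan-Breslow-Wilcoxon p={res.p_value:.4f}")
```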
Ethics approval and consent to participate. Clinical human samples were obtained from patients undergoing surgical resection. All procedures were performed in accordance with the guidelines and regulations determined by the Ethical Committee of the University of São Paulo School of Medicine (CAPPesq, 691/05), and informed consent was obtained from all patients. The CD133 pos and CD133 neg cells were isolated from freshly resected human glioblastoma tumor tissue after written informed consent of the patients and in accordance with the guidelines and regulations determined by the Institutional Review Board of the University Hospital Zurich. All animal experiments were performed in accordance with the Swiss Federal Law on the Protection of Animals, the Swiss Federal Ordinance on the Protection of Animals, and the guidelines of the Swiss Confederation (permission #ZH062/15). | 7,196 | 2019-03-08T00:00:00.000 | [ "Medicine", "Biology" ] |
Green's function for the lossy wave equation
Using an integral representation for the first kind Hankel function (the Hankel-Bessel integral representation), we obtain the so-called Basset formula, an integral representation for the second kind modified Bessel function. Using the Sonine-Bessel integral representation, we obtain the Fourier cosine integral transform of the zero order Bessel function. As an application we present the calculation of the Green's function associated with a second-order partial differential equation, particularly a wave equation for a lossy two-dimensional medium. This application is associated with the transient electromagnetic field radiated by a pulsed source in the presence of dispersive media, which is of great importance in the theory of geophysical prospecting, lightning studies and the development of pulsed antenna systems.
Introduction
In the study of classical special functions, e.g., Bessel functions and Legendre polynomials, two fundamental methods must be mentioned: the Rodrigues-type formula, in which the particular special function is presented in terms of derivatives, and integral representations, in which the particular special function is given by an integral in the complex plane. We mention in passing that all classical special functions can be presented by a Frobenius-type series.
Several authors prefer to work with the Rodrigues formula [1-3]. This method is convenient for studying several properties of special functions, for example, their recurrence relations. Other authors prefer to work with a suitable integral representation in the complex plane [4]. This method is best suited when one uses integral transforms (Laplace, Fourier, Hankel and Mellin integral transforms) to solve an ordinary or partial differential equation.
Here we follow the second method because we are interested in calculating the Green's function associated with the wave equation. Moreover, we use the Laplace and Hankel integral transforms (the joint transform method) to solve a non-homogeneous second-order partial differential equation.
As possible applications we may cite the radiation problem in a curved homogeneous dielectric slab waveguide, which was investigated by Chang-Barnes [5], and the transient electromagnetic field radiated by a pulsed source in the presence of dispersive media, which is of great importance in the theory of geophysical prospecting, lightning studies and the development of pulsed antenna systems. We remark that, for this last problem, Kuester [6] obtained an exact integral representation for the transient field of a pulsed line source above a plane reflecting surface, which can be expressed as a finite integral over the transient plane-wave solution for complex angles of incidence.
This paper is organized as follows: in section 2 we present some applications as a motivation; in section 3 we discuss an integral representation for the first kind Hankel function, in order to obtain an integral representation for the Bessel function, the so-called Basset formula. In section 4, using the Sonine-Bessel integral representation, we calculate an integral involving a Bessel function, which can be interpreted as a Fourier cosine transform of the zero order Bessel function. In section 5, using the Laplace and Hankel integral transforms, the so-called joint transform method, we discuss a non-homogeneous second-order partial differential equation with constant coefficients and, as an application, we obtain, in closed form, the Green's function associated with the wave equation for a lossy two-dimensional medium. Finally, we present our concluding remarks.
Some applications
In this paper we derive the Green's function of a nonhomogeneous second-order partial differential equation in three independent variables with constant coefficients, i.e.,

$$\left(\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}-a\,\frac{\partial^{2}}{\partial t^{2}}-b\,\frac{\partial}{\partial t}\right)u(x,y,t)=f(x,y),$$

where a and b are positive constants and f(x,y) is a continuous function of the form f(x,y) = f(√(x²+y²)), a radial function. The equation above is the wave equation.
Waves are important phenomena nowadays. Wave equations appear in many applications of mathematics in the physical sciences, like geophysical prospecting, lightning studies and the development of pulsed antenna systems. For example, we know that solutions of the homogeneous Maxwell equations satisfy the wave equation. Using this fact, Maxwell discovered that the solutions of his homogeneous system were electromagnetic waves that propagate in empty space (vacuum) [7].
As an application in geophysical prospecting we cite the methods of migration. Bleistein [8] says that migration is "the dominant method for reflector imaging from seismic data in geophysics today. The objective of this method might be viewed as moving the reflectors from their time location to their spatial location." So, from seismic data, which are in the time domain, we use this method to recover the reflector shape in the depth domain. One of these methods is wavefield migration, which uses the solution of the wave equation to recover the shape of the subsurface. For lightning studies, an application is the use of the wave equation to study the transient electromagnetic field of a pulsed line source.
So, as we can see, waves are important in several problems in many different areas. Sezginer and Chew [9] solved the problem of finding the Green's function associated with the equation above. We use another mathematical technique to solve this problem.
An integral representation
In Watson's book [4] it was shown that several contour integrals can be obtained as generalisations of Poisson's integral, and several integral representations for Bessel and Hankel (Bessel function of the third kind) functions were obtained with convenient modifications of Hankel's contour integrals. A particular such representation, Eq. (1), valid for |arg(z)| < π/2 and Re(1/2 − ν) > 0, expresses the first kind Hankel function as a contour integral in which K_μ(x), the second kind modified Bessel function, appears through Eq. (2). Considering in Eq. (2) x as a positive number and z as a complex number with |arg(z)| < π/2, we can write Eq. (1) as a contour integral, Eq. (3), with Re(ν + 1/2) > 0. Now, when Re(ν + 1/2) ≥ 0, the integral, taken on arcs of a circle from ρ to ρ e^{i(±π/2−θ)}, where θ = arg(z), tends to zero as ρ → ∞, by Jordan's lemma [10]. Hence, by Cauchy's theorem, the path of integration may be opened out until it becomes the line on which Re(zt) = 0. If we write zt = iu, the phase of −(u²/z²) − 1 is −π at the origin in the u-plane [4]. Using Eq. (2) and the parity of the integrand we can write Eq. (3) in the following form:

$$K_\nu(xz)=\frac{\Gamma\!\left(\nu+\tfrac12\right)(2z)^{\nu}}{\Gamma\!\left(\tfrac12\right)x^{\nu}}\int_0^\infty\frac{\cos(xu)}{\left(u^{2}+z^{2}\right)^{\nu+\frac12}}\,du,\qquad(4)$$

with Re(ν) ≥ −1/2, x > 0 and |arg(z)| < π/2. This expression, an integral representation for the second kind modified Bessel function, is known by the name of Basset formula [4]. As a particular case of Eq. (4) we consider ν = 0 and then we obtain

$$K_0(xz)=\int_0^\infty\frac{\cos(xu)}{\sqrt{u^{2}+z^{2}}}\,du,\qquad(5)$$

with x > 0 and |arg(z)| < π/2. We remark that another way to obtain this integral representation is to consider a convenient limit involving the second kind Legendre function Q_n(z), given by the integral representation

$$Q_n(z)=\frac{1}{2^{\,n+1}}\int_{-1}^{1}\frac{(1-t^{2})^{n}}{(z-t)^{\,n+1}}\,dt,$$

with |z| > 1. This integral representation can be obtained by means of an integral representation for a hypergeometric function [11] and an expression involving ξ = z + √(z² − 1), where one must take the positive square root for |z| > 1.
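The Basset formula in Eq. (5) is easy to check numerically. The sketch below (our addition, not part of the original paper, and assuming a real positive z) evaluates the Fourier cosine integral with SciPy's QUADPACK Fourier routine and compares it with scipy.special.k0.

```python
# Numerical check of Eq. (5): K0(x z) = int_0^inf cos(x u) / sqrt(u^2 + z^2) du.
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

x, z = 1.3, 0.7
# weight='cos' with an infinite upper limit invokes QUADPACK's QAWF routine,
# designed precisely for Fourier cosine integrals on [0, inf).
val, err = quad(lambda u: 1.0 / np.sqrt(u * u + z * z),
                0, np.inf, weight='cos', wvar=x)
print(val, k0(x * z))  # the two numbers should agree to quadrature accuracy
```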
Fourier cosine transform of Bessel function
In this section we calculate an integral involving a Bessel function, an integral that can be interpreted as a Fourier cosine transform of the zero order Bessel function, by using Sonine's integral representation. We are interested in calculating the integral

$$I=\int_0^\infty \cos(ru)\,J_0\!\left(t\sqrt{u^{2}-\ell^{2}}\right)du,\qquad(6)$$

where r > 0, t > 0 and ℓ is a positive real parameter.
To perform this integral we begin with the integral representation, in the complex plane, for the Bessel function,

$$J_\nu(z)=\left(\frac{z}{2}\right)^{\!\nu}\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\xi^{-\nu-1}\exp\!\left(\xi-\frac{z^{2}}{4\xi}\right)d\xi,\qquad(7)$$

with c > 0, which is known as Sonine's integral [4]. We note that in this expression the contour is the so-called Bromwich contour, the same contour used in the calculation of the inverse Laplace integral transform [11].
Here we are interested in the case ν = 0 only. Then, taking ν = 0 and z = t√(u² − ℓ²), we can write for the Bessel function

$$J_0\!\left(t\sqrt{u^{2}-\ell^{2}}\right)=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\frac{d\xi}{\xi}\,\exp\!\left(\xi-\frac{t^{2}\left(u^{2}-\ell^{2}\right)}{4\xi}\right).\qquad(8)$$

Introducing Eq. (8) in Eq. (6) and changing the order of integration (both integrals are uniformly convergent) we get

$$I=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\frac{d\xi}{\xi}\;e^{\,\xi+\gamma\ell^{2}}\int_0^\infty e^{-\gamma u^{2}}\cos(ru)\,du,$$

where γ = t²/4ξ.
To perform the integral in the variable u we complete the square and obtain

$$I=\frac{\sqrt{\pi}}{t}\,\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\xi^{-1/2}\exp\!\left(\xi\,\frac{t^{2}-r^{2}}{t^{2}}+\frac{t^{2}\ell^{2}}{4\xi}\right)d\xi.$$

The above equation can be identified with Eq. (7), for ν = −1/2, and we get

$$I=\frac{\sqrt{\pi}}{\sqrt{t^{2}-r^{2}}}\left(\frac{w}{2}\right)^{\!1/2}J_{-1/2}(w),\qquad w=i\,\ell\sqrt{t^{2}-r^{2}}.$$

Using the following relation between Bessel and modified Bessel functions, I_ν(x) = i^{−ν} J_ν(ix), and, for the particular value ν = −1/2, the relation

$$J_{-1/2}(x)=\sqrt{\frac{2}{\pi x}}\,\cos x,$$

we finally obtain for our initial integral

$$\int_0^\infty \cos(ru)\,J_0\!\left(t\sqrt{u^{2}-\ell^{2}}\right)du=\frac{\cosh\!\left(\ell\sqrt{t^{2}-r^{2}}\right)}{\sqrt{t^{2}-r^{2}}},\qquad(10)$$

for 0 < r < t, and zero otherwise.
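Equation (10) can be checked numerically in the special case ℓ = 0, where it reduces to the classical Fourier cosine transform of J₀. The sketch below (our addition, not the paper's) damps the oscillatory integrand with an Abel factor e^{−pu}; the damped integral has the exact closed form Re[((p − ir)² + t²)^{−1/2}], which tends to 1/√(t² − r²) as p → 0.

```python
# Abel-regularised check of the l = 0 case of Eq. (10):
#   int_0^inf J0(t u) cos(r u) du = 1/sqrt(t^2 - r^2) for 0 < r < t.
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

t, r = 2.0, 1.0
for p in (0.5, 0.1, 0.02):
    # Damped oscillatory integral, truncated where exp(-p*u) is negligible.
    val, _ = quad(lambda u: np.exp(-p * u) * j0(t * u) * np.cos(r * u),
                  0, 2000, limit=20000)
    exact_damped = np.real(((p - 1j * r) ** 2 + t * t) ** -0.5)
    print(f"p={p:5.2f}  quad={val:.6f}  closed form={exact_damped:.6f}")
print("p -> 0 limit:", 1 / np.sqrt(t * t - r * r))
```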
Green's function
In this section we introduce the Laplace and Hankel integral transforms (the joint transform method) to solve a nonhomogeneous second-order partial differential equation in three independent variables with constant coefficients, i.e.,

$$\left(\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}-a\,\frac{\partial^{2}}{\partial t^{2}}-b\,\frac{\partial}{\partial t}\right)u(x,y,t)=f(x,y),$$

where a and b are positive constants and f(x,y) is a continuous function of the form f(x,y) = f(√(x²+y²)), a radial function.
To solve this partial differential equation it is sufficient to look for the associated Green's function, which is the solution of the non-homogeneous partial differential equation

$$\left(\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}-a\,\frac{\partial^{2}}{\partial t^{2}}-b\,\frac{\partial}{\partial t}\right)G=-\,\delta(x-x')\,\delta(y-y')\,\delta(t-t'),\qquad(11)$$

where we consider the causality condition G = 0 for t < t', with a = 1/c² and c a constant (the velocity of light).
Introducing polar coordinates (translational invariance) x = r cos θ and y = r sin θ, we can rewrite Eq. (11) as

$$\left(\frac{\partial^{2}}{\partial r^{2}}+\frac{1}{r}\frac{\partial}{\partial r}-a\,\frac{\partial^{2}}{\partial\tau^{2}}-b\,\frac{\partial}{\partial\tau}\right)g(r,\tau)=-\frac{\delta(r)\,\delta(\tau)}{2\pi r},$$

where r = |r − r'| and τ = t − t'. This equation is the same equation discussed by Sezginer and Chew [9] in a paper where they obtain a closed form expression of the Green's function for the time-domain wave equation for a lossy two-dimensional medium, using the Fourier transform. This equation is satisfied by the electric field due to a line current source parallel to the z axis in a conductive medium. To calculate the Green's function we must introduce the boundary and initial conditions. For the initial conditions we take the homogeneous conditions

$$g(r,\tau)\big|_{\tau=0}=0,\qquad \frac{\partial}{\partial\tau}\,g(r,\tau)\Big|_{\tau=0}=0,$$

and as boundary conditions we require that g(r,τ) vanishes as r → ∞, which guarantees the existence of the Laplace and Hankel integral transforms. We note that in Ref. [9] the authors obtain an integral representation for the Green's function in terms of a Hankel function.
We observe that the importance of solving this problem by means of the Hankel transform resides in the fact that the second kind modified Bessel function has a well-known integral representation, as we have discussed in section 3, i.e., our Eq. (4).
Firstly, we introduce the Laplace integral transform, ḡ(r,s), in the time variable τ,

$$\bar g(r,s)=\int_0^\infty g(r,\tau)\,e^{-s\tau}\,d\tau,\qquad \operatorname{Re}(s)>0;$$

using the initial conditions we get the non-homogeneous ordinary differential equation

$$\left(\frac{d^{2}}{dr^{2}}+\frac{1}{r}\frac{d}{dr}-\left(a s^{2}+b s\right)\right)\bar g(r,s)=-\frac{\delta(r)}{2\pi r},$$

where, in terms of the constants c and ζ introduced in footnote 2, a = 1/c² and b = 2ζ/c², so that as² + bs = (s² + 2ζs)/c².
Next, we introduce the Hankel integral transform g̃(k,s) in the radial variable r,

$$\int_0^\infty \bar g(r,s)\,r\,J_0(kr)\,dr=\tilde g(k,s),$$

where J₀(x) is a Bessel function, and using the boundary conditions we obtain an algebraic equation for g̃(k,s), which has the solution

$$\tilde g(k,s)=\frac{1}{2\pi}\,\frac{1}{k^{2}+a s^{2}+b s},$$

where β⁻¹ = cζ and the rescaling s/c → s can be used to simplify the notation. Now, our procedure is to evaluate the respective inverse transforms. The inverse Hankel integral transform is given by [12]

$$\bar g(r,s)=\int_0^\infty \tilde g(k,s)\,k\,J_0(kr)\,dk=\frac{1}{2\pi}\,K_0\!\left(r\sqrt{a s^{2}+b s}\right),$$

where K₀(x) is a second kind modified Bessel function.
Another way to calculate this integral is by means of the residue theorem [10].
To recover our Green's function we must calculate the inverse Laplace integral transform, i.e.,

$$g(r,\tau)=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\bar g(r,s)\,e^{s\tau}\,ds.\qquad(13)$$

To perform this integral we can use a suitable contour in the complex plane (modified Bromwich contour) or use the integral representation obtained in section 3. Firstly, writing as² + bs = ((s + ζ)² − ζ²)/c² and using Eq. (5), the second kind modified Bessel function in ḡ(r,s) is replaced by a Fourier cosine integral. Both integrals are convergent, so the order of integration can be exchanged; the Fourier cosine and inverse Laplace, L⁻¹[F(s)], transforms are then related by means of [11] a shift property that converts the factor (s + ζ) into the exponential e^{−ζτ}. The remaining integral is performed with the help of Eq. (10) and we finally get

$$g(r,\tau)=\frac{1}{2\pi}\,e^{-\zeta\tau}\,\frac{\cosh\!\left(\zeta\sqrt{\tau^{2}-r^{2}/c^{2}}\right)}{\sqrt{\tau^{2}-r^{2}/c^{2}}}$$

for 0 < r/c < τ, and zero otherwise. In order to simplify this expression we introduce the Heaviside theta function H(·) and then we obtain

$$g(r,\tau)=\frac{1}{2\pi}\,e^{-\zeta\tau}\,\frac{\cosh\!\left(\zeta\sqrt{\tau^{2}-r^{2}/c^{2}}\right)}{\sqrt{\tau^{2}-r^{2}/c^{2}}}\,H\!\left(\tau-\frac{r}{c}\right),$$

which is the same expression obtained in Ref. [9] by using another procedure.
Taking the limit ζ → 0 in the equation above we get

$$g(r,\tau)=\frac{1}{2\pi}\,\frac{H\!\left(\tau-r/c\right)}{\sqrt{\tau^{2}-r^{2}/c^{2}}},$$

which is the Green's function associated with the lossless case. We conclude this section recalling that, in general, our inverse Laplace transform given by Eq. (13) can be written as an integral along Γ, the modified Bromwich contour [10], with d ≥ 0. Taking the same limit as above we can get

$$\mathcal{L}^{-1}\!\left[K_0(rs)\right](\tau)=\frac{H(\tau-r)}{\sqrt{\tau^{2}-r^{2}}},$$

which can be interpreted as the inverse Laplace transform of the second kind modified Bessel function.
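This last transform pair is also easy to verify numerically: substituting t = r cosh θ in the forward Laplace integral of H(t − r)/√(t² − r²) gives the classical representation K₀(rs) = ∫₀^∞ e^{−rs cosh θ} dθ. The short check below is our addition, assuming NumPy and SciPy are available.

```python
# Check of L[H(t - r)/sqrt(t^2 - r^2)](s) = K0(r s) via t = r*cosh(theta).
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

r, s = 1.5, 2.0
# The substitution removes the integrable singularity at t = r and leaves a
# double-exponentially decaying integrand, ideal for adaptive quadrature.
val, err = quad(lambda th: np.exp(-r * s * np.cosh(th)), 0, np.inf)
print(val, k0(r * s))  # should agree to quadrature accuracy
```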
Concluding remarks
In this paper we point out the importance of the Hankel integral transform in the calculation of a Green's function associated with a problem involving propagation in two independent variables. Using an integral representation for the second kind modified Bessel function, we calculate an integral involving a zero order Bessel function, which can be interpreted as a Fourier cosine transform of the Bessel function, and then we obtain the Green's function associated with the wave equation.
We note that our result can be used in the calculation of the Green's function associated with the wave equation for a damped oscillator and with the telegraph equations. This will be done in a forthcoming paper. | 3,033.8 | 2008-01-01T00:00:00.000 | [ "Physics" ] |
Influence of Softening Mechanisms on Base Materials Plastic Behaviour and Defects Formation in Friction Stir Lap Welding
The AA6082-T6 and AA5754-H22 aluminium alloys were selected as the base materials to fabricate similar and dissimilar friction stir lap welds. Three lap configurations, AA6082/AA5754, AA5754/AA6082 and AA6082/AA6082, were produced using three pin profiles and tested to analyse the role of the plastic behaviours of the base materials on the welding conditions. The macrostructural characterisation was carried out to understand the material flow response and hook defect formation. The mechanical characterisation of the joints was done by microhardness and lap tensile-shear testing. Finite element analysis and phase simulation were conducted to predict the phase dissolution temperatures and the softening kinetics. The welding torque and axial forces registered were analysed to quantify differences in the alloys' flowability during welding. The analysis of the welding machine outputs enabled the conclusion that higher axial forces were registered when the AA5754 alloy was placed at the top of the dissimilar lap joint, showing that the non-heat-treatable alloy has lower flowability than the heat-treatable alloy. These results were associated with the flow-softening of the AA6082 alloy in plastic deformation at high temperatures. The coupled experimental and numerical analysis revealed that the plastic behaviour of the base materials strongly influenced the material flow and, in this way, the hook defect formation and the shear tensile properties of the welds.
Introduction
The outstanding property combinations of aluminium make it a very relevant material for many industries, such as the automotive, shipbuilding and railway industries. With the development of solid-state welding processes, such as friction stir welding (FSW), the joining of aluminium has become more efficient and much less complex. However, although a very relevant increase in knowledge has been achieved in the FSW of aluminium and its alloys over the last 30 years, there are many aspects that still need to be explored further, especially concerning the joining of aluminium alloys with different properties, such as the heat-treatable AA6XXX and non-heat-treatable AA5XXX aluminium alloys. The optimisation of the welding parameters requires a full understanding of the thermomechanical phenomena occurring during the welding of these alloys, which strongly depends on their plastic behaviour at high temperatures and strain rates [1,2]. In spite of the significant research that has been conducted on dissimilar welding of AA5XXX and AA6XXX aluminium, most of the published works have focused on butt joining; for this joint configuration, numerical and experimental studies have been conducted. Three tools, with different pin geometries, were used in the investigation. In the text, each tool will be identified according to the pin design (CN and CL for the conical and cylindrical pin geometry, respectively) and the pin tip diameter. The welding parameters, which are displayed in Table 2, were defined based on the work conducted by Costa et al. [19]. In the next section, the similar welds (S) will be labelled as S6, and the dissimilar welds (D) will be identified according to the base materials positioned in the lap joint. So, the D65 nomenclature will be used to identify the joints produced with the AA6082 and AA5754 alloys as the top and the bottom plates in the joint, respectively, and D56 for the welds produced with the reversed base material positioning. After welding, cross-section samples were collected from the welds and prepared following standard metallographic procedures. The morphological characterisations of the welds were conducted by optical microscopy (OM) and scanning electron microscopy (SEM) using Leica DM4000M LED (Leica Microsystems, Wetzlar, Germany) and PHILIPS XL30 SE (Philips, Eindhoven, The Netherlands) microscopes, respectively. The local weld properties and the joint strengths were assessed by microhardness and lap tensile-shear testing, respectively. The microhardness measurements (HV0.2) were performed along the transverse cross-section of the welds, at the top and the bottom of the lap joint (at middle thickness), using a Struers Duramin tester (Struers, Ballerup, Denmark). The lap tensile-shear tests were performed under quasi-static loading conditions (5 mm/min), using a 5-kN Shimadzu Autograph AG-X universal testing machine (Shimadzu, Kyoto, Japan). Two loading modes were tested in order to quantify the strength mismatch between the advancing (AS) and retreating (RS) sides of the welds. Schematics of both loading modes (AS and RS loading) are illustrated in Figure 2. During testing, the strain distribution in the specimens was acquired by digital image correlation (DIC) using GOM Aramis 5M (GOM, Braunschweig, Germany). The specimens were prepared following the procedures reported in Leitão et al. [1].
Numerical Simulation Methodology
A thermomechanical analysis of the AA6082-T6 similar lap welding was done using the finite element (FE) package COMET [20] in order to correlate the maximum temperature distribution in the lap joints with the precipitation kinetics of the alloy in the different lap weld regions. Since, in the analysis of the experimental results, the properties of the S6 weld were used as a reference for comparison with the dissimilar ones, in the numerical simulation work, only the similar lap welding of the heat-treatable aluminium alloy, with the CN8 tool, was simulated.
The base material plastic behaviour was modelled using the Norton-Hoff constitutive model,

$$\sigma_{eq}=\sqrt{3}\,\mu(T)\left(\sqrt{3}\,\dot{\varepsilon}_{eq}\right)^{m(T)},\qquad(1)$$

in which σ_eq is the equivalent stress, ε̇_eq is the equivalent strain rate, and µ(T) and m(T) are parameters that determine the strength and the strain rate sensitivity, respectively, of the base material. The constitutive law parameters were taken from Dialami et al. [21], following the recommendations presented by Andrade et al. [22]. The temperature-dependent thermal properties of the base material, i.e., the thermal conductivity, specific heat and density, were also taken from Dialami et al. [21].
Norton's friction law,

$$\tau=a(T)\,\left\|\Delta v_{s}\right\|^{\,q-1}\Delta v_{s},\qquad(2)$$

was used to model the friction between the tool and the workpiece. In Equation (2), τ is the friction shear stress, Δv_s is the relative sliding velocity between the tool and the workpiece, q is the sensitivity to the sliding velocity and a(T) is the consistency parameter. Both q and a(T) were selected according to Andrade et al. [23]. The model geometry and mesh discretisation are shown in Figure 3. As shown in the figure, in COMET [21], the tool is modelled in a Lagrangian framework, while the stirring zone and the base material are modelled using the Arbitrary Lagrangian/Eulerian (ALE) and Eulerian frameworks, respectively. Based on the experimental results and on the findings from Costa et al. [24], the 1 mm thick lap plates were modelled as a single plate with 2 mm thickness. The tool was modelled in contact with the upper plate, with only 0.2 mm pin penetration into the bottom half-plate thickness. The full computational model comprised 15,724 nodes and 79,820 tetrahedral elements.
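For illustration, the two constitutive relations above fit in a few lines of code. The Python sketch below is our own, with placeholder parameter values: the actual temperature-dependent µ(T), m(T), a(T) and q used in COMET were taken from Dialami et al. [21] and Andrade et al. [23] and are not reproduced here.

```python
import numpy as np

def norton_hoff_stress(strain_rate_eq, mu_T, m_T):
    """Equivalent flow stress from the Norton-Hoff law (Eq. (1));
    mu_T and m_T are the consistency and rate sensitivity evaluated
    at the local temperature (plain scalars in this sketch)."""
    return np.sqrt(3.0) * mu_T * (np.sqrt(3.0) * strain_rate_eq) ** m_T

def norton_friction_stress(dv_s, a_T, q):
    """Friction shear stress from Norton's law (Eq. (2)) for a relative
    sliding velocity dv_s between tool and workpiece."""
    return a_T * np.abs(dv_s) ** (q - 1.0) * dv_s

# Illustrative calls with hypothetical values (not the paper's parameters):
print(norton_hoff_stress(strain_rate_eq=10.0, mu_T=20.0e6, m_T=0.12))  # Pa
print(norton_friction_stress(dv_s=0.05, a_T=1.0e5, q=0.3))             # Pa
```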
The metallurgical simulation package Thermocalc was used for the prediction of the temperatures at which phase dissolution occurs for the AA6082-T6 alloy. These data were coupled with the maximum temperatures predicted by COMET in order to determine the precipitation kinetics in the different weld regions and to associate it with local base material softening mechanisms. The weight percentages of the alloying elements of the AA6082 aluminium alloy were used as the input for the metallurgical simulation to plot the phase dissolution curves.
Results and Discussion
In the next section, the mechanical and morphological characterisation results will be analysed in order to assess the lap weld defect severities and to relate them with the AA6082 and AA5754 alloy properties and relative positioning in the dissimilar welds. Then, the FSW machine output data (torque and axial force) will be used as parameters for measuring the flowability of the two base materials under the stirring action of the tool. Finally, the flowability of the AA6082 will be explained by analysing, through the numerical simulation results, the influence of the thermal cycles and related precipitation kinetics on the alloy strength and plastic properties at high temperatures.
Macrostructure and Morphology
Macro- and micrographs of the advancing side of the similar and dissimilar welds produced with the CN8 tool are shown in Figure 4. Since the morphology of the welds was found to be more influenced by the base material positioning in the lap joint than by pin geometry, the macrographs of the welds produced with the CN6 and CL6 tools are not displayed in the figure. The entire cross-sections of the welds are also not shown in the figure, since no macroscopic defects, such as voids or tunnels, were formed in the welds, regardless of the tool geometry or of the base material combinations (S and D welds) and their relative positions in the lap joint (D65 and D56 welds). The morphology of the lap interface at the retreating side of the welds was also similar for all the welds and was characterised by a continuous straight interface, with no clear evidence of base material stirring across it. Contrary to this, a well-defined hook-shaped interface was observed at the advancing side of the S6 (Figure 4a) and D56 (Figure 4c) welds, but not in the D65 weld (Figure 4b). The material discontinuity associated with the hook-shaped interface is well illustrated in the SEM micrograph displayed in Figure 4d, which corresponds to the region signalised in the cross-section of the D56 weld. The figure shows that the hook corresponds to an unbonded interface, which results from the upward flow of the lower plate material through the lap interface. The similarities between the hook shape in the S6 and D56 welds, as well as the absence of a well-defined hook in the D65 weld, also make it possible to infer that this defect is influenced by the properties of the base materials being joined, as well as by their relative positioning in the lap joint. Since the precise size of the unbonded interface is very difficult to measure, the severity/size of the hook in the different welds will be evaluated by comparing the joint strengths.
Mechanical Properties
The normalised maximum loads (NML) for all the welds produced in this work are compared in Figure 5. For each weld, the NML was calculated as the ratio between the maximum load of the joint, obtained from the tensile-shear tests, and the maximum load, in uniaxial tension, of the AA6082 alloy, which is the reference material in this investigation. From Figure 5a-c, it can be concluded that lower NML values were registered for the S6 welds, regardless of the welding conditions (CN6, CL6 or CN8 tools) or of the loading mode (AS or RS). It is also possible to conclude that the NML values of the AS-loaded specimens were lower than that of the RS-loaded specimens for all the S6 welds, which agrees well with the formation of the hook-shaped discontinuity at the advancing side ( Figure 4a) independently of the tool used to produce it. The hook-shaped discontinuity reduces the effective thickness of the top plate and, consequently, the load-bearing area of the tested specimens, which is the reason this feature is regularly labelled as a hook defect. The presence of the hook defect also promoted an important asymmetry in strength between the AS and RS-loaded specimens for all the dissimilar welds, except that produced with the CN8 tool. Figure 5 also shows that the NML values registered for the RS-loaded dissimilar welds were different The presence of the hook defect also promoted an important asymmetry in strength between the AS and RS-loaded specimens for all the dissimilar welds, except that produced with the CN8 tool. Figure 5 also shows that the NML values registered for the RS-loaded dissimilar welds were different from those registered for the S6 welds. Despite the similarities in weld morphologies, an improvement in the RS strength, relative to the S6 welds, was noticed for all the welds but mainly for that produced with the CL6 and CN8 tools. These results indicate that mechanical testing is a suitable tool for signalising the presence of lap welding defects not always visible in optical microscopy.
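Restated in equation form (a restatement of the definition given above, not an additional formula from the source), the normalisation reads:

$$\mathrm{NML}=\frac{F_{\max}^{\mathrm{joint}}}{F_{\max}^{\mathrm{AA6082}}}$$

where $F_{\max}^{\mathrm{joint}}$ is the maximum load sustained by the lap joint in the tensile-shear test and $F_{\max}^{\mathrm{AA6082}}$ is the maximum load of the AA6082 reference material in uniaxial tension.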
The comparison of the NML values for the dissimilar welds also enables us to conclude that the highest strength values in AS loading were always registered for the D65 welds. This shows that higher joint strengths can be achieved by combining the AA6082 alloy with the AA5754 alloy and positioning the non-heat-treatable alloy as the bottom plate in the lap joint. The gain in strength associated with this dissimilar base material combination is especially noticeable for the dissimilar joints produced with the CN8 tool, for which symmetry in strength between the AS- and RS-loaded samples was registered, as already noted above. This symmetry is in good agreement with the absence of a well-defined hook-shaped interface in the macrograph of Figure 4b.
The absence of the hook in the D65 welds produced with the CN8 tool is also illustrated in Figure 6, which compares the Von Mises strain maps, at maximum load, for the similar and dissimilar AS-loaded specimens. The figure shows that the D65 joints failed at higher plastic strain values than the S6 and D56 joints, due to the absence of severe hook defects acting as stress concentrators. The same did not happen when the AA6082 alloy was positioned as the bottom plate, even when welding with the CN8 tool.
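For reference, the equivalent (Von Mises) strain plotted in such maps is commonly computed from the in-plane principal strains under an assumption of plastic incompressibility; the exact post-processing used for Figure 6 is not specified in the text, so the following is only the standard form:

$$\varepsilon_{\mathrm{vM}}=\frac{2}{\sqrt{3}}\sqrt{\varepsilon_1^{2}+\varepsilon_1\varepsilon_2+\varepsilon_2^{2}}$$

where $\varepsilon_1$ and $\varepsilon_2$ are the in-plane principal strains measured on the specimen surface.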
The microhardness profiles, registered along the top and bottom parts of the welds produced with the CN8 tool, are illustrated in Figure 7. The average hardness values of the base materials are also displayed in the figure using dashed lines. In Figure 7a, corresponding to the S6 welds, the W-shaped hardness profiles usually associated with AA6082-T6 welds can be observed. The hardness decrease relative to the base material, in both the upper and lower parts of the weld, is one of the reasons for the very low NML values registered in the S6 tensile-shear tests. The similarities between the hardness profiles of the upper and lower parts of the weld also indicate that, although the FSW tool was mostly in contact with the upper plate, there was an almost homogeneous heat distribution across the entire lap joint thickness. This assumption supported the decision to model the two plates as a single plate with double thickness in the numerical simulation of the similar AA6082 friction stir lap welding.
For the dissimilar welds, different hardness profiles were obtained for the top and bottom parts of the lap joints (Figure 7b,c). However, while similar W-shaped profiles were registered for the AA6082 part of the welds, independently of the base material relative positions in the joint, the same was not registered for the AA5754 part. For this alloy, different hardness values relative to the base material were registered in the weld, according to the dissimilar base material combination: when the AA5754 alloy was positioned as the top plate, an increase in hardness relative to the base material was registered, whereas when this alloy was positioned at the bottom of the joint, no significant changes in hardness relative to the base material were registered. The hardness increase in the D56 combination shows that the AA5754 alloy was slightly hardened by the plastic deformation promoted by the tool when the alloy was in contact with it. The increase in the strength of the alloy, relative to the base material, indicates that the stirred material was not fully recovered, resulting in a strain-hardened microstructure after welding.
Thermomechanical Analysis
In order to understand the thermomechanical conditions experienced during welding, the machine output parameters, i.e., the torque and the axial force, were registered and processed. The average torque and axial load values were used to assess the flowability of the base materials during welding, as all the welds were produced in position control. The average values of the torque and axial load, computed considering only the steady-state part of the torque-time and force-time plots, are displayed in Figure 8. From Figure 8a, it can be concluded that the average torque values registered in the dissimilar welding operations were always higher than those registered in the similar welding of the AA6082 alloy. These results show that the presence of the AA5754 alloy, regardless of being located at the bottom or at the top of the lap joint, increased the energy required to perform the weld, indicating a strong influence of the base material properties/interactions on the thermomechanical conditions experienced during welding. Since, according to Table 1, the AA5754 alloy had much lower strength at room temperature than the AA6082 alloy, the base material properties influencing the welding energy must be related to important differences in the flowability of the base materials during welding, i.e., to differences in the plastic properties of the base materials at the stirring temperature. Analysing the axial load values displayed in Figure 8b, it is possible to conclude that much lower force values were required to maintain the tool in position when the AA6082 alloy was the top plate and/or the CN8 tool was used. Since the tool geometry is well known to have a strong influence on the material flow, it is again possible to conclude that the alloys' flowability (plastic properties) had an important influence on the required axial force. The load required to maintain the tool in position was lower when the AA6082 alloy was the top plate, i.e., the base material being stirred by the tool. Lower axial forces must be related to a lower base material resistance to tool penetration and stirring. Since, as already stated, the AA6082 alloy has a much higher room-temperature strength than the AA5754 alloy, the lower resistance of the AA6082 alloy to stirring has to be related to the thermal softening of the heat-treatable alloy under the thermal cycles imposed during welding.
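The averaging procedure described above can be sketched as follows. This is a minimal illustration, assuming the machine log is a sampled time series and that the steady-state window has already been identified; the actual log format and window-selection criterion are not given in the text:

```python
import numpy as np

def steady_state_average(time_s, signal, t_start, t_end):
    """Average a machine output (torque or axial force) over the
    steady-state part of the weld, excluding the plunge and exit
    transients that fall outside [t_start, t_end]."""
    time_s = np.asarray(time_s)
    signal = np.asarray(signal)
    mask = (time_s >= t_start) & (time_s <= t_end)
    return float(signal[mask].mean())

# Hypothetical usage: a torque log sampled at 10 Hz with a steady state
# between 5 s and 25 s of the weld.
t = np.arange(0.0, 30.0, 0.1)
torque = np.where((t >= 5.0) & (t <= 25.0), 12.0, 18.0)
print(steady_state_average(t, torque, 5.0, 25.0))  # -> 12.0
```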
The temperature distributions in the longitudinal and the transverse cross-sections of the similar AA6082 welds, determined using the FE-based analysis, are shown in Figure 9. From the figure, it can be concluded that strong temperature gradients were predicted across the longitudinal and the transverse sections of the welds. It is also possible to observe that peak temperatures of about 550 °C and 450 °C were predicted in the stir zone and at the TMAZ/HAZ interface, respectively. The precipitation kinetics of the AA6082 alloy was predicted, using Thermo-Calc software, to be as illustrated in Figure 10, where the phase dissolution curves are correlated with the maximum temperatures in the different weld zones, represented by coloured squares. From the figure, it was concluded that the predicted dissolution temperatures of the second-phase particles are 133 °C for the Guinier-Preston (GP) zones and 325 °C, 468 °C and 530 °C for the β", β' and β precipitates, respectively.
The figure also shows that the temperatures in the TMAZ were in the same range as the dissolution temperatures of the second-phase particles, which is corroborated by the hardness profiles in Figure 7 for this alloy. The predicted dissolution temperatures also agree well with the transmission electron microscopy (TEM) study by Sahil et al., in which the microstructure contained dissolution zones and recorded the lowest hardness [25]. Similar observations relating precipitates to hardness were also reported by Jandaghi et al. [26]; the lowest hardness recorded at the TMAZ of AA6082 was therefore attributed to the absence of strengthening precipitates. On the other hand, the SZ recorded a slightly higher hardness than the TMAZ, owing to the attainment of a higher temperature than in the TMAZ (Figure 9), sufficient for dissolution followed by the formation of GP zones. This inference is confirmed by a similar observation in the TEM study by Xu et al., where the formation of GP zones was more closely related to the hardness increments than in the TMAZ [27].
Since the hook formation results from the upward material flow at the advancing side of the tool [28], it can be inferred that the formation of this defect will be facilitated when an alloy with very low flow stress is positioned at the bottom of the joint, since it can easily be squeezed upward under the tool pressure and stirring action. Leitão et al. [2], who compared the plastic behaviour of the AA6082 and AA5083 aluminium alloys at high temperatures, concluded that the AA6082 alloy displays flow softening in the temperature range predicted for the TMAZ. The hypothesis of AA6082 softening during welding is corroborated by the prediction of full second-phase particle dissolution at the temperatures expected in the TMAZ and is in very good agreement with the lower axial force values registered when welding with this alloy positioned at the top of the lap joint (Figure 8b). Regarding the AA5754 aluminium alloy, since it is non-heat-treatable, its softening mechanisms are not associated with changes in the structure and density of second-phase particles but with the competing effects of strain-hardening, recovery and recrystallisation phenomena during welding. For this alloy, flow softening is only possible at temperatures well inside the recrystallisation temperature range.
However, the hardness increase in the hardness profile of the D56 welds in Figure 7 shows that the temperatures reached during welding did not remain in the recrystallisation range of this alloy for long and that, for this reason, no flow softening could take place during welding. This is in good agreement with the higher axial forces registered when the AA5754 alloy was placed in contact with the tool during dissimilar welding. The absence of base material softening also explains the absence of upward material flow when welding in the D65 base material combination.
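The correlation between the predicted weld temperatures (Figure 9) and the dissolution curves (Figure 10) can be condensed into a small lookup. The sketch below uses only the dissolution temperatures quoted above, a simplification that ignores the time-at-temperature dependence of the real dissolution kinetics:

```python
# Predicted dissolution temperatures (°C) of AA6082 second-phase particles (Figure 10)
DISSOLUTION_T_C = {"GP zones": 133.0, 'β" precipitates': 325.0,
                   "β' precipitates": 468.0, "β precipitates": 530.0}

def dissolved_phases(peak_temperature_c):
    """Particle families expected to dissolve at a given peak temperature."""
    return [name for name, t in DISSOLUTION_T_C.items() if peak_temperature_c >= t]

print(dissolved_phases(550.0))  # stir zone: all four families dissolve
print(dissolved_phases(450.0))  # TMAZ/HAZ interface: GP zones and β" only
```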
Finally, the small differences in the torque values between the D56 and D65 welds may also be explained by the differences in the mechanical properties of the two alloys at high temperatures. In fact, according to Andrade et al. [22], the torque registered during welding is influenced by the temperatures reached, through their influence on base material softening, and by the volume of material stirred by the tool. Naturally, for a fixed tool geometry, a higher volume of material will be dragged by the tool when the alloy with better flowability, i.e., higher softening, is located at the top of the lap joint. The higher stirred volume prevents a strong decrease in torque during D65 welding, since it balances the effect of the lower flow stress of the AA6082 alloy.
"Materials Science"
] |
Isolation and Characterization of [D-Leu1]microcystin-LY from Microcystis aeruginosa CPCC-464
[D-Leu1]MC-LY (1) ([M + H]+ m/z 1044.5673, Δ 2.0 ppm), a new microcystin, was isolated from Microcystis aeruginosa strain CPCC-464. The compound was characterized by 1H and 13C NMR spectroscopy, liquid chromatography–high resolution tandem mass spectrometry (LC–HRMS/MS) and UV spectroscopy. A calibration reference material was produced after quantitation by 1H NMR spectroscopy and LC with chemiluminescence nitrogen detection. The potency of 1 in a protein phosphatase 2A inhibition assay was essentially the same as for MC-LR (2). Related microcystins, [D-Leu1]MC-LR (3) ([M + H]+ m/z 1037.6041, Δ 1.0 ppm), [D-Leu1]MC-M(O)R (6) ([M + H]+ m/z 1071.5565, Δ 2.0 ppm) and [D-Leu1]MC-MR (7) ([M + H]+ m/z 1055.5617, Δ 2.2 ppm), were also identified in culture extracts, along with traces of [D-Leu1]MC-M(O2)R (8) ([M + H]+ m/z 1087.5510, Δ 1.6 ppm), by a combination of chemical derivatization and LC–HRMS/MS experiments. The relative abundances of 1, 3, 6, 7 and 8 in a freshly extracted culture in the positive ionization mode LC–HRMS were ca. 84, 100, 3.0, 11 and 0.05, respectively. These and other results indicate that [D-Leu1]-containing MCs may be more common in cyanobacterial blooms than is generally appreciated but are easily overlooked with standard targeted LC–MS/MS screening methods.
Introduction
Microcystins (MCs), such as MC-LR (2), are cyclic heptapeptide hepatotoxins (Figure 1) produced primarily by cyanobacterial genera such as Microcystis, Dolichospermum (Anabaena) and Nostoc, and contain the characteristic β-amino acid 3-amino-9-methoxy-2,6,8-trimethyl-10-phenyldeca-4,6-dienoic acid (Adda) at position 5 (Figure 1) [3]. Adda5 and Glu6 appear to be primarily responsible for the characteristic biological activity of MCs [2,3]. Protein phosphatase inhibition is directly related to the toxins' mechanism of action, and animal studies have demonstrated that MCs are potent tumor promoters [1]. The number of identified MCs continues to increase, and more than 250 analogues have been characterized to date [4]. However, due to a lack of standards for these analogues, very few studies have adequately assessed their distribution in natural waters.
As part of feasibility studies for a cyanobacterial matrix reference material [5], a survey of cyanobacterial cultures from Canada was conducted using LC with UV and MS detection. Among the samples analyzed, two Microcystis aeruginosa cultures from Saskatchewan and Alberta, CPCC-464 and CPCC-299, showed the presence of a new microcystin, tentatively identified as [D-Leu1]MC-LY (1) [5], together with the previously reported [6,7] and well-characterized [8] [D-Leu1]MC-LR (3). [D-Leu1]MC-LY (1) was also recently tentatively identified by LC-HRMS/MS in a cyanobacterial bloom sample from southwestern Ontario, Canada [9], indicating that it may be a significant component of natural cyanobacterial blooms in this and other parts of the world. It is therefore necessary to verify the structure and to evaluate its toxicity relative to other MCs, because limited data are available on the toxicological consequences of varying the amino acids at positions 1 and 2.
Results and Discussion
The analysis of M. aeruginosa culture CPCC-464 by LC-UV-MS/MS is shown in Figure 2. A very similar profile was observed with culture CPCC-299, the only differences being in relative peak areas. The LC-UV chromatogram of CPCC-464 (Figure 2a) showed two major peaks due to 1 and 3.
In the same experiment, the MS was operated with a precursor scan using the m/z 135 product ion for Adda, which is characteristic of most MCs [10], and all peaks in the total ion current chromatogram (Figure 2b) were examined. Large scale culturing of CPCC-464 followed by centrifugation provided 188 g of biomass for purification of 1. This material was extracted with 70% MeOH-H2O, then taken through a preparative isolation procedure consisting of hexane partitioning, C18 LC, LH-20 gel permeation, C18 flash chromatography, and semi-preparative HPLC. The total yield of 1 was 28.7 mg, containing a small amount of [D-Leu1,D-Glu(OMe)6]MC-LY and a trace of what is believed to be [D-Leu1,(6Z)-Adda5]MC-LY (observed by LC-MS in selected reaction monitoring mode).
The structure of 1 was elucidated from NMR spectra acquired in CD3OH in order to observe the exchangeable amide protons. The proton NMR spectrum had six resonances in the amide region, with a profile similar to that of a peptide. Individual spin systems from each amide resonance were identified and assigned using 2D 1H-1H DIPSI-2 and 1H-1H COSY correlations (Table 1). Detailed spectra are provided in Figures S3-S10, and an overlay of chemical shifts on the proposed 2-dimensional chemical structure of 1 is shown in Figure S11. Carbon assignments were determined indirectly using 1H-13C HSQC and 1H-13C HMBC 2D NMR spectra. One carbon resonance was not assigned for Leu1 (C1) due to spectral overlap. The Adda unit was assembled with the aid of the HMBC data, which determined the positions of the methyl groups. The trans-configuration of the 4,5-double bond is indicated by the large coupling constant between Adda-H4 and -H5 (15.5 Hz) and by the observation of a ROESY correlation between Adda-H4 and Adda-6-Me, and is consistent with the absence of a ROESY correlation between Adda-H4 and Adda-H5. The second double bond was also trans, as a ROESY correlation was observed between Adda-H5 and Adda-H7 (Figure 3). The relative stereochemistry of C2 and C3 of Adda was determined from the observation of a ROESY correlation between Adda-H3 and both Adda-H5 and Adda-2-Me, while Adda-H2 showed correlations to Adda-NH, Glu-NH, Adda-2-Me and Adda-H4, indicating that H2 and H3 are on opposite faces of the Adda plane (Figure 3). This is consistent with the ca. 9.7 Hz coupling constant between Adda-H2 and -H3. The glutamic acid unit (Glu), N-methyldehydroalanine (Mdha) and erythro-β-methylaspartic acid (Masp) were identified in a similar manner, and their proton and carbon resonances were very similar to those previously published for 3 [8]. Two leucine units were identified by the similarities of their 1H and 13C resonances to those in the BMRB database (http://www.bmrb.wisc.edu/ref_info/; accessed September 2011) and those published for 3 [10]. The relative stereochemistry for H2 and H3 of the Masp unit was determined from the absence of a Masp-H2 to Masp-3-Me correlation and the presence of a ROESY correlation between Masp-H2 and -H3, which places H2 and H3 on the same side of the plane. The tyrosine unit (Tyr) was assigned from the presence of two doublets at 6.99 and 6.62 ppm, characteristic of a para-substituted phenyl ring. The aromatic protons Tyr-H5 and -H9 correlated to a carbon at 36.3 ppm, characteristic of an aromatic amino acid. The accurate mass from LC-HRMS/MS indicated that the substituent on the phenyl ring was a hydroxyl group (Table 2), establishing its identity as Tyr.
The amino acid subunits assigned in the 1H-1H DIPSI-2, 1H-1H COSY and 1H-13C HMBC spectra were linked through correlations observed in the ROESY, NOESY (Figure 3) and HMBC NMR spectra. In the HMBC spectra, a correlation between the N-methyl of Mdha and the carbonyl of Glu, and between the Masp-NH and the leucine carbonyl at 174.5 ppm, linked Glu6 to Mdha7 and Masp3 to Leu2. Additionally, ROESY correlations were observed between Leu2-NH and both Leu1-NH and Masp3-NH. Furthermore, the Tyr-NH showed ROESY or NOESY correlations to Masp3-H3 and Adda5-NH, Adda5-NH showed correlations to Adda-H4 and Adda-H2, and Adda5-H2 showed correlations to Adda-H4, Adda-2-Me and Glu6-NH. These correlations show 1 to contain Leu-Leu-Masp-Tyr-Adda-Glu-Mdha, and the molecular formula established from LC-HRMS requires an amide linkage between the Mdha7 and Leu1 moieties. That this linkage is present is demonstrated by numerous product ions in the HRMS/MS spectrum that are attributable to fragments containing both Leu1 and Mdha7, such as those at m/z 169.1334, 197.1283 and 488.2745 (Table 2). The ROESY correlations observed for 1 (Figure 3), especially those between the amide protons, were consistent with those expected based on the established 3-dimensional solution structure for MC-LR [12], which is reported to be very similar to that of [D-Leu1]MC-LR (3) [8]. Thus, 1 has the same relative stereochemistry as 2 and 3. This is also supported by the close similarity of the 13C NMR chemical shifts of 1 to those reported for [D-Leu1]MC-LR (3) in the same solvent (Table S1). The fact that both 1 and 3 are biosynthesized together by the MC synthetase of M. aeruginosa strain CPCC-464, and that 1 was subsequently found to have similar inhibitory potency to MC-LR (2) against protein phosphatase 2A (PP2A) (Figure 4), both indicate that 1 has the same absolute stereochemistry as 2 and 3 and that 1 is therefore [D-Leu1]MC-LY (Figure 1).
The negative ion MS/MS spectrum obtained from the FS/DIA (full scan/data independent acquisition) LC-HRMS of 1 showed a prominent product ion at m/z 128.0355, consistent with the presence of an MC containing Glu at position 6, and a neutral loss of 112.0190, consistent with the presence of Masp at position 3 [17]. Careful comparison of the positive ion targeted LC-HRMS/MS spectrum of 1 with those of standards of 4 and 5 (Table 2) showed that product ions in 1 that contained amino acid-1 were consistently heavier by 42.047 Da (Leu vs. Ala) than the corresponding product ions from 5, and 42.047 Da (Leu1 vs. Ala1), 92.026 Da (Tyr4 vs. Ala4) or 134.073 Da (Leu1 and Tyr4 vs. Ala1 and Ala4) heavier than the corresponding product ions from 4 that contained amino acid-1, amino acid-4, or both amino acid-1 and -4, respectively (Table 2, Figures S12-S18). Furthermore, the UV spectrum of 1 obtained during LC-UV analysis was identical to that of 5 and differed from that of 2 (Figure S19), suggesting the presence of Tyr in 1 and 5 in addition to the UV-absorbing chromophores also present in 2 (i.e., Adda5 and Mdha7). The LC-MS/MS and LC-UV results are therefore entirely consistent with 1 being [D-Leu1]MC-LY. A portion of the purified 1 was used to prepare a stock solution. This was quantitated using qNMR [18] and LC with chemiluminescence nitrogen detection (CLND) [19], then accurately diluted with 1:1 MeOH-H2O to prepare a reference material (RM) (~7.7 µM). LC-UV analysis of this RM showed the relative concentration of [D-Leu1,D-Glu(OMe)6]MC-LY to be 3.1%. The putative [D-Leu1,(6Z)-Adda5]MC-LY was below the limit of quantitation in LC-UV, but its relative concentration was estimated to be below 0.5% using HRMS/MS. Because MCs containing (6Z)-Adda5 or D-Glu(OMe)6 do not inhibit protein phosphatases [20], the RM of 1 was used for the PP2A inhibition assay without correcting for impurities. In the PP2A assay, the IC50 for a certified RM (CRM) of MC-LR (2) was 0.62 nM (0.62 ng/mL), while that for the RM of 1 was 0.76 nM (0.80 ng/mL) (Figure 4). Matthiensen et al. [7] reported that MC-LR (2) and [D-Leu1]MC-LR (3) had similar toxicities to mice when injected intraperitoneally, and that the IC50 values of 2 and 3 in a PP1 assay were 3.1 and 4.4 nM, respectively. Similarly, Park et al. [6] independently found that 2 and 3 both had the same IC50 value of 0.3 nM in their PP1 assay. Ikehara et al. [21] found that the IC50 of MC-LF (9), which differs from MC-LY (5) only by the absence of a phenolic hydroxyl group on residue-4, was 3-fold higher than that of MC-LR (2) (0.096 vs. 0.032 nM) in their PP2A assay. Taken together with the data presented here, these results suggest that the replacement of D-Ala with D-Leu at position 1 in the MC structure has only a minor effect on the toxicity of MCs or on their inhibitory effects on PP1 and PP2A.
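The residue mass shifts quoted above can be checked from standard monoisotopic amino acid residue masses; a short sketch (the masses below are standard values, not taken from the paper):

```python
# Monoisotopic residue masses in Da (standard values)
ALA = 71.03711
LEU = 113.08406
TYR = 163.06333

print(f"Leu vs. Ala:         {LEU - ALA:.3f} Da")                  # 42.047
print(f"Tyr vs. Ala:         {TYR - ALA:.3f} Da")                  # 92.026
print(f"Leu and Tyr vs. Ala: {(LEU - ALA) + (TYR - ALA):.3f} Da")  # 134.073

def ppm_error(observed_mz, calculated_mz):
    """Mass accuracy in ppm, as reported for the [M + H]+ ions (the Δ values)."""
    return (observed_mz - calculated_mz) / calculated_mz * 1e6
```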
Authentic 3 in a cyanobacterial bloom extract from Poplar Island, MD, USA, whose structure has been verified as [D-Leu1]MC-LR by purification and NMR analysis [11], had identical retention time and product ion spectra to the peak for 3 in CPCC-464 when analyzed by LC-HRMS/MS, thus verifying its identity as proposed by Hollingdale et al. [5]. Traces of [D-Leu1]MC-M(O2)R (8) were also detected in the culture extract, something that was recently also reported by Foss et al. [11] in a cyanobacterial bloom sample, together with 6 and 7. Microcystins 6-8 from this sample and from M. aeruginosa CPCC-464 showed identical retention times and mass spectral characteristics. Methionine sulfoxide analogues of MCs appear to be formed by autoxidation [22], and it appears that the same process can also lead to the formation of the corresponding sulfones. The stereochemistry of 6-8 cannot be verified by LC-MS methods. However, because 7 is presumably biosynthesized in the culture by the same synthetase that produces 1 and 3, and 6 and 8 are autoxidation products of 7, compounds 6-8 can be assumed to have the same stereochemistry as 1 and 3 (Figure 1).
A careful non-targeted LC-MS analysis of a field sample by Foss et al. [11] recently reported more than 20 Leu1-containing MCs in a cyanobacterial bloom, with [D-Leu1]MC-LR (3) as the major component, but no 1 was detected. Including the present study, 1 now appears to have been detected in samples originating from three different locations in Canada [5,9], but so far nowhere else in the world. Geographical differences in the distribution of microcystins have been reported [23,24]. Leu1-containing MCs have been implicated in bird deaths in both Canada and the USA [6,11] and have been reported in samples from cyanobacterial blooms in Brazil and Argentina [7,25,26], as well as in lichens from Argentina, the USA, China, Japan, Norway, Sweden, and Finland [25,27]. Leu1 variants may be more common and widespread than these studies indicate, as many analyses for MCs are conducted using highly targeted LC-MS/MS methods, and the Leu1-containing variants are heavier by 42 Da than the more common (and more commonly targeted) Ala1-containing MCs. Both types of variants would be readily detected if they were targeted in the LC-MS/MS method, or if untargeted LC-MS methods were used. Protein phosphatase inhibition assays, or immunoassays with appropriate cross-reactivities [28,29], can also be expected to detect both D-Ala1- and D-Leu1-containing MCs, although they cannot indicate which type of variant is present.
Conclusions
[D-Leu1]MC-LR (3) has been reported previously and its structure confirmed by NMR spectroscopy [8], whereas 1 had previously only been tentatively identified [5,9]. The results presented here firmly establish the identity of 1 and show that it has similar inhibitory potency towards PP2A as MC-LR (2). A calibration reference material has been prepared that can be used to identify and quantitate 1 in field samples and cultures. Microcystins containing D-Leu at position 1 may be fairly common in the Americas, and the data presented here and elsewhere suggest them to be only slightly less toxic than their more common D-Ala1-containing congeners. It is therefore important to consider the possible presence of a range of D-Leu1-containing MCs when analyzing bloom samples.
General Experimental Procedures
Purified 1 (250 µg) was dissolved in 30 µL of CD3OH for NMR spectroscopy. NMR spectra were acquired on a Bruker Avance III 600 MHz spectrometer (Bruker Biospin Ltd., Billerica, MA, USA) operating at a 1H frequency of 600.28 MHz and a 13C frequency of 150.94 MHz, using TOPSPIN 2.1 acquisition software with a 1.7 mm TXI gradient probe at 277 K. Standard Bruker pulse sequences were used for structure elucidation: a one-dimensional 1H spectrum with composite pulse pre-saturation of water, double quantum filtered 1H-1H COSY, 1H-1H DIPSI-2 (mixing time 120 ms), and the 1H-13C heteronuclear (HSQC and HMBC) and ROESY/NOESY experiments described in the Results. The initial survey of cultures for the presence of MCs was performed by LC-UV-MS using an Agilent (Mississauga, ON, Canada) 1200 LC coupled with a SCIEX (Concord, ON, Canada) API 4000 Q-Trap mass spectrometer, with UV monitoring at 238 nm and positive electrospray ionization MS, with full scans, m/z 135 precursor scans, product ion scans, and selected reaction monitoring. The LC column (50 × 2.1 mm; Agilent) was packed with 1.8 µm Zorbax SB-C18 and maintained at 40 °C. The flow rate was 0.3 mL/min, with a gradient of 10%-80% B over 30 min. Solvent A was water and B was 95% acetonitrile, each with 50 mM formic acid and 2 mM ammonium formate.
LC-HRMS was conducted with a Q Exactive-HF Orbitrap mass spectrometer equipped with a HESI-II heated electrospray ionization interface (ThermoFisher Scientific, Waltham, MA, USA), with an Agilent 1200 G1312B binary pump, G1367C autosampler, and G1316B column oven. Analyses were performed with a 3.5 µm Symmetry Shield C18 column (100 × 2.1 mm; Waters) held at 40 °C, with mobile phases A and B of H2O and CH3CN, respectively, each of which contained formic acid (0.1% v/v). A linear gradient (0.3 mL min⁻¹) was used from 20% to 90% B over 18 min, then to 100% B over 0.1 min, followed by a hold at 100% B (2.9 min), then a return to 20% B over 0.1 min with a hold at 20% B (3.9 min) to equilibrate the column. The injection volume was typically 1-5 µL. In positive ion mode the mass spectrometer was calibrated from m/z 74-1622, the spray voltage was 3.7 kV, the capillary temperature was 350 °C, and the sheath and auxiliary gas flow rates were 25 and 8 units, respectively, with MS data acquired from 2 to 20 min. Mass spectral data were collected using a combined FS/DIA method. FS data were collected from m/z 500-1400 using the 60,000 resolution setting, an AGC target of 1 × 10⁶ and a max IT of 100 ms. DIA data were collected using the 15,000 resolution setting, an AGC target of 2 × 10⁵, max IT set to 'auto' and a stepped collision energy of 30, 60 and 80 V. Precursor isolation windows were 62 m/z wide and centered at m/z 530, 590, 650, 710, 770, 830, 890, 950, 1010, 1070, 1130, 1190, 1250, 1310, and 1370. DIA chromatograms were extracted for product ions at m/z 121.1011, 121.0647, 135.0804, 135.1168, 375.1915, 389.2072, 361.1758, 213.0870, 426.2096, 440.2252, 454.2409, 412.1939, 393.2020, 379.1864, 585.3395, 599.3552, and 613.3709. Putative MCs detected using the above FS/DIA method were further probed in a targeted manner using the PRM scan mode with a 0.7 m/z precursor isolation window, typically using the 30,000 resolution setting, an AGC target of 5 ×
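The DIA window scheme above follows a regular pattern (62 m/z-wide windows on 60 m/z centre spacing, so adjacent windows overlap by 2 m/z); a sketch reproducing the centre list:

```python
def dia_window_centres(first=530, last=1370, spacing=60):
    """Centres of the 62-m/z-wide DIA precursor isolation windows."""
    return list(range(first, last + 1, spacing))

centres = dia_window_centres()
assert centres == [530, 590, 650, 710, 770, 830, 890, 950, 1010,
                   1070, 1130, 1190, 1250, 1310, 1370]
# Each window spans centre ± 31 m/z, e.g. the first covers m/z 499-561.
```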
Toxins and Other Materials
Distilled H2O was further purified using a UV purification system (ThermoFisher Scientific) or a Milli-Q water purification system (Millipore Ltd., Oakville, ON, Canada). MeOH and CH3CN (Optima LC-MS grade) were from ThermoFisher Scientific. Hexanes were from Caledon. Formic acid and trifluoroacetic acid were from Sigma-Aldrich (Oakville, ON, Canada). A certified reference material for 2 (CRM-MCLR (Lot # 20070131)) and in-house reference materials for 4 and 5 were from the National Research Council Canada (Biotoxin Metrology, Halifax, NS, Canada).
Biological Material
M. aeruginosa cultures CPCC-464 and CPCC-299 were obtained from the University of Toronto Culture Collection (now the Canadian Phytoplankton Culture Collection, housed at the University of Waterloo, ON, Canada). CPCC-464 was isolated from Trampling Lake, Saskatchewan, Canada, in July 1998 and deposited by D. Parker as UWOCC#E7. CPCC-299 was isolated from Pretzlaff Pond, Alberta, Canada, in August 1990 and deposited by E. Prepas and A. Lam as sample #45-2A. Bulk cultures of CPCC-464 were prepared in two aerated Brite-boxes (250 and 300 L), which are self-contained fiberglass boxes that optimize temperature and light to maximize biomass production. All cultures were grown on BG11 medium [30,31] made using filtered (1 µm) lake water that had been pasteurized for 6 h at 85 °C. Light was provided by internally mounted cool white fluorescent tubes shaded with nylon mesh for an approximate intensity of 75-100 µmol m⁻² s⁻¹ on a 14:10 h light:dark cycle. Temperature was maintained at 20 °C, and the pH was monitored and remained constant at 8.6. When the cultures reached late exponential stage, 188 g of wet biomass was harvested using a tangential flow centrifuge (IEC Centra MP-4R CEPA Z41 with an 804S rotor (GMI, Ramsey, MN, USA)) at a flow rate of 2-3 L min⁻¹. The biomass was stored at −20 °C. An extract of lyophilized material from a cyanobacterial bloom at Poplar Island, MD, USA, which contained authentic 3 as well as the tentatively identified 6-8, was available from an earlier study [11].
Toxin Isolation from Culture Biomass
Wet cell biomass of CPCC-464 (104.8 g) was extracted four times with 70% MeOH-H2O (400 mL). After centrifugation, the supernatants were pooled (1.7 L) and partitioned with hexanes (700 mL). The hexane portion was back-extracted with 85% MeOH-H2O (300 mL) and combined with the first extract. The cleaned extract was adjusted to 85% MeOH and partitioned a second time with hexane (300 mL). The combined MeOH-H2O extracts were partially evaporated, pre-adsorbed on ~14 g of Waters 55-105 µm prep C18 and packed on top of a vacuum liquid chromatography column. After analyzing the fractions, those containing 1 were purified using a 3 µm Luna C18(2) column.
Preparation of Reference Material
An aliquot containing [D-Leu1]MC-LY (1) (4.3 mg) was evaporated under N2 and dissolved in 3.0 mL of 90% CD3OH-H2O. This stock solution was quantitated directly by 1H NMR, using high purity caffeine as the external calibrant, as described previously [18]. A dilution of the stock solution was prepared with 50% MeOH-H2O for analysis by LC-UV-CLND [19] using an Agilent 1100 HPLC system with a 1050 UV detector connected to a model 8060 CLND (Antek PAC, Houston, TX, USA). Separations were performed on an Agilent 3.5 µm Poroshell SB-C8 column (2.1 × 150 mm) maintained at 40 °C. Isocratic elution was at 0.2 mL/min, using 65% MeOH-H2O (0.2% HCOOH) for 1. The external calibrant was also caffeine, with serial dilutions prepared gravimetrically in deionized H2O. Caffeine was eluted with 40% MeOH-H2O (0.2% HCOOH). The concentration of the contaminating [D-Leu1,D-Glu(OMe)6]MC-LY was measured using the UV detector at 238 nm, with an accurate dilution of the RM of 1 as the calibrant.
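For reference, external-calibrant qNMR quantitation of this kind relies on the proportionality between integrated signal area and the number of contributing protons. In its simplest form (neglecting instrument correction factors, and with the specific integrated signals not stated in the text):

$$c_{\mathrm{analyte}} = c_{\mathrm{cal}} \cdot \frac{I_{\mathrm{analyte}}/N_{\mathrm{analyte}}}{I_{\mathrm{cal}}/N_{\mathrm{cal}}}$$

where $I$ is the integrated peak area and $N$ the number of protons giving rise to the integrated signal, for the analyte and the caffeine calibrant respectively.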
After quantitation, the stock solution was quantitatively transferred, using 50% high purity degassed MeOH-H2O, to a calibrated volumetric flask, then diluted to the mark with the same solvent. The solution was packaged under argon in flame-sealed ampoules using an automatic ampouling machine (Cozzolli, Model FPS1-SS-428, NJ, USA), then stored at −80 °C.
Protein Phosphatase Inhibition Assay
Ampoules of the RM of 1, along with the CRM of 2 (CRM-MCLR), were sent to Abraxis LLC (Warminster, PA, USA) for evaluation of toxicity. PP2A assays were performed using the microcystin-PP2A plate kit according to the kit's standard procedures [32].
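IC50 values such as those reported above are typically obtained by fitting a four-parameter logistic curve to the inhibition data; the kit's exact data treatment is not described here, so the following is only a generic sketch with hypothetical data:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response model."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical PP2A activity (% of control) vs. toxin concentration (nM)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
activity = np.array([98.0, 95.0, 85.0, 62.0, 38.0, 15.0, 5.0])

popt, _ = curve_fit(four_pl, conc, activity, p0=[0.0, 100.0, 0.5, 1.0])
print(f"IC50 = {popt[2]:.2f} nM")
```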
"Chemistry",
"Biology"
] |
Empiric Antibiotic Therapy in Acute Uncomplicated Urinary Tract Infections and Fluoroquinolone Resistance: A Prospective Observational Study
Background: The aims of this study were to determine the antimicrobial susceptibility patterns of urinary isolates from community acquired acute uncomplicated urinary tract infections (uUTI) and to evaluate which antibiotics were empirically prescribed in the outpatient management of uUTI.
Background
Community acquired urinary tract infection (UTI) in women is a prevalent problem in primary care, accounting for approximately eight million ambulatory visits annually in the United States [1]. UTI has several different clinical presentations [1]. Acute uncomplicated UTI (uUTI) occurs in otherwise healthy, non-pregnant women with a normal genitourinary tract [2]. The current treatment of uUTI is empirical, based on the limited and predictable spectrum of etiological microorganisms [3]. However, as with many community acquired infections, resistance to the antimicrobials commonly used in uUTI is increasing, and the susceptibility of microorganisms shows significant geographical variation [4-6]. The most important driving factor of resistance is the overuse of antimicrobials [4-6]. Increasing antimicrobial resistance complicates uUTI treatment by increasing patient morbidity, the costs of reassessment and re-treatment, and the use of broader spectrum antibiotics. Several studies have demonstrated increasing antibiotic resistance levels in E. coli causing community acquired UTI, but most in vitro data come from laboratory-based surveys that often do not record the sex, age, clinical syndrome or other data of interest regarding the patients from whom the urine specimens were collected [7]. Moreover, even within the same country, the susceptibility patterns of microorganisms exhibit regional differences [8]. Appropriate knowledge of local and national antimicrobial resistance trends is of the utmost importance for establishing evidence-based recommendations for the empirical antibiotic treatment of uUTI [8,9].
Therefore, in this prospective observational study, we aimed to obtain data on the resistance rates of common pathogens in female patients aged 18-65 years with uUTI, and to determine which empiric antibiotics are prescribed at our university hospital for the outpatient management of community acquired uUTI.
Data collection and Patients
Female patients aged 18-65 years who presented with symptoms of community acquired acute uUTI to the emergency department and outpatient clinics of a tertiary care hospital (Ankara University Medical Faculty Ibni-Sina Hospital, Ankara, Turkey, which has 1000 beds and admits more than 30,000 patients to its emergency department annually) between 1 March 2005 and 1 September 2006, and to whom empiric antibacterial treatment had been prescribed, were enrolled in this prospective observational study.
Symptomatic uUTI was defined by a set of symptoms involving dysuria, frequency, urgency and suprapubic tenderness, without the presence of fever. Diagnoses were made and recorded by the treating physician. Exclusion criteria were: symptoms for >7 days; signs of pyelonephritis (an oral body temperature >38°C, flank pain or costovertebral angle tenderness); three or more episodes of UTI in the past year; symptoms of UTI in the last three months; previous upper UTI; other functional or structural urinary tract abnormalities; an indwelling or recent Foley catheter; a previous history of genitourinary system operations, including urinary stones; current pregnancy; antibiotic use during the previous three months; hospitalisation for any reason during the past three months; diabetes mellitus; and a known immunocompromised state.
Patients' demographic data, symptoms, physical examination results, urinalysis and urine culture results, pathogenic microorganisms and their resistance rates to antimicrobials, and the prescribed empiric antimicrobial therapy (agent and duration) were recorded by a trained physician or nurse.
Laboratory Methods
Urine specimens were taken after instructing the patient on the midstream technique. Pyuria was detected either by a positive dipstick test or by >5-10 leucocytes in urine centrifuged at 2000 rpm for 5 minutes. Clean-catch urine samples obtained from patients were inoculated onto 5% blood agar and Eosin-Methylene Blue (EMB) agar with 0.01 mL calibrated loops using a semiquantitative technique. Culture plates were incubated for 18-24 h at 37°C. A threshold of >10⁵ organisms per mL of urine was defined as a positive culture. The isolated bacteria were identified by conventional methods, and BBL Crystal Enteric/NF 4.0 identification kits (Becton Dickinson, NY, USA) were used when needed. The susceptibility of each isolated pathogen to antibiotics (the fluoroquinolones (FQ), ampicillin, ampicillin-sulbactam, amoxicillin-clavulanate, co-trimoxazole (TMP-SMX), ceftriaxone, cefuroxime and gentamicin) was determined by the Kirby-Bauer disc diffusion method and by an automated system (Becton Dickinson). Samples were processed in the University Hospital Laboratory according to standard procedures defined by the Clinical and Laboratory Standards Institute (CLSI, formerly the National Committee for Clinical Laboratory Standards) [10].
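As a worked example of the semiquantitative threshold, plating 0.01 mL with a calibrated loop means each colony represents 100 organisms per mL of urine; a minimal sketch:

```python
LOOP_VOLUME_ML = 0.01      # calibrated loop volume used for plating
POSITIVE_THRESHOLD = 1e5   # organisms per mL defining a positive culture

def organisms_per_ml(colony_count):
    """Convert a plate colony count to organisms (CFU) per mL of urine."""
    return colony_count / LOOP_VOLUME_ML

print(organisms_per_ml(1200) > POSITIVE_THRESHOLD)  # 120,000/mL -> True (positive)
```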
The hard paper copies of locally generated reports and susceptibility results were collected and added to the patients' records.
Statistical analysis
The data from the study were entered into MS Excel, coded, and then transferred into SPSS 14.0 for Windows for statistical analysis. Pearson's chi-squared test was used to compare the parameters. Data are presented with 95% confidence intervals (CI). A P value of <0.05 was considered statistically significant.
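The comparison described above can be reproduced outside SPSS. The following is a minimal sketch using hypothetical counts (the real contingency tables are not reproduced in the text), with a Wilson score interval as one common way to obtain the 95% CIs:

```python
import numpy as np
from scipy.stats import chi2_contingency, norm

# Hypothetical 2x2 table: FQ-resistant vs. FQ-susceptible isolates by age group
table = np.array([[12, 88],    # patients aged 18-50 years
                  [15, 60]])   # patients aged 50-65 years
chi2, p, dof, expected = chi2_contingency(table)
print(f"Pearson chi-squared = {chi2:.2f}, P = {p:.3f}")

def wilson_ci_95(successes, n, z=norm.ppf(0.975)):
    """Wilson score 95% confidence interval for a proportion."""
    phat = successes / n
    denom = 1.0 + z**2 / n
    centre = (phat + z**2 / (2 * n)) / denom
    half = z * np.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half
```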
Ethics
Ethical approval was granted for this study by the Ankara University Local Research Ethics Committee (No: 73-1877).
Patient demographics
A total of 429 female patients who were diagnosed with uUTI and received empiric antibiotics were included in the study. Of these, 78.3% (336) were enrolled from the emergency department, 16.6% (71) from internal medicine and 5.1% (22) from urology outpatient clinics. 278 patients (64.8%) belonged to the 18-50 years age group, and 151 (35.2%) were between 50 and 65 years of age. The mean age of the study population was 42.41 (± 14.7) years. 276 (64.3%) of the patients were premenopausal and 153 (35.7%) were postmenopausal.
Discussion
This study shows the distribution of microbial species isolated from patients with uUTI and their resistance rates to antimicrobial agents at a university hospital in Turkey. As previously reported, we found that the majority of patients with uUTI were under 50 years of age and that the predominant bacterium was E. coli [11]. The uropathogens isolated in this patient population were similar to those in other comparable studies [12,13]. It has been reported that in postmenopausal women, owing to the loss of oestrogen and consequent changes in vaginal flora, the etiological agents in uUTI can differ from those in premenopausal patients [14][15][16]. Our data did not reveal such a difference between these two groups of patients.
Main international guidelines recommend empirical therapy in uUTI [17,18]. The efficacy of such empirical therapy depends on periodic assessment of antimicrobial resistance profiles. Although the spectrum of bacteria isolated from patients with uUTI worldwide has remained largely unchanged, with E. coli the most prevalent microorganism, there have been significant changes in the resistance patterns of uropathogens over the past few decades, and antibiotic resistance has become a major problem in uUTI [13]. Increasing antimicrobial resistance has been documented all over the world [11,19,20]. Resistance rates among strains of E. coli isolated from women with uUTI average 30% for both sulphonamides and ampicillin, varying from 17% to 54% in different countries [11]. Trimethoprim resistance ranges from 11% in Scandinavian countries to 34% in Spain and Portugal. FQ resistance was not found in Scandinavian countries but reaches 20% in southern Europe [11]. Recently, Gobernado et al. showed reduced susceptibility of E. coli strains isolated from patients with uUTI to TMP-SMX (26%) and to FQ (16%) in Spain, where antimicrobials can be used without restriction [8]. The variability amongst different centres confirms the need for local resistance prevalence data to be available to the practitioner who treats UTI, especially where empirical therapy is being used for urinary infections.
[Figure 1: Antimicrobial treatment durations (%); * P < 0.001.]
Our study clearly shows that there is a significant increase in TMP-SMX and FQ resistance among E. coli isolates from patients with uUTI in our region, which makes the empirical treatment of uUTI a great challenge. This observation is in accordance with recent studies conducted in Spain and elsewhere in Europe [8,11]. Arslan et al. reported 36% resistance to TMP-SMX and 17% resistance to FQ among 288 E. coli isolates from patients with uUTI in Turkey [21]. Ozyurt et al. likewise found 34% resistance to TMP-SMX and 18% resistance to FQ among community-acquired uropathogenic E. coli isolates from the Istanbul region [22]. The reported rates of resistance among uropathogens may vary depending on whether the study sample consists primarily of outpatients with uUTI or of patients with complicated UTI.
Age has previously been shown to have an impact on resistance rates in UTI [20,21,23,24]. Although our study population was younger than those of comparable studies, E. coli isolates in this study were more likely to be resistant to ampicillin, ampicillin-sulbactam, amoxicillin-clavulanate, ceftriaxone, gentamicin, and FQ in patients over fifty years of age; however, only the difference in cefuroxime resistance was statistically significant.
Several studies have shown that the prescribing habits of physicians are a driving factor for antibiotic resistance [25][26][27]. Goettsche et al. reported that resistance against FQ is strongly associated with a high number of prescriptions for this group of antibiotics [25]. For these reasons, pharmacological surveillance is an essential part of antimicrobial resistance studies. Although the association between antimicrobial agent utilisation and resistance in hospital services is well known, there is a lack of information on the same issue in community-acquired infections [28][29][30].
There are relatively few published studies on variations in treatment for uUTI. McEwen et al. found that 37% of physicians in the United States actually prescribe TMP-SMX, closely followed by FQ (32%), and that the average duration of antibiotic therapy was 8.6 days [31]. Kahan et al. revealed that FQ were the most frequently prescribed drugs (25.57%) in Israel [32]. In our study we found that, although not recommended as first-line antibiotics, FQ were the most frequently empirically prescribed drugs for uUTI in our hospital setting. In the FQ group, ciprofloxacin was the most commonly prescribed drug, and the average duration of therapy was longer than the recommended 3 days. The prescribed antimicrobial agent and the duration of treatment did not differ between age groups.
To our knowledge this is the first study in Turkey that directly evaluates the prescription behaviour of treating physicians for a specific medical condition. The data were not collected from a drug surveillance database or retrospectively from medical records without knowledge of the patient's clinical situation. The results are based on actual physician habits, so they give an accurate description of which antibiotics are prescribed and for how long. Since uUTI is relatively common, widespread inappropriate prescribing increases resistance among uropathogens. Our resistance rates to FQ among E. coli strains were found to be much higher than in other European studies. This may be due to high use of FQ, since it is considered the antimicrobial group of choice in UTI. Inappropriate antibiotic prescribing for UTI was documented in 47.3% of patients in a study from Turkey [33]. In addition to increasing the risk of resistance, current prescribing patterns in our hospital increase medical costs. In this study we also found that newer FQ like moxifloxacin and levofloxacin were prescribed for the treatment of uUTI. It is known that, among susceptible isolates, the more expensive broad-spectrum FQ are not more effective than the cheaper alternatives [1,12].
Limitations
Although it is local, we believe that in this observational prospective study we reached our goal of obtaining precise data on the resistance rates of uropathogens in a university hospital serving the central Anatolia region. At the same time, we had an opportunity to evaluate the actual prescribing habits of our physicians for a medical condition that is frequently treated improperly.
We did not have a chance to evaluate the resistance patterns of E. coli isolates to the other alternative antimicrobials such as nitrofurantoin and fosfomycin due to the lack of antimicrobial discs in our hospital laboratory.
Further studies with larger numbers of isolates from each individual geographical region are needed to confirm our results. Nevertheless, clinicians should be aware of regional resistance rates and take them into consideration before initiating empirical antimicrobial therapy for uUTI.
Conclusion
FQ should be used with caution since resistance to FQ is increasing; rather, they ought to be reserved for the treatment of serious infections such as connective tissue infections, respiratory tract infections, and upper UTI. Where conditions allow, the use of non-fluoroquinolone drugs for the treatment of uUTI should be encouraged when TMP-SMX is not an option. Since uUTI is relatively easy to cure with limited morbidity, agents like nitrofurantoin and fosfomycin should be used instead of FQ.
"Medicine",
"Biology"
] |
Tracing Pilots’ Situation Assessment by Neuroadaptive Cognitive Modeling
This study presents the integration of a passive brain-computer interface (pBCI) and cognitive modeling as a method to trace pilots' perception and processing of auditory alerts and messages during operations. Missing alerts on the flight deck can result in out-of-the-loop problems that can lead to accidents. By tracing pilots' perception and responses to alerts, cognitive assistance can be provided based on individual needs to ensure they maintain adequate situation awareness. Data from 24 participating aircrew in a simulated flight study that included multiple alerts and air traffic control messages in a single-pilot setup are presented. A classifier was trained to identify pilots' neurophysiological reactions to alerts and messages from participants' electroencephalogram (EEG). A neuroadaptive ACT-R model using EEG data was compared to a conventional normative model regarding accuracy in representing individual pilots. Results show that passive BCI can distinguish between alerts that are processed by the pilot as task-relevant or irrelevant in the cockpit based on the recorded EEG. The neuroadaptive model's integration of this data resulted in significantly higher performance of 87% overall accuracy in representing individual pilots' responses to alerts and messages compared to 72% accuracy of a normative model that did not consider EEG data. We conclude that neuroadaptive technology allows for implicit measurement and tracing of pilots' perception and processing of alerts on the flight deck. Careful handling of uncertainties inherent to passive BCI and cognitive modeling shows how the representation of pilot cognitive states can be improved iteratively for providing assistance.
INTRODUCTION
Irrespective of ubiquitous automation, current-generation commercial and business aircraft still rely on pilots to resolve critical situations caused, among other things, by system malfunctions. Pilots need to maintain situational awareness (SA) so they can assume manual control or intervene when necessary. It is essential for flight safety that pilots understand the criticality of flight deck alerts and do not accidentally miss alerts, e.g., due to high workload and cognitive tunneling (Dehais et al., 2014). Human-machine interfaces on the flight deck therefore need to ensure messages are processed correctly to reduce the risk of out-of-the-loop problems (Endsley and Kiris, 1995; Berberian et al., 2017). Failed, delayed, or otherwise inadequate responses to flight deck alerts have been associated with several fatal accidents in the past (Air Accident Investigation and Aviation Safety Board, 2006; Aviation Safety Council, 2016).
Automation has transformed pilots' role from hands-on flying to monitoring system displays, which is ill-matched to human cognitive capabilities (Bainbridge, 1983) and facilitates more superficial processing of information (Endsley, 2017). Furthermore, reduced-crew (e.g., single-pilot) operations can increase demands on pilots in commercial aircraft through the elevated workload of the remaining crew (Harris et al., 2015) and the higher complexity imposed by additional automation (Bailey et al., 2017). More complex automation can impede the detection of divergence between the situation assessments of the human operator and the automated system, neither of which may adequately reflect reality. We believe that neurotechnologies can be used for cognitive enhancement and support of pilots in the face of increased demands (Scerbo, 2006; Cinel et al., 2019). One way to achieve this is by monitoring the pilots' cognitive states and performance during flight deck operations in order to detect the onset of such divergence, e.g., cognitive phenomena that may lead to out-of-the-loop situations. If such cognitive states can be detected, corrective measures may be initiated to prevent or reduce the risk of out-of-the-loop situations and to maintain the high level of safety in aviation.
OOTL and Situation Awareness
Out-of-the-loop problems arise when pilots lack SA (Endsley and Jones, 2011). SA is progressively developed through the levels of perception (1), comprehension (2), and projection (3) of a situation's elements. Missing critical alerts impairs situation perception and inhibits the development of higher SA levels. In a study on pilot errors, the vast majority of errors could be attributed to incorrect perception (70.3%) and comprehension (20.3%) of situations (Jones and Endsley, 1996).
Situational awareness is commonly measured by sampling with the help of probing questions. Probes can give insights into pilots' deeper understanding of a situation as well as whether or not a probed piece of information can be retrieved from memory. However, probing methods either require flight scenarios to be frozen (e.g., Endsley, 2000) or incur extra workload (Pierce, 2012) when assessing pilots' SA. Physiological (e.g., Berka et al., 2006;van Dijk et al., 2011;Di Flumeri et al., 2019) and performance-based metrics (e.g., Vidulich and McMillan, 2000) are less direct measures of memory contents, but they can be used unobtrusively in operations (see Endsley and Jones, 2011, for a summary of measures). As an example, van Dijk et al. (2011) showed how eye tracking can serve as an indicator of pilots' perceptual and attentional processes. The abundance of visual information in the cockpit, however, makes tracing visual attention very challenging and susceptible to selective ignoring and inattentional blindness (Haines, 1991;Most et al., 2005).
Alerts in the cockpit are presented both visually and acoustically, while acoustic stimuli have been shown to be more effective in attracting attention (Spence and Driver, 1997). Physiological responses to alert stimuli may reveal whether or not alerts have been perceived and processed. For example, event-related potentials (ERPs) in operators' electroencephalogram (EEG) were proposed as indicators of attended and unattended stimuli in the assessment of SA (Endsley, 1995). Dehais et al. (2016) demonstrated that ERP components indeed allow missed and processed auditory stimuli in the cockpit to be differentiated, even in single trials (Dehais et al., 2019). They noted that these differences are primarily reflected in early perceptual and late attentional stages of auditory processing. According to Dehais et al. (2019), failure to adequately perceive or process an alert is likely due to excessive demand on cognitive resources in terms of attention and memory at a central executive level. In addition, deterministic modeling of individually processed or missed alerts requires extensive data about the situation and the pilot's state, and neurophysiological measures can help reduce this uncertainty.
Thus, by monitoring which stimuli are provided when, and checking for ERPs at stimulus onset, perception of a situation could be tracked in real time (Wilson, 2000). Performance metrics, in terms of comparing pilots' actual behavior to normative procedures, can then provide information on later SA stages. In contrast to product-focused measures, this process-based approach to situation assessment (Sarter and Sarter, 2003; Durso and Sethumadhavan, 2008) also allows implicit components of SA (Endsley, 2000) that might be overlooked in SA probing to be captured.
Requirements for Cognitive State Assessment
As cognitive states underlying situation assessment are not directly observable, their detection and prediction in this study is approached from different angles by neurophysiological measures and cognitive modeling. Consistent monitoring of a pilot's situation assessment in flight requires tracing what elements of a situation are perceived and processed. Tracing perceptual and cognitive processing can best be done implicitly by interpreting psycho-physiological measures so as not to increase the pilots' load or otherwise interfere with operations. As we are interested in event-related cognitive processing, i.e., the processing of specific visual or auditory alerts, one requirement is that the onset of these alerts is captured accurately (Luck, 2014). This allows the timing of each alert to be synchronized with a measurement of the pilots' neuroelectric activity, which is sensitive to even slight temporal misalignments. This activity can then be analyzed relative to each alert's exact onset, allowing alert-specific cognitive states to be decoded. Such automated, non-intrusive detection of cognitive processing can be done using a passive brain-computer interface (pBCI), based on a continuous measurement of brain activity (Zander and Kothe, 2011;Krol et al., 2018).
If unprocessed alerts are detected, cognitive assistance can be offered depending on the alert's significance for the course of the operation. In order to assess the significance of a missed alert, its impact on SA and the operation can be simulated. This way, critical drops in pilot performance can be anticipated and assistance can be provided to prevent the pilot from getting out of the loop. This simulation can be performed using cognitive models that capture the characteristics of the human cognitive system such as resource limitations.
Cognitive Pilot Models
ACT-R (Anderson et al., 2004) is the most comprehensive and widely used architecture for building models that can simulate, predict, and keep track of cognitive dynamics. It is based on accumulated research on the human brain's modular architecture, where each module maps onto a different functional area of the brain. In its current version 7.14, the ACT-R architecture comprises separate modules for declarative and procedural memory, temporal and intentional (i.e., "goal") processing, and visual, aural, motor, and speech modules for limited perceptual-motor capabilities. While the modules are highly interconnected internally, the exchange of symbolic information between them is constrained by a small number of interfaces that are modeled as buffers (Anderson, 2007). These intermodular connections meet in the procedural memory module (representing the caudate of the basal ganglia; Anderson et al., 2008), where condition-action statements (i.e., "productions") are triggered depending on buffer contents. Actions can be defined, for example, in terms of memory retrieval, directing attention, or manipulating the outside world through speech or motor actions. Based on sub-symbolic mechanisms such as utility learning, spreading activation, memory decay, and random noise, ACT-R models can adapt to dynamic environments and represent average human behavior in a non-deterministic fashion.
ACT-R has frequently been used for modeling pilots' cognitive dynamics (e.g., Byrne and Kirlik, 2005;Gluck, 2010;Somers and West, 2013). It allows for the creation of cognitive models according to specific task descriptions, e.g., a goal-directed hierarchical task analysis (HTA; Endsley, 1995;Stanton, 2006). When this task description focuses on maintaining good SA, a normative cognitive model can be developed that acts in order to optimize SA. Normative models can be compared to individual pilot behavior to detect deviations and to make inferences about individual pilots' SA. Tracing individual behavior (model-tracing; Fu et al., 2006) can suffer from epistemic uncertainty (Kiureghian and Ditlevsen, 2009), for example, when it is unknown why a pilot did not react to an alert. This uncertainty can be reduced by using physiological data alongside system inputs to build richer models of individual performance (Olofsen et al., 2010;Putze et al., 2015;Reifman et al., 2018). However, sensor data inaccuracies can introduce a different, aleatory kind of uncertainty that is hard to assign to individual observations and needs to be considered in design of adaptive models (Kiureghian and Ditlevsen, 2009).
ACT-R has gained popularity in modeling human-autonomy interaction. Putze et al. (2015) showed how an ACT-R model allows interface complexity to be modulated according to operator workload measured in the EEG. Ball et al. (2010) developed a synthetic teammate able to pilot unmanned aerial vehicles and communicate with human teammates based on an extensive model of SA (see also Rodgers et al., 2013; Freiman et al., 2018). Both these models demonstrate how selected human capabilities, such as piloting and communicating (McNeese et al., 2018) or being empathic to operators' cognitive state (Putze et al., 2015), can be allocated to an ACT-R model in human-autonomy teaming.
Neuroadaptive Technology
Neuroadaptive technology refers to technology that uses cognitive state assessments as implicit input in order to enable intelligent forms of adaptation (Zander et al., 2016; Krol and Zander, 2017). One way to achieve this is to maintain a model that is continuously updated using measures of situational parameters as well as the corresponding cognitive states of the user (e.g., Krol et al., 2020). Adaptive actions can then be initiated based on the information provided by the model. Cognitive states can be assessed in different ways. Generally, certain cognitive states result, on average, in specific patterns of brain activity, and can be inferred from brain activity if the corresponding pattern distributions are known. As patterns differ to some extent between individuals and even between sessions, it is usually necessary to record multiple samples of related brain activity in order to describe the pattern distribution of cognitive responses in an individual. Given a sufficient number of samples of a sufficiently distinct pattern, a so-called classifier can be calibrated that is capable of detecting these patterns in real time, with typical single-trial accuracies between 65 and 95% (Lotte et al., 2007).
Importantly, since these cognitive states occur as a natural consequence of the ongoing interaction, no additional effort is required, nor task load induced, for them to be made detectable. It is thus possible to use a measure of a user's cognitive state as implicit input, referring to input that was acquired without this being deliberately communicated by the operator (Schmidt, 2000;Zander et al., 2014). Among other things, this has already been used for adaptive automation. For example, without the pilots explicitly communicating anything, a measure of their brain activity revealed indices of e.g., engagement or workload, allowing the automation to be increased or decreased accordingly (e.g., Pope et al., 1995;Bailey et al., 2003;Aricò et al., 2016).
In the cockpit, each alert can be expected to elicit specific cortical activity, e.g., an ERP. If this activity can be decoded to reveal whether or not the alert has been perceived, and potentially whether and how it was processed, it can be used as implicit input. Since such input can be obtained from an ongoing measurement of the pilots' brain activity, no additional demands are placed on the pilots. By interpreting this information alongside historic pilot responses and further operational parameters, an informed decision can be made about the current cognitive state of the pilots and recommended adaptive steps.
Current Study
The remainder of this article describes the implementation and application of a concept for tracing individual pilots' perception and processing of aural alerts based on neuroadaptive cognitive modeling. In contrast to conventional measures of SA, this method is designed for application in operations that require unobtrusive tracing of cognitive states. The method is applied to explore how to anticipate pilot behavior and when to offer assistance according to their cognitive state. To this end, we test (1) the feasibility of distinguishing between processed and missed alerts based on pilots' brain activity, (2) whether individual pilot behavior can be anticipated using cognitive models, and (3) how the methods of pBCI and cognitive modeling can be integrated. Results are discussed regarding their implications for cognitive assistance on the flight deck and potential benefits for single pilot operations. Limitations are addressed to explore what else is needed in cognitive assistance for the anticipation and prevention of out-of-the-loop situations.
MATERIALS AND METHODS
This research complied with the American Psychological Association Code of Ethics and was approved by the Institutional Review Board at TU Berlin. Informed consent was obtained from each participant.
Participants
Twenty-four aircrew (one female) with a mean age of 49.08 years (SD = 6.08) participated in the flight simulator study. Participants were predominantly military pilots with an average experience of 3230 h of flight (SD = 2330.71), of which on average 51.21 h (SD = 90.76) were performed in the previous year. All participating aircrew had normal or corrected to normal vision, all but two were right-handed.
Procedure
Participating aircrew were asked for information on their flight experience and physical health relevant for physiological data assessment in the simulator. After application of EEG sensors, participants performed a desktop-based auditory oddball training paradigm (Debener et al., 2005). Participants performed 10 blocks during each of which a sequence of 60 auditory tones was presented. Each tone could be either a standard tone of 350 Hz occurring 70-80% of the time, a target deviant tone of 650 Hz (10-15%), or non-target deviant (2000 Hz, 10-15%). There was a variable interval between stimulus onsets of 1.5 ± 0.2 s, and a self-paced break after each block. Each tone lasted 339 ms. Participants were instructed to count the target tones in each block with eyes open, and to verbally report their count after each block to ensure they stayed attentive during the task. Thus, the standard tones represent frequent but task-irrelevant events, target tones represent rare task-relevant events, and the deviants were rare but task-irrelevant.
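For illustration, one block of this oddball sequence could be generated as in the following Python sketch; the proportions and timings follow the description above, while the exact randomization scheme of the original stimulus software is an assumption.

```python
import random

rng = random.Random(42)  # fixed seed for reproducibility of the example

def make_block(n_tones=60):
    """Generate one block of 60 tones with the approximate class proportions."""
    tones = []
    for _ in range(n_tones):
        r = rng.random()
        if r < 0.75:
            tones.append(("standard", 350))    # frequent, task-irrelevant (70-80%)
        elif r < 0.875:
            tones.append(("target", 650))      # rare, task-relevant, to be counted
        else:
            tones.append(("nontarget", 2000))  # rare, task-irrelevant deviant
    return tones

block = make_block()
soas = [1.5 + rng.uniform(-0.2, 0.2) for _ in block]  # onset asynchrony 1.5 +/- 0.2 s
print(sum(1 for label, _ in block if label == "target"), "target tones to count")
```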
Following this, participants were seated in the simulator and briefed on the flying task. For the flight scenario, participants were instructed to avoid communicating with the experimenter during the scenario but were allowed to think aloud and to perform readbacks of air traffic control (ATC) messages just as they would during a normal flight. After the scenario, a debriefing session was conducted in order to collect feedback from participants.
Simulator and Scenario
Participants flew a mission in the fixed-base cockpit simulator of a mission aircraft similar to current-generation business jets certified according to EASA CS-23, which may be operated by a single pilot. The mission was implemented and simulated using the open source flight simulation software FlightGear 3.4. Participants' task was to perform a fictitious routine VIP passenger transport from Ingolstadt-Manching (ETSI) to Kassel (EDVK) airport. To keep the workload associated with basic flying low, the scenario started with the aircraft already airborne at cruise flight level (FL 250) with the autopilot (altitude and NAV mode) engaged. According to the flight management system (FMS) flight plan presented, the remaining flight time was approximately 40 min in fair weather conditions. To maintain speed, thrust had to be adjusted manually, since the aircraft was, like most business jets today, not equipped with auto-thrust. To simulate interactions with ATC and to ensure a consistent flow of the scenario for all participants, pilots were presented with pre-recorded routine ATC instructions relating to flight level and heading changes at fixed time intervals after the start of the scenario.
Also, at pre-defined times, pilots would encounter a series of flight deck alerts of varying, but generally increasing, severity. First, 4 min into the scenario, the main fuel pump in the right wing tank failed, resulting in a caution-level flight deck alert and, subsequently, the display of a simple recovery procedure, which was automatically presented as an electronic checklist. After 6 min, a small fuel leak appeared in the right fuel tank, which initially had no salient flight deck effects and would therefore go mostly unnoticed. Contributing to this was a TCAS traffic advisory (caution-level alert) after approximately 7 min, which would coincide with an ATC instruction to descend due to traffic (e.g., "F-UO, due to traffic, descend and maintain FL 280" or "F-UO, direct TUSOS and descend FL 200"). Moreover, to simulate the effects of an intermittent spurious alert, and to divert pilot attention from the FUEL format to decrease the chance of the pilot noticing the leak, an identical caution-level alert of an electrical bus system failure was triggered four times throughout the scenario. This alert would automatically be removed after 5 s without any pilot action, and before pilots were able to access the associated recovery procedure. When the fuel leak had caused a fuel imbalance exceeding a certain threshold, a caution-level alert relating to the imbalance would be raised. The associated procedure would then guide pilots through several steps intended to find the root cause of the fuel imbalance. The scenario ended once an in-flight fire of the left engine, initiated after 16:40 min and resulting in a warning-level alert, had successfully been extinguished by the pilot. To make sure that all participants encountered all events of the scenario, speed warnings were issued dynamically by the simulated ATC whenever airspeed did not remain within a predefined range. Figure 1 gives an overview of the events' position on the flight path, while Figure 2 shows the vertical profile including the timing of events during the flight task. Normative responses to these events would result in the following respective parameter changes: • ATC 1: Altitude-Select 280 and Speed-Select 220.
EEG
EEG was recorded continuously at 500 Hz using a mobile, wireless LiveAmp amplifier (Brain Products, Gilching, Germany) with 32 active Ag/AgCl electrodes arranged on actiCAP caps according to the international 10-20 system and referenced to FCz. The EEG was synchronized with both the desktop stimuli and the flight events using the Lab Streaming Layer (LSL; Kothe, 2014) software framework to ensure that EEG data could be related to the respective simulator events with adequate temporal resolution. In particular, FlightGear was configured to log the status of each of the alarms and send it at 100 Hz to a UDP port, where a custom Python script listened for incoming data and immediately forwarded each packet through LSL. A change in alert status could then be interpreted as the on- or offset of the alert.
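The forwarding script itself is not reproduced in the article; the following is a hedged reconstruction of such a UDP-to-LSL bridge using the pylsl package, where the port number, packet format, and stream names are assumptions rather than the original protocol.

```python
import socket
from pylsl import StreamInfo, StreamOutlet, cf_string

UDP_PORT = 5500  # hypothetical port FlightGear writes its alert status to

# One string channel; FlightGear sends status packets at a nominal 100 Hz.
info = StreamInfo(name="FlightGearAlerts", type="Markers", channel_count=1,
                  nominal_srate=100, channel_format=cf_string,
                  source_id="fg_alert_bridge")
outlet = StreamOutlet(info)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", UDP_PORT))

while True:
    packet, _ = sock.recvfrom(1024)          # one status line per packet
    status = packet.decode("ascii").strip()  # e.g. "FUEL_PUMP=1,ELEC_BUS=0"
    outlet.push_sample([status])             # LSL attaches the timestamp
    # Downstream, a change between consecutive samples marks the on- or
    # offset of an alert, as described in the text.
```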
ERP Classification
A windowed-means classifier (Blankertz et al., 2011) was calibrated on the EEG data recorded for each individual participant during the oddball paradigm to distinguish between their neurophysiological responses to two different categories of tones. Features were the mean amplitudes of eight consecutive non-overlapping time windows of 50 ms each, starting at 150 ms after onset of the auditory tone, after bandpass filtering the signal between 0.3 and 20 Hz. Shrinkage-regularized linear discriminant analysis was used to separate the classes. A fivefold cross-validation with margins of five was used to obtain estimates of the classifier's parameters and accuracy. We focused on distinguishing between standard versus target tones, i.e., task-irrelevant versus task-relevant events.
The trained classifier was thus capable of distinguishing between the two categories of tones based solely on the participant's brain activity following each tone's onset. Having trained the classifier on detecting differences between these events in an abstract oddball task, we then applied it to the data recorded during that same participant's flight. This allowed us to investigate to what extent flight deck alerts could be reliably identified as the equivalent of "standard" (task-irrelevant) or "target" (task-relevant) tones, based solely on the pilots' EEG data less than 1 second after onset of each event. For each simulated flight event, the classifier returned a number between 1 and 2, signifying that the neurophysiological response was closest to the activity following standard (1) or target (2) tones in the oddball paradigm, respectively.
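A minimal sketch of this classification pipeline is given below, assuming epochs have already been bandpass-filtered (0.3-20 Hz) and cut relative to tone onset; the array layout, variable names, and stand-in data are assumptions, and the plain cross-validation shown omits the margins used in the original analysis.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 500  # sampling rate in Hz

def windowed_means(epochs):
    """epochs: (n_trials, n_channels, n_samples) with t = 0 at stimulus onset.
    Returns mean amplitudes of eight 50 ms windows starting at 150 ms."""
    feats = []
    for start_ms in range(150, 150 + 8 * 50, 50):
        a = int(start_ms / 1000 * FS)
        b = int((start_ms + 50) / 1000 * FS)
        feats.append(epochs[:, :, a:b].mean(axis=2))  # (n_trials, n_channels)
    return np.concatenate(feats, axis=1)              # 8 windows x 32 channels

# Stand-in data: 100 standard (class 1) and 100 target (class 2) epochs.
X = windowed_means(np.random.randn(200, 32, 500))
y = np.repeat([1, 2], 100)

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print(cross_val_score(clf, X, y, cv=5).mean())  # cf. the reported 86% accuracy

# For in-flight events, a graded output between 1 and 2 can be derived
# from the class posteriors, mirroring the classifier output in the text.
clf.fit(X, y)
graded = 1 + clf.predict_proba(X)[:, 1]
```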
Cognitive Model
A normative and a neuroadaptive cognitive model were created in ACT-R following an HTA performed with a subject matter expert for the flight scenario. For the HTA and the cognitive model, good SA level 1 was defined as perceiving and paying attention to all auditory stimuli provided in the scenario. While the adequacy of responses depended on the type of alert or the contents of ATC messages, the time limit for initiating a first reaction to an alert was set to 25 s for all events. As the spurious electrical bus alerts disappeared before pilots were able to react, they are not included in the analysis of this article. The interface between the models and the simulator/FlightGear was implemented as an extended version of ACT-CV (Halbrügge, 2013), where log files of cockpit system states recorded with a sampling rate of 20 Hz served as the ACT-R task environment.
Both the normative and the neuroadaptive model were based on a routine loop consisting of monitoring flight parameters and managing thrust accordingly, in order to have a workload comparable to that of participants in the simulator; however, cognitively plausible modeling of workload and accuracy in thrust management was beyond the focus of this study and was therefore not evaluated. The routine loop was temporarily exited when an aural alert was perceived. The normative model then shifted its attention to read the warning message and initiate the corresponding procedure.
In order to illustrate the model's flow of information from one module to another with respect to ACT-R's neuroanatomical assumptions, associated brain areas as described by Anderson et al. (2008) and Borst et al. (2013) are given in parentheses behind each module. The validation of activity predicted by the model with brain imaging data was beyond the scope of this article. For the example of the fuel pump failure alert, the model would go through the following steps: (1) a chunk representing a sound activates the aural module (mapped to the superior temporal gyrus) by being put in the model's aural-location buffer.
(2) Next, this information allows the procedural module (basal ganglia) to fire a production that starts counting the seconds passed since the alert with the temporal module and that decodes the sound as an alert sound using the aural buffer. This latter information triggers productions that (3) make the model shift its visual attention to the warning display by calling on the visual module's (fusiform gyrus) visual-location buffer and (4) read the written fuel pump failure message using the visual buffer. (5) The following production results in calling up the corresponding pump failure checklist and memorizing its first item (i.e., pressing the right main fuel pump pushbutton) in the imaginal buffer (intraparietal sulcus, representing the model's short-term memory problem state). (6) Then, using its motor module (precentral gyrus), the model acts as if pressing the pump pushbutton (without changing any of the flight parameters) before (7) reading and carrying out the remaining checklist items in the same fashion while it keeps counting. (8) Finally, when the count in the temporal module has reached 25 s, the model checks the flight parameters for the state of the right main fuel pump's pushbutton to verify whether the pilot has carried out the action required by the first checklist item, as memorized in the model's imaginal buffer.
As the normative model assumed that pilots would correctly process each alert, adequate responses were scored as correct classifications of behavior, while inadequate responses (i.e., commission errors) as well as lacking or too-late responses (i.e., omission errors) were scored as incorrect classifications. Adequacy and timeliness of responses were scored according to criteria assessed in the HTA with subject matter experts. For example, if an ATC message requested a flight level change to 300, entering an altitude-select of 300 in the flight control unit within a time window of 25 s was scored as good performance; all other responses, such as entering an altitude-select of 280 or entering the correct altitude-select after 25 s, were classified as a missed ATC message. The fraction of incorrect classifications was treated as epistemic uncertainty (µ_Epistemic), as the model had no information about why the pilot did not respond as expected.
The neuroadaptive model considered individual brain activity when classifying behavior in order to reduce this uncertainty. pBCI data were provided to the model along with the cockpit systems data. After each acoustic alert and message was decoded, the neuroadaptive model checked whether the sound had been processed as task-relevant by the participant according to the pBCI data before shifting its visual attention to read the alert's or message's actual content. To build on and improve the normative model's accuracy, the neuroadaptive model assumed that alerts would be processed correctly. If pBCI data showed that a message was processed as irrelevant (classifier output < 1.5), the model scored lacking or inadequate responses as correct behavior classifications. If the message was processed as relevant but no adequate response could be found, the model scored its classification as incorrect and treated these cases as epistemic uncertainty.
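The decision rule described in the last two paragraphs can be summarized in the following sketch; the event representation and field names are illustrative, not the authors' ACT-R implementation.

```python
def classify_event(event, response, pbci_output, neuroadaptive=True):
    """Return True if the model's behavior classification counts as correct.

    event:       dict with the normatively required action, e.g.
                 {"required": ("altitude_select", 300), "window_s": 25}
    response:    (action, value, latency_s) tuple, or None if no response
    pbci_output: classifier value in [1, 2]; < 1.5 = processed as irrelevant
    """
    adequate = (response is not None
                and (response[0], response[1]) == event["required"]
                and response[2] <= event["window_s"])
    if adequate:
        return True                     # correct for both model variants
    if neuroadaptive and pbci_output < 1.5:
        return True                     # miss explained by the pBCI data
    return False                        # unexplained: epistemic uncertainty

# Example: ATC requests FL 300; the pilot enters 280 within 12 s, but the
# EEG classifier indicates the message was processed as irrelevant.
evt = {"required": ("altitude_select", 300), "window_s": 25}
print(classify_event(evt, ("altitude_select", 280, 12.0), pbci_output=1.3))
```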
Responses were assessed for 10 events for each of the 21 pilots, of which eight were ATC messages, one an amber alert, and one a red alert. Model accuracies were computed across participants as the fraction of correct classifications over all events. The normative and neuroadaptive models were compared by a paired-samples t-test. Effect size is reported as Cohen's d_av (Lakens, 2013). Aleatory uncertainty (µ_Aleatory) was defined as one minus the EEG classifier accuracy. Though aleatory uncertainty affects correct and incorrect classifications alike, an accuracy corrected for aleatory uncertainty was computed for the neuroadaptive model. The distribution of lacking and inadequate responses was tested for a relationship with the EEG classifications by a chi-square test. A detailed description of the cognitive model, including the overall approach and the modeling decisions made, can be found in Klaproth et al. (2020).
RESULTS
Figure 3 shows the grand-average ERPs for the standard and target tones during the oddball experiment at three electrode sites, including Pz. Note that there is a delay: we had previously estimated our stimulus presentation pipeline to contain a lag of approximately 150 ms. This would coincide with the common interpretation that the initial negative peak visible in these plots is the N100. The classifier was trained to detect the differences between single-trial ERPs using all 32 channels and had a cross-validated average accuracy of 86%. Given the class imbalance between the standard and target tones, chance level was not at 50% for this binary classifier; instead, significant classification accuracy (p < 0.05) is reached at 78%. The classes could be separated with significant accuracy for all but three participants, in part due to technical issues with the EEG recording. These three participants were excluded from further analysis.
ERP Classification
The classifier trained on data from the oddball paradigm was subsequently applied to data following four flight events: ATC messages, the spurious electrical bus system failure alert, the fuel imbalance alert, and the fire alert. These classification results provided information to be used in the neuroadaptive cognitive model.
Cognitive Model
The normative model correctly described participants' behavior for 162 of the total 210 observed events (M_Normative = 0.72, SD = 0.09), indicating that participants did not respond as expected to 48 events. The neuroadaptive model was able to simulate 182 of participants' responses correctly (M_Neuroadaptive = 0.87, SD = 0.13, see Figure 4), resulting in a significant added value of including pBCI data compared to the normative model [t(20) = 5.62, p < 0.01, d_av = 1.3]. Figure 5 shows the respective models' accuracies for each of the 21 pilots.
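A sketch of the comparison statistics is shown below; the per-pilot accuracies are simulated placeholders matching the reported means, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
acc_norm = np.clip(rng.normal(0.72, 0.09, 21), 0, 1)                 # normative
acc_neuro = np.clip(acc_norm + 0.15 + rng.normal(0, 0.05, 21), 0, 1)

t, p = ttest_rel(acc_neuro, acc_norm)  # paired-samples t-test

# Cohen's d_av: mean difference divided by the average of both SDs (Lakens, 2013)
d_av = (acc_neuro - acc_norm).mean() / (
    (acc_neuro.std(ddof=1) + acc_norm.std(ddof=1)) / 2)
print(f"t({len(acc_norm) - 1}) = {t:.2f}, p = {p:.4f}, d_av = {d_av:.2f}")
```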
Epistemic uncertainty was µ_Epistemic = 0.28 for the normative and µ_Epistemic = 0.13 for the neuroadaptive model. The added value of the neuroadaptive over the normative model is 0.15; as only this pBCI-derived fraction is affected by the EEG classifier's error (1 - 0.88 = 0.12), the aleatory uncertainty amounts to µ_Aleatory = 0.15 × 0.12 ≈ 0.02, and the neuroadaptive model's accuracy corrected for classifier accuracy is 0.87 - 0.02 = 0.85, with µ_Epistemic = 0.13 and µ_Aleatory = 0.02.
Of the 58 events left unexplained by the normative model, 22 events showed no response to the respective alert or message and 36 showed an incorrect response by the participant. A chi-square test yielded no significant relationship between EEG classifier output (standard/target) and whether an event had a missing or an incorrect response [χ2(1, N = 58) = 1.04, p = 0.31], i.e., pBCI data do not predict whether a participant will respond incorrectly or not at all to missed alerts.
DISCUSSION
The use of increasingly complex and less traceable automation can result in out-of-the-loop situations due to differing assessments of situations by the pilot and the automated system. The results of this study demonstrate the feasibility of implicitly detecting and handling emerging divergence in situation assessment with the help of a neuroadaptive cognitive model.
Using a pBCI for real-time assessment of cognitive responses evoked by events in the cockpit provides insight into subjective situational interpretations. Such information is highly dependent on the context-sensitive, individual state of the operator and can hardly, if at all, be inferred by purely behavioral or environmental measures. In general, we conclude that the combination of pBCI approaches with advanced methods of cognitive modeling leads to an increase in the reliability and capability of the resulting cognitive model, introducing the idea of neuroadaptive cognitive modeling, as shown in this study.
Specifically, the ERP produced by the oddball paradigm shows clear differences between the different categories of tones. In particular, a P300 at Pz clearly distinguishes between target (task-relevant) and standard (task-irrelevant) tones. Based on these differences in single-trial event-related activity, the classifier was capable of distinguishing between target and standard tones with single-trial accuracies significantly higher than chance in the training session.
The improvement in the cognitive model that resulted from including the pBCI output indicates that it is possible to obtain informative cognitive state information based on a pilot's brain activity immediately following an auditory event. The fact that the classifier decoding this information was trained in a desktop setting demonstrates that no elaborate training sessions are required.
Normative model results suggest that individual pilot behavior can be traced and anticipated by a cognitive model. By comparing individual pilots' actions to the normative model behavior, deviations could be detected and inferences about SA could be made without intruding on the task (Vidulich and McMillan, 2000). The 28% epistemic uncertainty, with lacking and incorrect responses evenly distributed, indicates that additional diagnostic information is required for meaningful analysis and support in cases of deviating behavior.
The improvement in accuracy for the neuroadaptive model demonstrates how individual behavior models can benefit from the integration of physiological data. Not only can top-down modeling of human cognition in a task be complemented by bottom-up integration of (neuro-)behavioral data, for example to account for behavioral moderators (e.g., Ritter et al., 2004); such modeling can also provide the contextual information required for situation-dependent interpretation of EEG data. The different types of uncertainty inherent to model tracing and pBCI determined the model's systematic design: to deal with aleatory uncertainty, pBCI data were only used to reduce the fraction of behavior left unexplained by the normative model.
The method's limitations are quantified in terms of uncertainty. Later SA stages need to be monitored to increase accuracy in pilot modeling. Additional physiological indicators might be integrated to further reduce both epistemic uncertainty, through new types of information, and aleatory uncertainty, through joint probability distributions. For example, gaze data such as visual search behavior in response to alerts could be indicative of comprehension problems and could reinforce or challenge pBCI classifications of alerts as perceived or not. Other indicators, for example the error-related negativity component of the ERP, could help to identify situations where operators have low comprehension or are out of the loop (Berberian et al., 2017).
Any cockpit application of passive BCI technology requires a thorough consideration regarding the intrusiveness of the measurement, the intended function(s) enabled by the BCI, as well as the safety and airworthiness implications associated with this function. The intrusiveness perceived by pilots will mainly depend on how well the (dry) EEG electrodes can be integrated for example into the interior lining of a pilot helmet or the headband of a headset. The intended cockpit (assistance) function, in turn, will mainly determine the airworthiness certification and associated validation effort required.
If the system described in this article is merely used to enhance the efficiency of the already certified flight deck alerting system of an aircraft, the design assurance level required from an airworthiness and safety perspective could be lower compared to a solution where a passive BCI-based cockpit function is an integral part of the aircraft's safety net. In the latter case, the airworthiness effort will be substantial irrespective of whether AI and/or machine learning are used. Although evaluated offline after data collection, the methods presented in this paper are well suited to being applied online without substantial modifications. While the abstract oddball task can replace more realistic alternatives for gathering training data, and thus substantially shorten the time required to do so, it may still be necessary to gather new training data before each flight due to the natural non-stationarity of EEG activity. For a truly walk-up-and-use neuroadaptive solution, a subject-independent classifier would be required (e.g., Fazli et al., 2009). Monitoring pilots' ERPs in response to alerts has diagnostic value. Detection of inattentional deafness in early, perceptual ERP components could trigger communication of the alert in alternative modalities (e.g., tactile or visual; Liu et al., 2016). For unattended alerts detected in later ERP components, cockpit automation could prioritize and choose to postpone reminders in cases of minor criticality. Withholding information that is not alert-related can be effective in forcing pilots' attention onto the alert, but it may be accompanied by a decrease in pilots' authority and associated risks, for example to resilience in unexpected situations and to technology acceptance.
The simulator setting likely introduced biases in task engagement and density of events in the scenario. Measuring system input from pilots while they monitor instruments in real flight conditions may not provide enough data to make inferences about cognitive states. This emphasizes the need for additional behavioral measures (e.g., neurophysiological activity, speech, or gaze) to provide individual assistance.
Pilots are capable of anticipating complex system behavior but reports of automation surprises and out-of-the-loop situations stress the importance of a shared understanding of situations by pilot and cockpit automation. Increasing complexity of automation should therefore go together with a paradigm shift toward human-autonomy teaming based on a shared understanding of the situation. This includes bi-directional communication whenever a significant divergence in the understanding of a situation occurs to provide information missing for shared awareness of the human autonomy team (Shively et al., 2017). Anticipation of divergences and understanding human information needs to ensure shared awareness remains a challenge for human autonomy teaming (McNeese et al., 2018). By addressing divergences in human and autonomy situation assessment, critical situations might be prevented or at least resolved before they result in incidents or accidents. Tracing pilots' perception of cockpit events represents a first step toward this goal.
CONCLUSION
A pBCI allows implicit monitoring of whether pilots have correctly processed alerts or messages, without intruding on the mission, using a classifier trained in a desktop setting. The integration of pBCI data in cognitive pilot models significantly improves their accuracy in tracing pilots' situation assessment. Tracing pilots' situation assessment through neuroadaptive cognitive modeling may facilitate the early detection of divergences in situation assessment in human-autonomy teams. While sensor obtrusiveness and computational limitations may obstruct application, neuroadaptive cognitive modeling could help trace pilots' situation awareness and enable adaptive alerting.
DATA AVAILABILITY STATEMENT
The datasets presented in this article were mainly collected using aircrew employed by Airbus. For privacy and confidentiality reasons, they are not readily available; requests to access the datasets should be directed to <EMAIL_ADDRESS>.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethik-Kommission Fakultät V - Verkehrs- und Maschinensysteme, Institut für Psychologie und Arbeitswissenschaft, TU Berlin. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
OK, CV, LK, TZ, and NR designed the experiment. CV designed the flight task. LK and TZ designed the EEG trainings. OK and NR created the cognitive model. MH created the interface between flight simulator data and ACT-R. OK, CV, and LK performed the experiments and drafted the manuscript. OK and LK analyzed the data. OK, CV, LK, MH, TZ, and NR edited, revised, and approved the manuscript. All authors contributed to the article and approved the submitted version.
"Computer Science"
] |
Effect of Fabrication Parameters on the Performance of 0.5 wt.% Graphene Nanoplates-Reinforced Aluminum Composites
Aluminum composites reinforced by graphene nanoplates (GNP) with a mass fraction of 0.5% (0.5 wt.% GNP/Al) were fabricated using cold pressing and hot pressing. An orthogonal test was used to optimize the fabrication parameters. Ball milling time, ball milling speed, and ultrasonication time have the largest influence on the uniformity of the graphene in the composites. The microstructure, interfacial properties, and fracture morphology of the composites obtained with different parameters were then further analyzed. The results show that ball milling time and ball milling speed have obvious influences on the mechanical properties of the composite. In this paper, when the ball milling speed is 300 r/min and the ball milling time is 6 h, the dispersion uniformity of graphene in the 0.5 wt.% GNP/Al composite is the best, the agglomeration is the lowest, and the mechanical properties of the composites are the best: the tensile strength is 156.8 MPa, 56.6% higher than that of pure aluminum fabricated by the same process (100.1 MPa), and the elongation is 19.9%, 39.8% lower than that of pure aluminum (33.1%).
Introduction
Aluminum is a structural material commonly used in aerospace, automobile, machinery, electronics, and other fields because of its light weight, high strength, good ductility, easy processing, and so on [1][2][3]. The development of science and technology places higher requirements on the quality and performance of materials, so traditional Al and its alloys no longer meet the needs of modern society [4]. Graphene is an ideal reinforcement for metal matrix composites due to its good friction properties [5][6][7][8][9], high conductivity [10,11], high thermal conductivity [12,13], good mechanical properties [14][15][16][17][18][19][20][21][22], and so on. GNP (graphene nanoplates), composed of a small number of graphene layers, possess properties similar to those of monolayer graphene, with high yield and low price. In addition, GNP retain some residual oxygen groups, which have a positive effect on the mechanical properties as well as the interfacial structure of the composites.
In recent years, research on graphene-reinforced Al composites has gradually increased. Yan et al. [16] used ball milling and the hot isostatic pressing method to prepare 0.5 wt.% GNP/Al
Materials
The pure Al powder produced by Shandong Sitaili Metal Materials Co., Ltd. (Jinan, China) was used as the matrix whose particle size is approximately 30 µm; graphene was purchased from Qingdao Huagao Diluted Energy Co., Ltd. (Qingdao, China) with a specific surface area of approximately 400 m 2 /g and approximately 3-5 layers with a particle size of 0.1-5.0 µm. Figure 1 shows the scanning electron microscopy (SEM) image of pure Al powder and the transmission electron microscopy (TEM) image of graphene. The pure Al powder particles are nearly spherical and have a uniform particle size, as shown in Figure 1a. Figure 1b is a TEM image of the graphene. The GNP is feathery and translucent, with typical folded structure characteristics. As seen from Figure 1b in the lower right corner, the number of graphene layers is approximately 3-5. In Figure 1b, the electron diffraction pattern shows that the GNP belongs to a polycrystal, and the formation of the diffraction ring is due to the electron beam hitting the fold of the GNP. The different orientations of the different regions on the fold lead to the mixing of the diffraction spots of different orientations to form the diffraction ring.
Fabrication of the 0.5 wt.% GNP/Al Composite Powder
First, the pre-weighed graphene is mixed with an appropriate amount of industrial ethanol and placed in an ultrasonic cleaner (KQ-300E, Kunshan Ultrasonic Instrument Co., Ltd., Kunshan, China) to reduce the agglomeration of the graphene and increase its dispersibility. The ultrasonication time was set to 60 min or 90 min. Then, the graphene dispersion prepared in ethanol and the Al powder are loaded together into a tank of a planetary ball mill (XQM-8, Changsha Tianchuang Powder Technology Co., Ltd., Changsha, China) for ball milling. Stearic acid is used as the process control agent, which can increase dispersibility and prevent cold welding. Nitrogen is used as a shielding gas to prevent the Al powder from being oxidized.
Orthogonal Test Design of the Fabrication Parameters
In this paper, ball milling time, ball milling speed, ball-to-powder (B-P) ratio, ultrasonication time, and stearic acid content were selected as the factors to study. To simplify the experimental process, the orthogonal test method was used to distribute the five fabrication parameters reasonably; the orthogonal distribution table of parameters (L8(4^1 × 2^4)) is shown in Table 1. For brevity, the five parameters are named X1, X2, X3, X4, and X5. As shown in Table 2, X1 has 4 levels, while X2, X3, X4, and X5 each have 2 levels.
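As Tables 1 and 2 are not reproduced here, the sketch below illustrates how a mixed-level L8(4^1 × 2^4) design of this kind can be laid out, by merging two columns of the standard L8(2^7) array into a single four-level column. The levels of X1, X2, and X4 are taken from the text; the second levels of X3 and X5 are hypothetical placeholders (only 5:1 and 1.5% are stated, in the conclusions).

```python
# A minimal sketch of a mixed-level L8(4^1 x 2^4) orthogonal array.
# Each row is one test run: (X1 level 1-4, then X2..X5 levels 1-2).
L8_MIXED = [
    (1, 1, 1, 1, 1),
    (1, 2, 2, 2, 2),
    (2, 1, 1, 2, 2),
    (2, 2, 2, 1, 1),
    (3, 1, 2, 1, 2),
    (3, 2, 1, 2, 1),
    (4, 1, 2, 2, 1),
    (4, 2, 1, 1, 2),
]

# Factor levels taken from the text where stated; starred entries are assumed.
LEVELS = {
    "X1 milling time (h)":      [2, 4, 6, 8],
    "X2 milling speed (r/min)": [200, 300],
    "X3 ball-to-powder ratio":  ["5:1", "10:1*"],  # second level assumed
    "X4 ultrasonication (min)": [60, 90],
    "X5 stearic acid (wt.%)":   [1.5, 3.0],        # second level assumed
}

def expand(run):
    """Map one row of level indices to concrete parameter settings."""
    return {name: levels[idx - 1] for (name, levels), idx in zip(LEVELS.items(), run)}

for test_no, run in enumerate(L8_MIXED, start=1):
    print(test_no, expand(run))
```

Each of the eight runs then corresponds to one composite powder batch, and each factor level appears a balanced number of times, which is what allows the per-level averaging used later in the range analysis.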
A particle size analyzer (Hydro-2000 Mu (A), Malvern Instruments Ltd., Malvern, UK) was used to measure the particle size distribution of the composite powders after ball milling. Figure 2 shows the particle size of the 0.5 wt.% GNP/Al composite powder corresponding to test numbers 1-8 in Table 1. As the milling time increases, the particle size of the composite becomes increasingly uniform. It can be seen from Figure 3 that, during ball milling, the average particle size of the Al powder increases gradually due to cold welding, but this growth slows over time. The slowing is less pronounced at a ball milling speed of 300 r/min than at 200 r/min because the impact of the balls on the particles is stronger at the higher speed, so the cold welding effect is less severe.
The purpose of ball milling is to achieve uniform dispersion of the graphene and a uniform particle size of the composite powder. In the initial stage of this work, the standard deviation value (SDV) of the particle size of the composite powder samples after ball milling, shown in Figure 2, is used as the optimization objective of the orthogonal test; its reliability is analyzed later. The SDV is calculated as
$$\mathrm{SDV} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(D_i - \overline{D}\right)^2}$$
where $n$ is the number of particles, $D_i$ is the particle size of the $i$-th particle, and $\overline{D}$ is the average particle size of the composite powder.
The average SDV of each factor at each level is calculated to study the influence of the parameters on the dispersion of the composite, as shown in Table 3. The average SDV shows different trends across the factor levels, which reflect the influence of each factor level on the particle size homogenization of the composite powder after ball milling. As seen from Table 3, with increasing ball milling time (X1), the average SDV first rose from 0.39 (2 h) to a maximum of 2.0 (4 h), then dropped to 0.055 (6 h), a decrease of 97.25%, and then increased to 1.775 (8 h). This shows that ball milling time has a very pronounced effect on the particle size homogenization of the composite powder, and the particle size is most uniform at a ball milling time of 6 h. With increasing ball milling speed (X2), the average SDV decreased from 3.52 (200 r/min) to 2.065 (300 r/min), a decrease of 36.5%, indicating that ball milling speed also has a significant effect and that 300 r/min yields a more uniform particle size. When the B-P ratio (X3) changed, the average SDV decreased from 2.90 to 2.675, a decrease of 7.8%, indicating that the ball-to-powder weight ratio has little impact on the particle size homogenization. When the ultrasonication time (X4) changed, the average SDV increased from 2.19 to 3.4, an increase of 35.6%, indicating an obvious effect. When the stearic acid content (X5) changed, the average SDV increased from 2.5825 to 2.755, an increase of 6.3%, indicating little effect. In summary, ball milling time (X1), ball milling speed (X2), and ultrasonication time (X4) are the three main factors affecting the particle size of the composites. In the following sections, the effects of ultrasonication time (X4), ball milling time (X1), and ball milling speed (X2) on the properties of the composites, and the underlying mechanisms, are analyzed further.
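A minimal sketch of the SDV calculation and the level-averaging (range analysis) behind Table 3 is given below; the per-test SDV values used here are hypothetical placeholders, since only the factor-level averages are reported in the text.

```python
import numpy as np

def sdv(sizes):
    """Standard deviation of particle sizes D_i about their mean (the SDV)."""
    d = np.asarray(sizes, dtype=float)
    return np.sqrt(np.mean((d - d.mean()) ** 2))

# Hypothetical SDV of each of the eight orthogonal tests (tests 1..8).
sdv_per_test = np.array([0.4, 2.1, 0.1, 1.8, 3.5, 2.0, 2.9, 2.7])

# Level index (1-based) of factor X1 (milling time) in each test,
# following the L8 layout sketched above.
x1_levels = np.array([1, 1, 2, 2, 3, 3, 4, 4])

# Average SDV at each X1 level: the quantity compared in Table 3.
for level, time_h in zip([1, 2, 3, 4], [2, 4, 6, 8]):
    mean_sdv = sdv_per_test[x1_levels == level].mean()
    print(f"X1 = {time_h} h: mean SDV = {mean_sdv:.3f}")
```

The same per-level averaging is repeated for X2 through X5; the factor whose level means differ most is the one with the strongest influence on dispersion uniformity.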
Powder Metallurgy (PM)
The composite powders with the different parameter sets from the orthogonal test were cold-pressed in a mold of φ50 mm × 3 mm using a hydraulic press (WAW-600, Jinan New Times Assay Instrument Co., Ltd., Jinan, China) at room temperature. The pressure was 500 MPa, the compression rate was 2 kN/min, and the holding time was 5 min. The cold-pressed billets were then hot-pressed in a vacuum thermocompressor (ZRC85-25T, Jinan Shanda Nonferrous Metal Casting Co., Ltd., Jinan, China) at 600 °C for 1 h under a pressure of 30 MPa, supplied by another φ50 mm mold, and then furnace-cooled. A pure aluminum billet was also fabricated by the same process.
Mechanical Property Tests and Microstructural Experiments
Samples for tensile testing, optical microscopy (OM), and SEM were cut by wire electrode discharge machining from the hot-pressed billets of the composite and of pure Al. The dimensions of the tensile samples [28,29] are shown in Figure 4. The tensile tests were carried out on a universal testing machine (WDW-5G, Jinan Hengsi Shanda Co., Ltd., Jinan, China) at room temperature with a tensile rate of 1 mm/min. Three samples were tested for each condition.
A field emission SEM (Sigma-300, Carl Zeiss AG, Oberkochen, Germany) was used to observe the morphology of the composite powder after milling and the tensile fracture morphology of the hot-pressed billets, and to analyze the elemental distribution of the composite by energy-dispersive X-ray spectroscopy (EDS). A Raman spectrometer (Renishaw-2000, Renishaw, London, UK) was used to characterize the graphene, the composite powders, and the cold-pressed and hot-pressed composite billets. X-ray diffraction (D/max-2500/PC, Rigaku, Japan) was used to analyze the phases of the composite after vacuum hot pressing.
Results and Discussion
The effects of the three major fabrication parameters on the properties of the 0.5 wt.% GNP/Al composites were studied via comparison of the macroscopic mechanical properties or microscopic characterization.
Effect of Ultrasonication Time (X4) on the Dispersion of Graphene
Graphene has many surface atoms and dangling bonds and tends to agglomerate to reduce its surface activity, which negatively affects the mechanical properties of the composites. In addition, because of the large density difference between the Al matrix and graphene and their poor wettability, it is difficult for graphene to disperse uniformly in the Al matrix. The polarity of ethanol is similar to that of graphene, and the van der Waals force between ethanol and graphene is greater than that between graphene layers; thus, under ultrasonic agitation, graphene in ethanol solution unfolds more easily, dispersing and extending without structural destruction. Figure 5 shows the Raman spectra of the GNPs before and after ultrasonication. There are three main characteristic peaks in the Raman spectrum of graphene: the D peak around 1350 cm−1, the G peak around 1580 cm−1, and the 2D peak around 2670 cm−1. The smaller the intensity ratio of the D peak to the G peak (ID/IG), the more structurally intact the graphene; and the lower the intensity ratio of the 2D band to the G band (I2D/IG), the larger the number of layers. As shown in Figure 5, the ID/IG value of the original graphene is 1.21 and its I2D/IG value is 0.269; after 60 min of ultrasonication, ID/IG is 1.19 and I2D/IG is 0.283; after 90 min, ID/IG is 1.06 and I2D/IG is 0.622. The decrease of ID/IG and increase of I2D/IG after 60 and 90 min of ultrasonication indicate that the graphene unfolded and its layer number decreased under the ultrasonic action; the longer the ultrasonication time, the better the dispersion of the graphene.
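The ratio analysis above can be sketched as follows. This minimal example estimates ID/IG and I2D/IG by taking peak heights as the maximum intensity in a window around each nominal band position; a rigorous treatment would fit the bands (e.g., with Lorentzian profiles). The spectrum below is synthetic, standing in for measured data.

```python
import numpy as np

def band_intensity(wavenumber, intensity, center, half_width=50.0):
    """Maximum intensity within +/- half_width cm^-1 of the band center."""
    mask = np.abs(wavenumber - center) <= half_width
    return intensity[mask].max()

def raman_ratios(wavenumber, intensity):
    i_d = band_intensity(wavenumber, intensity, 1350.0)   # D band
    i_g = band_intensity(wavenumber, intensity, 1580.0)   # G band
    i_2d = band_intensity(wavenumber, intensity, 2670.0)  # 2D band
    return i_d / i_g, i_2d / i_g

# Example with a synthetic spectrum (placeholder for real measurements).
wn = np.linspace(1000.0, 3000.0, 2000)
spec = (1.2 * np.exp(-((wn - 1350) / 30) ** 2)    # D band
        + 1.0 * np.exp(-((wn - 1580) / 25) ** 2)  # G band
        + 0.3 * np.exp(-((wn - 2670) / 40) ** 2)) # 2D band
id_ig, i2d_ig = raman_ratios(wn, spec)
print(f"I_D/I_G = {id_ig:.2f}, I_2D/I_G = {i2d_ig:.2f}")
```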
Microstructural Morphology of the Composite Powder
During high-energy ball milling, the deformation of the powder generally proceeds in three stages: from spherical to flattened, cold welding into larger plate-like particles, and fracturing into smaller particles. Meanwhile, the graphene is first dispersed on the surface of the flaky Al powder and then cold welded into the Al particles. Figure 6 shows SEM images of the composite powder after ball milling. Figure 6a,c,e,g, on the left, are low-magnification SEM images of the composite corresponding to a ball milling speed of 200 r/min and ball milling times of 2 h, 4 h, 6 h, and 8 h, respectively, and Figure 6b,d,f,h are the corresponding high-magnification images. The same applies to Figure 6i-p, but at a ball milling speed of 300 r/min. Comparing Figure 6a,c,e,g, and likewise Figure 6i,k,m,o, the scatter of the powder particle size first increases, drops to its minimum at a milling time of 6 h, and then increases again at 8 h, following the same trend as the SDV values in Table 3. When the milling time is 2 h, there are more free GNPs, as shown by the white arrow in Figure 6b. As the milling time increases, the integrity of the graphene is gradually destroyed, forming smaller graphene particles, as shown by the white arrows in Figure 6f,h. However, even at a ball milling time of 8 h, the graphene fails to bond well with the surface of the Al powder at 200 r/min, as shown in Figure 6h. At a speed of 300 r/min and a ball milling time of 6 h, the GNPs were evenly dispersed, partly covering the surface of the Al powder and partly embedded inside the particles, indicating good infiltration, as shown in Figure 6n. This was confirmed by EDS surface scanning, as shown in the mapping in Figure 7. When the milling time is 8 h, the Al powders become flatter, but the GNPs are destroyed, their size is reduced, and the coating is slightly worse than at 6 h, as shown in Figure 6p. Figure 8c shows the change in the ID/IG value of the 0.5 wt.% GNP composite powders with ball milling time. It can be seen from Figure 8 that, at the same speed, ID/IG gradually increases with ball milling time, indicating that the graphene structure is gradually damaged. At the same ball milling time, the Raman ratio ID/IG at 300 r/min is slightly higher than at 200 r/min, because the higher speed causes stronger impacts and a certain degree of damage to the graphene structure.
However, the degree of damage is limited, and at a ball milling speed of 200 r/min the coating and bonding between the graphene and the matrix are worse than at 300 r/min, as shown in Figure 6. Figure 9a,b show the Raman spectra of the cold-pressed billets for ball milling speeds of 200 r/min and 300 r/min and different milling times. Figure 10 compares the Raman values before and after cold pressing. Given the sample-to-sample variation, the difference before and after cold pressing can be ignored, which means that the cold pressing process does not damage the structure of the graphene.
Vacuum Hot-Pressed Composite
The density of the billet obtained by vacuum hot pressing is approximately 2.69 g/cm³, and the compactness reaches 99.6%. Figure 11 shows optical micrographs of the vacuum hot-pressed composite. It can be seen clearly that when the ball milling time is 2 h or 4 h, the graphene agglomerates and the dispersion is poor, whereas at 6 h the graphene is dispersed uniformly. At the same ball milling time, the dispersion of graphene milled at 300 r/min is clearly better than that milled at 200 r/min, showing the same trend as the SDV values in Table 3 and verifying that it is reasonable to use the SDV to represent the dispersion uniformity of the composites in this work.
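The compactness figure can be cross-checked with a rule-of-mixtures estimate. The sketch below assumes handbook densities of 2.70 g/cm³ for aluminum and roughly 2.2 g/cm³ for graphene nanoplates; these values and the inverse rule of mixtures are assumptions for illustration, not data from this work.

```python
# Rule-of-mixtures cross-check of the reported compactness (relative density).
RHO_AL, RHO_GNP = 2.70, 2.2   # g/cm^3 (assumed handbook values)
W_GNP = 0.005                 # mass fraction of GNP (0.5 wt.%)

# Inverse rule of mixtures for the theoretical density of the composite.
rho_theory = 1.0 / (W_GNP / RHO_GNP + (1.0 - W_GNP) / RHO_AL)

rho_measured = 2.69           # g/cm^3, hot-pressed billet (from the text)
compactness = rho_measured / rho_theory
print(f"theoretical density = {rho_theory:.3f} g/cm^3")
print(f"compactness = {compactness:.1%}")  # ~99.7%, close to the reported 99.6%
```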
Figure 12 shows the XRD pattern of the hot-pressed composites. There are four peaks, all characteristic of Al; no peaks of graphene or Al4C3 are detected, which may be due to the low graphene content and the limited elemental resolution of the X-ray diffractometer [30].
Mechanical Properties and Fracture Mechanism
Figure 13 shows the tensile curves of the 0.5 wt.% GNP/Al composite, and Figures 14 and 15 show its mean tensile strength and mean elongation. It can be seen from Figure 13a,b that, for both ball milling speeds of 200 r/min and 300 r/min, the tensile strength on the whole first increases with milling time. For 200 r/min, at a ball milling time of 6 h the tensile strength reaches its largest value of 141.7 MPa, together with the largest elongation. For 300 r/min, at a ball milling time of 6 h the elongation of the composite reaches its maximum of 19.9%, which is 39.8% lower than that of pure aluminum fabricated by the same process in this work (33.1%), and the tensile strength is 156.8 MPa, 56.6% higher than that of pure aluminum (100.1 MPa). When the milling time is short, the agglomerated graphene may act as crack initiation sites, so the tensile strength and elongation are poor. When the milling time is appropriate, the graphene works best, whereas if the time is too long, the graphene structure is destroyed, decreasing the strength and toughness of the composite [31]. Figure 16 shows the tensile fracture morphology of the composite: low-magnification SEM images are on the left and the corresponding high-magnification images on the right, with arrows indicating GNP. The interfacial bonding between aluminum and graphene is very important to the properties of the composites. The failure mechanisms of the composites mainly include debonding of the interface, destruction of the reinforcement, and pull-out of the reinforcement. During tensile loading, agglomerated graphene acts as a crack initiation point, degrading the strength of the composites [32]. From Figure 16a,c,e,g, it can be seen that at a ball milling speed of 200 r/min, the fracture surfaces have almost the same morphology at all milling times: a few large, deep dimples surrounded by many small dimples, with torn edges that are not bright. Figure 16b,d,f,h are high-magnification views near the large dimples. Discrete, intact graphene can be seen inside the relatively larger dimples, aligned with the tensile load; its interface with the matrix is poor, so it acts as a crack initiation site.
Graphene with a good interface to the aluminum matrix bears most of the load and forms bright torn edges. From Figure 16i,k,m,o, it can be seen that at a ball milling speed of 300 r/min, with increasing ball milling time, the dimples in the fracture surface gradually become larger and deeper, with GNP embedded along the torn edges; at the same time, a large number of small dimples with bright torn edges are distributed around the large dimples, indicating that the interfacial bonding between the matrix and the GNP is excellent. From Figure 16j, at a ball milling time of 2 h, the interface between the GNP and the aluminum is poor; the fracture tends toward brittle fracture, with dimples showing almost no tearing at the edges. From Figure 16k,m, at ball milling times of 4 h and 6 h, large, discrete graphene is rarely seen, indicating better embedding of the graphene in the matrix; many larger, deeper dimples appear in the fracture surface, and the torn edges are bright and thin. The composite therefore has the best toughness and nearly the largest tensile strength. From Figure 16o,p, at a ball milling time of 8 h, the dimple morphology is elongated, indicating that the maximum principal stress lies along the cross-section direction, and the tensile strength and toughness begin to decrease. In summary, from the comparison of the particle size and morphology of the composite powder as well as the mechanical properties and microstructure of the composite, a ball milling speed of 300 r/min and a ball milling time of 6 h are the best conditions in this study, which is basically consistent with the results of Xiao et al. [33]. In addition, the graphene ultrasonication time was 90 min and the stearic acid content was 1.5%.
Fracture Mode Analysis
To analyze the role of the graphene in reinforcing the composite, its distributions on the tensile fracture surface are compared in Figure 17. In the composite, the two-dimensional graphene sheet lies perpendicular to the stretching direction, parallel to it, or somewhere in between. When the two-dimensional plane of the graphene is parallel to the stretching direction (denoted GNPp in Figure 17a,b), the graphene sheets contribute most to the strength of the composite: their folds are spread out, and then some sheets break while others are pulled out of the matrix, as shown in Figure 17b,d. The spread and broken graphene can effectively bear most of the load, and a large number of torn dimples appear, as shown in Figure 17c, effectively improving the tensile strength of the composite [34]. When the two-dimensional plane of the graphene is perpendicular to the direction of the tensile force (denoted GNPv in Figure 17e,f), the intermolecular force between graphene sheets is small, cracks initiate between the graphene layers, and the load is borne mainly by the Al matrix, so a large number of sharp torn edges appear as cliffs on the fracture surface. The force the composite can bear is then greatly reduced, as shown in Figure 17e, and the strength of the graphene is far from being utilized [34,35]. Therefore, for a two-dimensional reinforcement such as graphene, reducing agglomeration and achieving uniform dispersion in the matrix are particularly important. For materials with pronounced high performance along a particular dimension, the two-dimensional lamellar direction of the graphene should be oriented along the stress direction by subsequent processing, such as extrusion, which will be discussed in another paper.
Conclusions
In this paper, 0.5 wt.% GNP/Al composites were prepared by ultrasonication + ball milling + cold pressing + vacuum hot pressing, and the microstructure of the 0.5 wt.% GNP/Al produced with different fabrication parameters was systematically studied. The main conclusions are as follows:
(1) The density of the 0.5 wt.% GNP/Al prepared by cold pressing + vacuum hot pressing is high, and the interfacial bonding is excellent. No obvious oxidation and no Al4C3 were found in the composite. Graphene distributed on the grain boundaries can effectively hinder grain growth during vacuum hot-pressing sintering and contributes to grain refinement.
(2) Ball milling time, ball milling speed, and ultrasonication time have a large influence on the performance of the 0.5 wt.% GNP/Al composites.
(3) Adding 0.5 wt.% graphene markedly increases the strength of the Al matrix. With a ball milling speed of 300 r/min, a ball milling time of 6 h, a B-P ratio of 5:1, a graphene ultrasonication time of 90 min, and a stearic acid content of 1.5%, the graphene nanoplates were uniformly distributed in the Al matrix without being destroyed. The composite had the best overall mechanical properties, with a tensile strength of 156.8 MPa, 56.6% higher than that of pure aluminum fabricated by the same process (100.1 MPa), while the elongation was 19.9%, 39.8% lower than that of pure aluminum (33.1%).
Host Plant Effects on Halyomorpha halys (Hemiptera: Pentatomidae) Nymphal Development and Survivorship
Abstract Halyomorpha halys (Stål) (Hemiptera: Pentatomidae) is a highly polyphagous invasive species and an important pest of orchard crops in the United States. In the Mid-Atlantic region, wild hosts of H. halys are common in woodlands that often border orchards, and H. halys movement from them into orchards poses ongoing management issues. To improve our understanding of host plant effects on H. halys populations at the orchard–woodland interface, nymphal survivorship, developmental duration, and adult fitness (size and fresh weight) on apple (Malus domestica Borkh.), peach (Prunus persica (L.) Batsch), Tree of Heaven (Ailanthus altissima (Mill.) Swingle), and northern catalpa (Catalpa speciosa (Warder)) were examined in laboratory studies. Specifically, we investigated nymphal performance on the foliage and fruiting structures of those hosts and on single- versus mixed-host diets, as well as the effects of host phenology on their suitability. Nymphal performance was poor on a diet of foliage alone, regardless of host. When fruiting structures were combined with foliage, peach was highly suitable for nymphal development and survivorship, whereas apple, Tree of Heaven, and catalpa were less so, although nymphal survival on Tree of Heaven was much greater later in the season than earlier. Mixed-host diets yielded increased nymphal survivorship and decreased developmental duration compared with diets of suboptimal single hosts. Adult size and weight were generally greater for adults that developed from nymphs reared on mixed diets. The implications of our results for the dispersal behavior, establishment, and management of H. halys are discussed.
Plant species vary widely in their suitability as food for polyphagous insect herbivores (Scriber 1984) and can affect insect development and survival differently via their chemical (e.g., nutritional quality, allelochemicals) or physical (e.g., trichomes, tissue hardness) characteristics, which may vary with plant age and phenology (Bernays and Chapman 1994). Polyphagous insect herbivores may complete development on a single host, but their fitness is generally enhanced when they feed and develop on multiple plant species (Hägele and Rowell-Rahier 1999, Miura and Ohsaki 2004). For example, the polyphagous hemipteran Bemisia tabaci (Gennadius) exhibited higher survivorship and increased fecundity on a mixed diet of cotton, cucumber, tomato, cabbage, and kidney beans than on any of those plants alone (Zhang et al. 2014). Earlier studies found that the fitness advantages of mixed diets were associated with nutritional complementarity and/or the dilution of allelochemicals (Bernays et al. 1994, Hägele and Rowell-Rahier 1999).
Research on the relative suitability of plant species to polyphagous insect pests that utilize both cultivated and wild hosts has important implications for understanding aspects of pest biology and ecology, such as their movement in the landscape, host use patterns, and population dynamics, as well as the susceptibility of economic crops to attack (Panizzi 1997). Such studies have yielded beneficial information about basic pest biology, informed the development of ecologically based pest management options (Panizzi and Parra 2012), and are especially relevant to recently invasive economic pests, about which there are often important knowledge gaps regarding their biology and ecology in the invaded range. The brown marmorated stink bug, Halyomorpha halys (Stål) (Hemiptera: Pentatomidae), is a classic example of this.
Halyomorpha halys is an invasive pest from Asia that did not become a major pest until the late 2000s (Leskey et al. 2012a), following its initial detection in Allentown, PA, about a decade earlier (Hoebeke and Carter 2003). Since its widespread outbreak in the Mid-Atlantic region of the United States in 2010, H. halys has caused significant economic damage to various fruit, vegetable, and field crops (Rice et al. 2014). Its effects in Mid-Atlantic fruit orchards have been especially pronounced via reductions in fruit yield and quality and changed pest management practices (Leskey et al. 2012b) that have resulted in secondary pest outbreaks. Halyomorpha halys is reported to utilize well over 100 plant species as feeding or reproductive hosts (Rice et al. 2014), many of which are deciduous trees that grow in forested areas that often border commercial orchards in this region (Bakken et al. 2015). The abundance of wild hosts in these woodlands, the development of large H. halys populations on some of them, and the high dispersal capacity of H. halys adults and nymphs (Wiman et al. 2014) combine to create pest pressure in commercial orchards through most or all of the fruiting period (Joseph et al. 2015). Moreover, Funayama (2006) showed that the fitness of H. halys nymphs was positively affected when they developed on a mixed diet.
Investigation of the effects of different wild and cultivated host plants on H. halys development and survivorship in its invaded range should further our understanding of their relative contributions to local H. halys population densities and the risk to economic crops, and may aid the development of ecologically based pest management tactics against this important pest. Here, we report laboratory experiments that examined the effects of selected wild and tree fruit hosts in Virginia on the survivorship and developmental duration of H. halys nymphs and aspects of adult fitness (size and fresh weight). Specifically, the suitability of apple (Malus domestica Borkh.), peach (Prunus persica (L.) Batsch), Tree of Heaven (Ailanthus altissima (Mill.) Swingle), and northern catalpa (Catalpa speciosa (Warder)) was examined, focusing on vegetative and reproductive structures, the effects of single- versus mixed-host diets, and changes in host suitability during the growing season.
Insects
Adult male and female H. halys collected from natural overwintering aggregations in northern Virginia in April 2012 and February 2013 were placed in black plastic bags with crumpled newspaper and held in a dark room at 4 °C at Virginia Tech's Alson H. Smith, Jr. Agricultural Research and Extension Center (AHSAREC), Winchester, VA. In mid-April of each year, ~30 male and ~30 female adults collected during the same year were placed in each of several 30.48-cm³ screened cages (BioQuip Products, Inc., Rancho Dominguez, CA) in a laboratory room at ~25 °C, ~70% RH, and a photoperiod of 16:8 (L:D) h provided by overhead banks of 34-W fluorescent lights (Ace Hardware Corp., Oak Brook, IL). For the experiment that began in August 2013, fifth instars and adult H. halys were collected from the field in July and reared under the same conditions. Cages were provisioned regularly with popcorn kernels, barley, buckwheat, soybeans, dried figs, dry-roasted unsalted peanuts, sun-dried tomatoes, and water. Oviposition substrates in each cage included paper towel on the cage floor and three to four freshly excised compound leaves of Tree of Heaven in a water-filled vase. Egg masses produced between late May and early June and in early August were used in the early- and late-season experiments, respectively. Halyomorpha halys females most often deposit eggs in clutches of 28 (Nielsen et al. 2008). Egg masses (≤1 d old) were removed in situ, held in groups of five in 100 by 15-mm petri dishes (Thermo Fisher Scientific Inc., Pittsburgh, PA) in the same room, and monitored daily for hatch. Those that hatched within a 5-d period and had ~28 first instars were assigned to the diet treatments. First-instar H. halys aggregate around and on the empty egg mass before dispersing (Taylor et al. 2014) and thus were easily transferred to cages as cohorts from each egg mass.
Host Plant Sources
'Smoothee Golden' apple and 'Redhaven' peach trees growing at the AHSAREC were the sources of cultivated host plant material. These trees were treated with fungicides, but not insecticides, during the growing season. Northern catalpa (hereafter referred to as catalpa) and Tree of Heaven growing on or near the AHSAREC property were selected as the wild hosts, based primarily on their inclusion in the host list for H. halys (see StopBMSB.org) but also on reports by Bakken et al. (2015), anecdotal observations of large populations of H. halys nymphs and adults on both species, and the results of a census showing that Tree of Heaven was the most common deciduous tree species growing at the edge of woodlands adjacent to orchards in this region (Acebes-Doria, unpublished data). Like H. halys, Tree of Heaven is an invasive species from Asia (Kowarik and Säumel 2007), and catalpa is native to eastern North America (see http://dendro.cnre.vt.edu).
Freshly excised foliage and reproductive structures (flowers, fruit, or seed pods) of apple, peach, Tree of Heaven, and catalpa were offered to nymphs. The reproductive structures used reflected the stage of development of each host plant in the field at the time each experiment was conducted. From early to mid-June, immature apple and peach fruit (three to four per replicate) and foliage, Tree of Heaven flowers and foliage, and catalpa flowers and foliage were used. The volume of Tree of Heaven and catalpa flowers offered was comparable with the total volume of the apple and peach fruit offered. Later in the season, one to two larger apple and peach fruit, with foliage, were offered. At that time, three catalpa seed pods (10.16-12.70 cm in length) with foliage and Tree of Heaven samaras with foliage were offered. The volume of seed pods and samaras offered was approximately the same as in the other treatments.
The plant materials were washed thoroughly under running tap water to remove contaminants (e.g., fungicide residues, other arthropods, etc.). The foliage offered to nymphs included three terminal twigs of apple and peach with six to eight leaves (20.32-25.40 cm in length), two apical branches of catalpa with two to three leaves (15.24-20.32 cm in length), and three compound leaves of Tree of Heaven (25.4-30.48 cm in length). All shoots offered, including those with flowers, samaras, or catalpa pods, were inserted into two to four holes (8 mm diameter) in the plastic lid of plastic containers (8 cm height, 115 mm diameter) containing water, while peach and apple fruit were placed on the cage floor. All plant tissues were replaced two to three times per week.
Single-Host Diets
A completely randomized design with two factors, host plant species and plant tissue (foliage versus foliage plus reproductive structures), was used in an experiment between late May and early August 2012. Cohorts of first instars, each with ~28 individuals, were assigned individually to seven replicates of each diet treatment. Each cohort of first instars on the egg mass was placed on or near the plant material inside a 30.48 by 30.48 by 30.48-cm cage (BioQuip Products, Inc., Rancho Dominguez, CA) with ad libitum access to the food source and water. Water was provided in a plastic container with a cotton wick inserted through the lid. Cages were inspected daily for the presence of exuviae, which indicated the molt between instars, and the specific instars were identified following Hoebeke and Carter (2003). The numbers of live and dead nymphs and live adults were also recorded. Stage-specific survivorship and nymphal developmental period were recorded for each diet treatment.
Single- Versus Mixed-Host Diets Early and Late in the Growing Season
The onset of these experiments coincided with two key points in the seasonal phenology of H. halys populations in the Mid-Atlantic region. Peak emergence of H. halys adults from overwintering sites occurs between approximately mid-May and early June (Bergh and Leskey, unpublished data), and F1 generation adults are estimated to be reproducing in early to mid-August based on degree-day accumulations (Leskey et al. 2012c; Bakken et al. 2015). Experiments initiated in late May (early season) and mid-August 2013 (late season) included the following treatments in a completely randomized design with host plant as the factor: 1) apple, 2) peach, 3) Tree of Heaven, 4) catalpa, 5) apple plus Tree of Heaven, 6) apple plus Tree of Heaven plus peach, and 7) apple plus Tree of Heaven plus peach plus catalpa. Host plants in the mixed diet treatments were provided all at once. Hereafter, we refer to the diet treatments that consisted of three or four hosts as the three-host and four-host diets, respectively. All treatments included foliage and fruiting structures, and each treatment was replicated four times.
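The degree-day accumulation mentioned above can be illustrated with a minimal sketch of the simple-average method. The base temperature of 14 °C here is an illustrative assumption, not a value used or reported in this study, which relied on published degree-day models.

```python
# Simple-average degree-day accumulation, of the kind used to estimate
# insect phenology from daily temperature records.
BASE_TEMP_C = 14.0  # assumed lower developmental threshold (illustrative)

def daily_degree_days(t_max, t_min, base=BASE_TEMP_C):
    """Simple-average method: mean daily temperature above the base."""
    return max(0.0, (t_max + t_min) / 2.0 - base)

def accumulate(daily_max_min, base=BASE_TEMP_C):
    """Running degree-day total over a sequence of (t_max, t_min) days."""
    total = 0.0
    for t_max, t_min in daily_max_min:
        total += daily_degree_days(t_max, t_min, base)
    return total

# Example: ten hypothetical days of max/min temperatures (degrees C).
weather = [(28, 16), (30, 18), (25, 14), (27, 15), (29, 17),
           (31, 19), (26, 13), (28, 16), (30, 18), (32, 20)]
print(f"accumulated degree-days: {accumulate(weather):.1f}")
```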
As in the previous experiment, cohorts of ~28 H. halys first instars on the egg mass were placed individually in 30.48 by 30.48 by 30.48-cm rearing cages provisioned ad libitum with the diet treatment and water. Cages were checked daily to monitor the development and survivorship of nymphs from each egg mass. Stage-specific survivorship and nymphal developmental duration were recorded for nymphs from each cohort. Within 48 h after molting to the adult stage, the fresh weight of all adults produced was measured (nearest 0.1 mg) using a digital scale (AB54-S, Mettler Toledo, Columbus, OH) and their pronotum width was measured (nearest 0.01 mm) with a digital caliper (ROK International Industry Co., Ltd., Guangdong, China). Wild adult H. halys (n = 30) collected from Tree of Heaven trees at the AHSAREC in early September 2013 also were weighed and measured for comparison with adults obtained from the late-season laboratory experiments.
Data Analyses
All analyses were conducted using JMP® Pro version 11 (SAS Institute Inc., Cary, NC, 2007) and outcomes were considered significant at P < 0.05. Data that did not satisfy the assumptions of parametric tests were transformed using the arcsine-square root for percentage data and log(x+1) for nymphal developmental duration, pronotum width, and fresh weight. Two-way analysis of variance (ANOVA) was used to compare nymphal stage-specific survivorship and developmental duration among the diets that included foliage alone and foliage plus reproductive structures. One-way ANOVA was used to analyze survivorship and developmental duration among the diets during the early- and late-season experiments. Student's t-test was used to compare nymphal survivorship between the early- and late-season experiments for each diet. One-way ANOVA was used to compare the effect of host plant diet on the fresh weight and pronotum width of the females and males that developed to the adult stage during the early- and late-season experiments, including measurements from the field-collected adult males and females, which were compared only with the adults reared from the late-season experiment. Student's t-test was used to compare fresh weight and pronotum width between the sexes, based on pooled data across all diet treatments. Multiple mean comparisons used Tukey's post hoc honestly significant difference test.
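The analyses above were run in JMP; purely as an illustration of the transformation and testing workflow, the following Python sketch (our construction, not the authors' code) applies the arcsine-square-root and log(x+1) transformations and runs a two-way ANOVA with Tukey's HSD. The data frame, its column names, and all values are hypothetical placeholders.

```python
# Illustrative re-expression of the described workflow; hypothetical data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical replicate-level data: proportion surviving and development time.
df = pd.DataFrame({
    "host":     ["apple", "apple", "peach", "peach", "catalpa", "catalpa"] * 2,
    "tissue":   ["foliage"] * 6 + ["foliage_plus_fruit"] * 6,
    "survival": [0.10, 0.15, 0.60, 0.55, 0.20, 0.25,
                 0.35, 0.40, 0.90, 0.85, 0.45, 0.50],
    "dev_days": [55, 57, 40, 42, 52, 54, 50, 49, 36, 37, 47, 48],
})

# Arcsine-square-root transform for percentage/proportion data.
df["surv_t"] = np.arcsin(np.sqrt(df["survival"]))
# log(x + 1) transform for developmental duration.
df["dev_t"] = np.log(df["dev_days"] + 1)

# Two-way ANOVA: host plant x plant tissue, with interaction.
model = smf.ols("surv_t ~ C(host) * C(tissue)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's post hoc HSD for multiple mean comparisons among hosts (alpha = 0.05).
print(pairwise_tukeyhsd(df["dev_t"], df["host"], alpha=0.05))
```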
Single-Host Diets
In this experiment, 94.45 ± 1.39% of the eggs from each mass hatched. As first-instar H. halys do not feed on plant tissue, their survivorship was not significantly affected by diet treatment (Table 1). There were significant effects of host plant and the inclusion of reproductive structures on the survivorship of second through fifth instars (Table 1). In general, the percentage of nymphs that survived each of these instars was much higher on diets that combined foliage and reproductive structures than on foliage alone, although this effect was much less pronounced on Tree of Heaven, on which the fewest nymphs survived the second instar.
As the developmental duration of first instars was not significantly affected by diet treatment (F7,48 = 0.30, P = 0.95), the duration between the second instar and adult eclosion was compared. Moreover, as diets of foliage alone yielded very few adults, developmental duration was analyzed only for treatments that included fruiting structures. Among those, the developmental duration varied significantly among the treatments (Table 1) and was shortest on Tree of Heaven and significantly shorter on peach than on apple or catalpa, which did not differ from each other.
Single- Versus Mixed-Host Diets Early and Late in the Growing Season: Survivorship
In this experiment, 93.61 ± 1.25% of eggs from each mass hatched. As in the previous experiment, diet treatment had no effect on the survivorship of first instars during the early- or late-season experiments (Table 2). In the early-season study, there were numerical, but not statistically significant, diet effects on second-instar survivorship, with the lowest survivorship on apple. In the late-season study, apple also resulted in second-instar survivorship that was significantly or numerically lower than on the other diets, which did not differ.
The early- and late-season experiments yielded significant diet effects on survivorship during the third through fifth instars (Table 2). In the early-season study, the highest nymphal survivorship through the fifth instar was on peach, apple plus Tree of Heaven, and the three- and four-host diets; the lowest survivorship was on apple, Tree of Heaven, and catalpa. In the late-season study, apple and catalpa again yielded the lowest survivorship through the fifth instar, whereas the other diets yielded >69% survivorship. In both experiments, there was a pronounced decrease in survivorship on catalpa between the fourth and fifth instars that was not observed on the other diets.
Between the early- and late-season experiments, nymphal survival to the adult stage increased from 20 to 72% on Tree of Heaven (t = 6.32, df = 6, P = 0.001); survivorship to the adult stage on the other diets did not differ significantly between the early- and late-season studies.
Single- Versus Mixed-Host Diets Early and Late in the Growing Season: Developmental Duration and Adult Size and Fresh Weight
There was no effect of diet on the developmental duration of first instars (Table 3). Significant diet treatment effects on the developmental duration of second, fourth, and fifth instars were recorded during the early-season experiment, while in the late-season study, there were significant effects on second, third, and fifth instars. In the early season, total developmental period was significantly shorter on peach and the three- and four-host diets than on apple or catalpa, with intermediate durations on the others. The late-season study yielded similar results, with significantly shorter total developmental duration on peach, Tree of Heaven, and the four-host diet than on apple or catalpa, with intermediate periods among the other diets.
Based on pooled data across all diet treatments, mean adult pronotum width was significantly greater for females (7.95 ± 0.02 SE mm) than males (7.17 ± 0.02 SE mm; t = −26.18, df = 878, P < 0.0001), and females were significantly heavier (109.37 ± 1.10 SE mg) than males (87.48 ± 0.86 SE mg; t = −15.61, df = 878, P < 0.0001). Diets that yielded fewer than three adult males or females were excluded from statistical comparisons. In both the early- and late-season experiments, the fresh weight and pronotum width of both sexes varied significantly among diets (Table 4). In the early season, Tree of Heaven and apple plus Tree of Heaven yielded females that were heaviest and had the largest pronotum width, while apple yielded the lightest and smallest females. The heaviest and largest males also were recorded from Tree of Heaven and apple plus Tree of Heaven, while the lightest and smallest males were from catalpa and apple. In the late-season study, the heaviest females and males were recorded from peach, although the three- and four-host diets also produced relatively heavy individuals of both sexes (Table 4). As in the early-season experiment, females and males from apple and catalpa were the lightest. The three- and four-host diets produced females with the largest pronotum width and the smallest females were from catalpa and apple, while male pronotum width was largest on the four-host diet and smallest on apple. Field-collected females and males had statistically or numerically comparable fresh weights and pronotum widths to those from the three- and four-host diet treatments during the late-season experiment (Table 4).
Discussion
The survivorship and developmental duration of laboratory-reared H. halys nymphs were significantly affected by host plant tissue, host plant species, host phenology, and diet mixing. Like other pentatomid species, such as the southern green stink bug, Nezara viridula (L.), and the Neotropical brown stink bug, Euschistus heros (F.) (Panizzi 2000), H. halys nymphs have been known to feed on nonreproductive parts of plants such as the stem (Martinson et al. 2013) and foliage (Hoebeke and Carter 2003). For all plants examined, we found that diets consisting only of stems and foliage were unsuitable, and that most nymphs did not survive the second instar on them. Martinson et al. (2015) showed that the presence of fruit on ornamental trees in a nursery strongly influenced the abundance of H. halys nymphs and adults and that fruit removal had a profound negative effect on H. halys counts. Complementing their results, we found that combining foliage and fruiting structures of peach dramatically increased H. halys nymphal survivorship and reduced their developmental duration compared with foliage alone. For apple and catalpa, combining fruiting structures with foliage led to only marginal improvements in nymphal survivorship both early and late in the season. Tree of Heaven foliage plus samaras produced the same result early in the season, but nymphal survivorship on that diet increased in the late-season study.
With regard to the cultivated plant species offered as single-host diets that combined foliage and fruit, the relatively poor suitability of apple for H. halys nymphs concurs with Funayama (2002), who reported that nymphs reared on apple had low survivorship and developed poorly. In our study, we consistently found that nymphal survivorship was higher and their developmental duration was shorter on peach than on apple, indicating a clear difference in the suitability of these two economically important hosts for nymphal development. The high suitability of peach for nymphal development and survival conforms to field observations that peach can support large H. halys populations from late May through harvest (Nielsen and Hamilton 2009) and often incurs higher levels of injury early in the growing season than apple (Leskey et al. 2012b, Joseph et al. 2015). While apples can sustain substantial injury from H. halys (Nielsen and Hamilton 2009, Leskey et al. 2012b, Joseph et al. 2015), this may be associated more with transient visits and feeding bouts by adults and nymphs than with resident populations; Morrison III et al. (2015) used harmonic radar to show that tagged H. halys adults remained on apple trees for only ≈3.5 h. A survey of 78 species of native and invasive trees and herbaceous shrubs in urban, rural, and forested areas in the eastern United States by Bakken et al. (2015) revealed that catalpa and Tree of Heaven were among the species that consistently yielded comparatively high counts of H. halys adults and nymphs. In China, Tree of Heaven is considered a preferred host of H. halys (reviewed in Lee et al. 2013). A possible explanation for the relatively poor performance of H. halys nymphs on catalpa and Tree of Heaven, discussed earlier, is that members of the Simaroubaceae (e.g., Tree of Heaven) and Bignoniaceae (e.g., catalpa) produce secondary metabolites with insecticidal properties (Tsao et al. 2002, De Feo et al. 2009, Castillo and Rossini 2010). The pronounced reduction in survivorship between the fourth and fifth instars of nymphs reared on catalpa may have been associated with the cumulative effects of these allelochemicals during their development. As well, the marked seasonal difference in the suitability of Tree of Heaven for nymphal survival may have been due to seasonal variation in allelochemical concentration and/or increasing nutritional value of the maturing samaras.
Diet mixing was clearly beneficial to the survivorship of H. halys nymphs. Funayama (2006) reported similar results when carrots were added to a peanut and soybean diet for H. halys nymphs; indeed, mixed diets for rearing H. halys have been universally adopted (Medal et al. 2012, Leskey and …). In the early-season study in 2013, the diet that combined apple and Tree of Heaven, both of which were suboptimal as single hosts, yielded improved nymphal survivorship through the fifth instar, although this effect was not found for the same diet in the late-season study, perhaps due to the apparently increased suitability of Tree of Heaven. Nymphs of the polyphagous grasshopper, Parapodisma subastris (Huang), reared on diets of two, four, and six suboptimal hosts also showed higher survivorship compared with those on a diet of a single suboptimal host (Miura and Ohsaki 2004). The survivorship of P. subastris nymphs reared on all mixed diets of suboptimal hosts was statistically equivalent to that on single diets of the superior hosts, as was the case for H. halys reared on mixed diets of suboptimal hosts versus those on peach alone.
Overall, total developmental durations from the second through fifth instars were considerably longer than reported from H. halys rearing studies under similar environmental conditions by Nielsen et al. (2008) and Medal et al. (2012). Both previous studies used a bean, peanut, corn, and carrot diet, resulting in 34-d and 37-d developmental durations from the second through fifth instars, respectively. The underlying reasons for the differences between the present and previous experiments are unknown, but may be associated with differences in the nutritional quality of the diets used.
Previous studies on N. viridula showed that adult size was positively correlated with longevity, female fecundity (McLain et al. 1990), and winter survival (Todd 1989). Moreover, studies on N. viridula (Kester and Smith 1984) and B. tabaci (Zhang et al. 2014) found that nymphs reared on mixed diets produced longer-lived adults and more fecund females. These results suggest that the larger and heavier adult H. halys from nymphs reared on suitable diets, whether of single or mixed hosts, may also have improved longevity and fecundity. The size and weight of adults from nymphs reared on mixed diets in the late-season study in 2013 did not differ significantly from field-collected adults in September 2013, which had likely also developed from nymphs that had fed on a range of host plants. As well, Todd's (1989) finding of higher overwintering survivorship in larger than in smaller N. viridula adults suggests that the bigger and heavier H. halys adults from nymphs reared on mixed diets or peach later in the season also may be better able to overwinter successfully. Further investigation of the mechanisms behind the relative suitability of the different host plants in our study and the benefits of diet mixing on H. halys development and survival may further improve our understanding of its host utilization at the orchard-woodland interface. Toward that end, Acebes-Doria (unpublished data) quantified the nutrient content in the adults from this study that had developed from nymphs reared on the different diets, revealing new information about host plant effects on H. halys nutrition.
Despite the reports (Bakken et al. 2015) and observations of Tree of Heaven and catalpa supporting large populations of H. halys in the eastern United States, our results suggest that H. halys nymphs may need to disperse from these trees during their development to find and feed on other plants. In the laboratory, Lee et al. (2014) demonstrated that H. halys nymphs can walk up to 41 m in 1 h and up to 8 m in 15 min on smooth horizontal and vertical surfaces, respectively. In a follow-up field experiment using pheromone-baited traps near a woodlot, Lee et al. (2014) found that marked nymphs walked over 20 m on a mowed grassy lawn within 4-5 h. Acebes-Doria et al. (2016) captured H. halys nymphs in traps designed to monitor their upward and downward movement on tree trunks. Results from Tree of Heaven revealed that second through fifth instars were captured walking up and down tree trunks, but that more nymphs were captured while walking up than down, leading them to speculate that nymphs may also disperse from the tree canopy by dropping (Acebes-Doria et al. 2016). Follow-up field studies using trunk traps have also examined seasonal patterns of the upward and downward walking dispersal of H. halys nymphs on cultivated and wild tree hosts at the orchard-woodland interface.
The relative suitability of available hosts may affect the extent to which H. halys nymphs disperse among hosts at the orchard-woodland interface. Our data suggest that nymphs from eggs laid on apple, catalpa, or Tree of Heaven early in the season would be more likely to disperse to other hosts than those on peaches. Host phenology and the presence and maturity of fruiting structures appear to strongly influence seasonal patterns of host use by adults and nymphs (Martinson et al. 2015). Moreover, Bakken et al. (2015) reported that among the 78 plant species surveyed, H. halys egg masses, nymphs, and adults were found on 34 species, including Tree of Heaven and catalpa, while only nymphs and adults were found on 41 species and none were detected on three species. These findings further support the likelihood that H. halys nymphs and adults disperse among available plants, that some species serve as its feeding and reproductive hosts, and that others are only feeding hosts or are unsuitable.
At the orchard-woodland interface, many known wild and tree fruit hosts of H. halys are commonly found growing close to one another (Acebes-Doria, unpublished data) and well within the dispersal distance of which nymphs are capable.
Consequently, additional and very relevant questions that follow from the data reported here include: 1) how long do nymphs remain on a particular host, 2) do nymphs that disperse from one host species move to the same or different species, 3) do nymphs disperse from wild hosts into orchards, and if so, 4) how far into the orchard do nymphs move? Moreover, the potential geographic distribution of H. halys based on niche modeling indicates that much of the Eastern United States and portions of its Pacific coastal regions could be successfully colonized by this invasive species (Zhu et al. 2012). Our results indicate that within a particular ecosystem, available host plants also could have a major impact on the survivorship and population growth of H. halys, particularly if available hosts are suboptimal. The apparent need for diet mixing due to suboptimal hosts could limit H. halys establishment and build-up in areas that lack suitable host diversity. Indeed, some suboptimal hosts may be considered a "dead end" for nymphal survivorship and development if alternate hosts that provide additional nutritional benefits are not available nearby.
In summary, H. halys nymphs are commonly found on wild hosts adjacent to fruit orchards and have the capacity to disperse between the two habitats. Indeed, transects of pheromone-baited traps that extended from woodlots into apple orchards revealed highest captures of adults and nymphs in traps at the orchard and woodland edges (Bergh, unpublished data). In addition, field studies using trunk traps deployed on apple and peach trees and known tree hosts of H. halys at the orchard-woodland interface revealed movement of H. halys nymphs both up and down the trees during much of the growing season (Acebes-Doria, unpublished data). As well, more H. halys injury has been recorded from apples in trees at orchard edges adjacent to woodlands than from orchard interior trees (Leskey et al. 2012b, Joseph et al. 2014). These findings and our demonstration of the effects of host plant species and diet mixing on nymphal performance suggest the likelihood that H. halys nymphs move between these two habitats during much of the growing season and that effective management programs targeting H. halys in orchards next to woodlands might be perimeter-driven. Restricting applications of the most efficacious insecticides against H. halys to the orchard perimeter, whether via border sprays (Blaauw et al. 2014) or sprays to pheromone-baited trees at intervals along the borders ("attract-and-kill"; Morrison III et al. 2015), should translate to fewer secondary pest problems throughout the orchard and facilitate a return to the more ecologically and economically sustainable programs that were widely practiced before H. halys became an issue.
"Biology",
"Environmental Science"
] |